Quote for the day:
“When your life flashes before your
eyes, make sure you’ve got plenty to watch.” -- Anonymous

Enterprises are navigating an environment where the complexity of IT is
increasing exponentially. Hybrid work requires consistent connectivity across
homes, offices, and campuses. Edge computing and IoT generate massive volumes of
data at distributed sites. Security risks escalate as the attack surface grows.
Traditional, hardware-centric approaches leave IT teams struggling to keep up.
Managing dozens or hundreds of controllers, patching firmware manually, and
troubleshooting issues site by site is not sustainable. Cloud-managed networking
changes that equation. By centralizing management, applying AI-driven
intelligence, and extending visibility across distributed environments, it
enables IT to shift from reactive firefighting to proactive strategy. ...
Enterprises adopting cloud-managed networking are making a decisive shift from
complexity to clarity. Success requires more than technology alone. It demands a
partner that understands how to translate advanced capabilities into measurable
business outcomes. ... Cloud-managed networking is not just another IT trend. It
is the operating model that will define enterprise technology for the next
decade. By elevating the network from infrastructure to strategy, it enables
organizations to move faster, stay secure, and innovate with confidence.

In many respects, shadow AI is a subset of a broader shadow IT problem. Shadow
IT is an issue that emerged more than a decade ago, largely emanating from
employee use of unauthorized cloud apps, including SaaS. Lohrmann noted that
cloud access security broker (CASB) solutions were developed to deal with the
shadow IT issue. These tools are designed to provide organizations with full
visibility of what employees are doing on the network and on protected devices,
while only allowing access to authorized instances. However, shadow AI presents
distinct challenges that CASB tools are unable to adequately address.
“Organizations still need to address other questions related to licensing,
application sprawl, security and privacy policies, procedures and more ..,”
Lohrmann noted. Key differences between shadow IT and shadow AI lie in the
nature of the data, the speed of adoption, and the complexity of the underlying
technology. In addition,
AI is often integrated into existing IT systems, including cloud applications,
making these tools more difficult to identify. Chuvakin added, “With shadow IT,
unauthorized tools often leave recognizable traces – unapproved applications on
devices, unusual network traffic or access attempts to restricted services.
Shadow AI interactions, however, often occur entirely within a web browser or
personal device, blending seamlessly with regular online activity or not leaving
any trace on any corporate system at all.”

Melding IT and OT networking and security is not a new idea, but it’s one that
has seen growing attention from Cisco. ... Cisco also added a new technology
called AI-powered asset clustering to its Cyber Vision OT management suite.
Cyber Vision keeps track of devices connected to an industrial network, builds
a real-time map of how these devices talk to each other and to IT systems, and
can detect abnormal behavior, vulnerabilities, or policy violations that could
signal malware, misconfigurations, or insider threats, Cisco says. ... Another
significant move toward IT/OT integration is the planned unification of the
management consoles for Cisco’s Catalyst and Meraki networks. That
combination will allow IT and OT teams to see the same dashboard for
industrial OT and IT enterprise/campus networks. Cyber Vision will feed into
the dashboard along with other Cisco management offerings such as
ThousandEyes, which gives customers a shared inventory of assets, traffic
flows and security. “What we are focusing on is helping our customers have the
secure networking foundation and architecture that lets IT teams and
operational teams kind of have one fabric, one architecture, that goes from
the carpeted spaces all the way to the far reaches of their OT network,”
Butaney said.
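The device-mapping idea described above — learn which assets normally talk to each other, then flag communication that falls outside that baseline — can be sketched generically. This is an illustration of the concept only, not Cisco Cyber Vision's actual implementation; the device names and API are invented for the example:

```python
from collections import defaultdict

class FlowBaseline:
    """Toy baseline-based flow anomaly detector for an industrial network."""

    def __init__(self):
        # source device -> set of destinations it normally talks to
        self.known = defaultdict(set)

    def learn(self, flows):
        """Record observed (src, dst) pairs as the normal communication map."""
        for src, dst in flows:
            self.known[src].add(dst)

    def anomalies(self, flows):
        """Return flows that fall outside the learned baseline."""
        return [(s, d) for s, d in flows if d not in self.known.get(s, set())]

baseline = FlowBaseline()
# Learn normal traffic during a known-good observation window.
baseline.learn([("plc-1", "hmi-1"), ("plc-1", "historian")])
# Later, a flow to an unknown destination stands out immediately.
print(baseline.anomalies([("plc-1", "hmi-1"), ("plc-1", "external-host")]))
```

A real system would add time windows, protocol awareness, and aging of stale entries, but the core idea — baseline first, then diff — is the same.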

Most organizations globally include criminal record checks in their
pre-employment screening. Employment and education verifications are also
common, especially in EMEA and APAC. ... “Employers that fail to strengthen
their identity verification processes or overlook recurring discrepancy patterns
could face costly consequences, from compliance failures to reputational harm,”
said Euan Menzies, President and CEO of HireRight. ... More than three-quarters
of businesses globally found at least one discrepancy in a candidate’s
background over the past year. Thirteen percent reported finding one discrepancy
for every five candidates screened. Employment verification remains the area
where most inconsistencies are discovered, especially in APAC and EMEA. These
discrepancies range from minor errors like incorrect dates to more serious
issues such as fabricated job histories. ... Companies are increasingly adopting
post-hire screening to address risks that emerge after someone is hired. In
North America, only 38 percent of companies now say they do no post-hire
screening, a sharp drop from 57 percent last year. Common post-hire checks
include driver monitoring and periodic rescreening for regulated roles. These
efforts help companies catch new issues such as undisclosed criminal activity,
changes in legal eligibility to work, or evolving insider threats.

Some LLMs appear to be designed to encourage long-lasting conversation loops,
with answers often spurring another prompt. ... “When an individual engineer is
prompting an AI, they get a pretty good response pretty quick,” he says. “It
gets in your head, ‘That’s pretty good; surely, I could get to perfect.’ And you
get to the point where it’s the classic sunk-cost fallacy, where the engineer is
like, ‘I’ve spent all this time prompting, surely I can prompt myself out of
this hole.’” The problem often happens when the project lacks definitions of
what a good result looks like, he adds. “Employees who don’t really understand
the goal they’re after will spin in circles not knowing when they should just
call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs
make us feel like if we just tweak that last prompt a little bit, we’ll get
there.” ... Govindarajan has seen some IT teams get stuck in “doom loops” as
they add more and more instructions to agents to refine the outputs. As
organizations deploy multiple agents, constant tinkering with outputs can slow
down deployments and burn through staff time, he says. “The whole idea of
doomprompting is basically putting that instruction down and hoping that it
works as you set more and more instructions, some of them contradicting with
each other,” he adds. “It comes at the sacrifice of system intelligence.”
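A practical guard against the loops Farmer and Govindarajan describe is to agree on an acceptance threshold and an iteration cap before prompting starts, so "done" is defined up front. A minimal sketch of that pattern — the `generate` and `evaluate` callables and the threshold are placeholders you would define per project:

```python
def refine_until_good_enough(generate, evaluate, max_iters=5, threshold=0.8):
    """Iterate prompt refinement, but stop at 'good enough' or a hard cap.

    generate(attempt) -> candidate output for iteration `attempt`
    evaluate(output)  -> score in [0, 1] against the agreed criteria
    """
    best_output, best_score = None, -1.0
    for attempt in range(max_iters):
        output = generate(attempt)
        score = evaluate(output)
        if score > best_score:
            best_output, best_score = output, score
        if score >= threshold:
            break  # meets the predefined definition of "good" -- stop tweaking
    return best_output, best_score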

“We are rapidly running out of public data that is credible and usable. More and
more enterprises will start to assign value to their data and go beyond
partnerships to monetize it. For example, wind measurements captured by a wind
turbine company could be helpful to many businesses that are not competitors,”
said Olga Kupriyanova, principal consultant of AI and data engineering at ISG.
... "We’re entering a defining moment in AI where access to reliable, scalable,
and ethical data is quickly becoming the central bottleneck, and also the most
valuable asset. As legal and regulatory pressure tightens access to public data,
due to copyright lawsuits, privacy concerns, or manipulation of open data
repositories, enterprises are being forced to rethink where their AI advantage
will come from,” said Farshid Sabet, CEO and co-founder at Corvic AI, developer
of a GenAI management platform. ... The economic consequences of such data loss
are already visible. Analysts estimate that U.S. public data underpinned nearly
$750 billion of business activity as recently as 2022, according to the
Department of Commerce. The loss of such data blinds companies that build models
for everything from supply chain forecasting to investment strategy and
predictions.

The field of AI governance suffers from what Mackenzie et al. reaffirm as the
“principal-agent problem,” in which one party (the principal) delegates tasks
to another party (the agent) whose interests are not perfectly aligned with its
own, leading to potential conflicts and inefficiencies. ... Architects occupy a
unique position in this landscape. Unlike regulators who may impose constraints
post-design, architects work at the intersection of possibility and constraint.
They must balance competing requirements, such as performance and privacy,
efficiency and equity, speed and safety, within coherent system designs. Every
architectural decision embeds values, priorities, and assumptions about how
systems should behave. ... current AI guidance suffers from systematic
weaknesses: evidence quality is sacrificed for speed, commercial interests
masquerade as objective advice, and some perspectives dominate while broader
stakeholder voices remain unheard ... Architects, being well-placed to bridge
the gap between strategy and technology, hold a key role in establishing the
principles that govern how systems behave, interact, and evolve. In the context
of AI, this principle set extends beyond technical design. It encompasses the
ethical, social, and legal aspects as well.

“I have to admit that I’m afraid to say that we are going to be busier in the
future than now,” he told host Liz Claman. “And the reason for that is because a
lot of different things that take a long time to do are now faster to do. I’m
always waiting for work to get done because I’ve got more ideas.” ... “The more
productive we are, the more opportunity we get to pursue new ideas,” Huang
continued. Reading between the lines here, it seems the so-called efficiency
gains afforded by AI will mean workers have more work dumped in their laps –
onto the next task, no rest for the wicked, etc. Huang’s comments run counter to
the prevailing sentiment among big tech executives on exactly what AI will
deliver for both enterprises and individual workers. ... We’ve all read the
marketing copy and heard it regurgitated by tech leaders on podcasts and keynote
stages – AI will allow us to focus on the “more rewarding” aspects of our jobs.
They’ve never fully explained what this entails, or how it will pan out in the
workplace. To be quite honest, I don’t think they know what it means. Marketing
probably made it up and they’ve stuck with it. ... Will we be busier spending
time on those rewarding aspects of our jobs? I have to say, I’m doubtful. The
reality is that workers will be pulled into other tasks and merely end up
drowning in the same cumbersome workloads they’ve been dealing with since the
pandemic.

Secure software testing forms the bedrock of resilient applications, proactively
uncovering flaws before they become critical. Early testing practices can
significantly reduce risks, costs, and exposure to threats. According to Global
Market Insights, the growing number and size of data breaches have increased the
need for security testing services. Organizations that heavily use security AI
and automation save an average of USD 1.76 million compared to those that don’t.
About 51% plan to increase their security spending. Integrating techniques
like Static Application Security Testing (SAST) early can detect
vulnerabilities in existing code and help teams fix bugs during development.
... Organizations must verify that their systems handle personal
data securely and comply with global regulations like GDPR and CCPA. Testing
ensures sensitive information is protected from leaks or unauthorized use.
Americans are highly concerned about how companies use their private data. ...
Stress testing evaluates how applications perform under extreme loads. It helps
identify potential failures in scalability, response times, and resource
management. Vulnerability assessments concentrate on uncovering security gaps.
Verified Market Reports notes that, after recent financial crises, governments
are putting stronger emphasis on stress testing.
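The SAST idea mentioned above — scanning source code for risky patterns before it ships — can be illustrated with a toy scanner. The rule list here is a small, invented sample for illustration; real SAST tools use far richer analysis than line-by-line regex matching:

```python
import re

# A few well-known risky Python patterns; illustrative, not a real rule set.
RULES = [
    (re.compile(r"\beval\s*\("), "use of eval()"),
    (re.compile(r"\bexec\s*\("), "use of exec()"),
    (re.compile(r"pickle\.loads?\s*\("), "unpickling possibly untrusted data"),
    (re.compile(r"shell\s*=\s*True"), "subprocess with shell=True"),
    (re.compile(r"(password|secret)\s*=\s*['\"]"), "possible hardcoded credential"),
]

def scan_source(source: str):
    """Return (line_number, message) findings for each matched rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings
```

Running `scan_source` over each file in a repository as a pre-commit or CI step is the "early integration" the excerpt describes: flaws surface while the code is still cheap to change.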

PromptOps is gaining traction rapidly because it has the potential to address
major challenges in the use of LLMs, such as prompt drift and suboptimal output.
Yet incorporating PromptOps effectively into an organization is far from simple,
requiring a structured and clear process, the right tools, and a mindset that
enables collaboration and effective centralization. Digging deeper into what
PromptOps is, why it is needed, and how it can be implemented effectively can
help companies find the right approach when incorporating this methodology to
improve their LLM application usage. ... Before PromptOps is implemented,
an organization typically has prompts scattered across multiple teams and tools,
with no structured management in place. The first stage of implementing
PromptOps involves gathering every detail on LLM application usage within an
organization. It is essential to understand precisely which prompts are being
used, by which teams, and with which models. The next stage is to build
consistency into this practice by incorporating versioning and testing. Adding
secure access control at this stage is also important, in order to ensure only
those who need it have access to prompts. With these practices in place,
organizations will be well-positioned to introduce cross-model design and embed
core compliance and security practices into all prompt crafting.
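The stages described above — centralize prompts, add versioning, and gate access — can be sketched as a minimal prompt registry. All names, fields, and the access-control scheme here are illustrative assumptions, not a reference to any particular PromptOps tool:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PromptVersion:
    text: str
    model: str        # which LLM this prompt version targets
    created_at: str   # ISO timestamp, for audit and rollback

class PromptRegistry:
    """Central store with versioning and simple team-based access control."""

    def __init__(self):
        self._prompts: dict[str, list[PromptVersion]] = {}
        self._acl: dict[str, set[str]] = {}  # prompt name -> allowed teams

    def register(self, name, text, model, teams):
        """Append a new version of a prompt and extend its access list."""
        version = PromptVersion(text, model,
                                datetime.now(timezone.utc).isoformat())
        self._prompts.setdefault(name, []).append(version)
        self._acl.setdefault(name, set()).update(teams)

    def latest(self, name, team):
        """Fetch the newest version, enforcing team access control."""
        if team not in self._acl.get(name, set()):
            raise PermissionError(f"{team} may not read prompt {name!r}")
        return self._prompts[name][-1]

    def history(self, name):
        """Full version history, for testing prompts against each other."""
        return list(self._prompts.get(name, []))
```

With a registry like this in place, the later stages follow naturally: cross-model design means registering variants of the same prompt per model, and compliance checks can run over `history()` before a version is promoted.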