Quote for the day:
"It always seems impossible until it’s done." -- Nelson Mandela
Why data trust is the missing link in digital transformation
Data trust is often framed as a technical issue, delegated to IT or data teams.
In reality, it is a business capability with direct implications for growth,
risk, and reputation. Trusted data enables organisations to: Confidently
automate customer and operational workflows; Personalise experiences
without introducing errors; Improve forecasting and performance reporting;
and Reduce operational rework and exception handling When data cannot be
trusted, leaders are forced to rely on manual checks, conservative assumptions,
and duplicated processes. This increases cost and slows decision-making - the
opposite of what digital transformation aims to achieve. .... Establishing data
trust is not a one-time project. It requires a shift in mindset across the
organisation. Data quality should be viewed as a shared responsibility,
supported by the right processes and tools. Leading organisations embed data
validation into their digital workflows, measure data quality as part of system
health, and treat trusted data as a strategic asset. Over time, this creates a
culture where decisions are made with confidence and transformation initiatives
are more likely to succeed. ... Digital transformation is ultimately about
enabling better decisions, faster execution, and stronger customer
relationships. None of these goals can be achieved without trusted data. As
organisations continue to modernise their platforms and processes, data quality
should be treated as core infrastructure, not an afterthought. Health Data Privacy, Cyber Regs: What to Watch in 2026
When federal regulators hesitate, states often jump into filing privacy and
security gaps involving health data. That includes mandates in New York to shore
up cybersecurity at certain hospitals (see: New York Hospitals Are Facing
Tougher Cyber Rules Than HIPAA). Also worth watching is the New York Health
Information Privacy Act, Greene said. "It was passed by both New York
legislative chambers in January but has not yet been formally submitted to the
governor for signature, with lobbying efforts underway to amend it." "In its
most recent version, it would be the toughest health privacy law in the country
in many respects, including a controversial prohibition on obtaining consents
for secondary uses of data until at least 24 hours after an individual creates
an account or first uses the requested product or service," Greene said.
... Greene predicted HIPAA resolution agreements and civil monetary
penalties will continue much as they have in years past, with one to two dozen
such cases next year. HHS has recently indicated that it intends to begin
enforcing the Information Blocking Rule. "The primary target will be health IT
developers," Greene said. "I expect that there are less information blocking
issues with health information networks and believe that the statute and
regulation's knowledge standard makes it more challenging to enforce against
healthcare providers because the government must prove that a healthcare
provider knew its practice to be unreasonable."From integration pain to partnership gain: How collaboration strengthens cybersecurity
When collaborators leverage data in specific cybersecurity work, they unlock
several valuable benefits, especially since no organization has complete insight
into every possible threat. A shared, data-driven cybersecurity framework can
offer both sides a better understanding of existing and emerging threats that
could undermine one or both collaborators. Data-driven collaboration also
enables partners to become more proactive in their cybersecurity posture.
Coordinated data can give business partners insights into where there’s greater
exposure for a cyberattack, allowing partners to work together with data-backed
guidance on how to better prepare. ... The Vested model — an innovative
approach based on research from the University of Tennessee — focuses on shared
goals and outcomes rather than traditional transactional buyer and seller
agreements. Both companies agreed on a specific set of KPIs they could use to
measure the health of the partnership and keep their security goals on track,
allowing them to continue to adapt cybersecurity initiatives as needs and
threats evolve. “You have to build, maintain and exercise the right partnerships
with business units and shared services across the enterprise so continuity
plans identify the issue quickly, deploy appropriate mitigations, and ultimately
restore client and business services as quickly as possible,” says Royce Curtin,
IBM’s former VP of corporate security.AI governance: A risk and audit perspective on responsible AI adoption
AI governance refers to the policies, procedures, and oversight mechanisms that
guide how AI systems are developed, deployed, and monitored. It ensures that AI
aligns with business objectives, complies with applicable laws, and operates in
a way that is ethical and transparent. Regulatory scrutiny is increasing. The EU
AI Act is setting a precedent for global standards, and U.S. agencies are
signaling more aggressive enforcement, particularly in sectors like healthcare,
finance, and employment. Organizations are expected to demonstrate
accountability in how AI systems make decisions, manage data, and interact with
users. Beyond regulation, there is growing pressure from customers, employees,
and investors. ... Audit teams also help boards and audit committees understand
the risks associated with AI. Their work supports transparency and builds trust
with regulators and stakeholders. As AI becomes more embedded in business
operations, internal audit must expand its scope to include model governance,
data lineage, and ethical risk. ... Organizations that treat AI as a strategic
risk are better positioned to scale it responsibly. Risk and internal audit
teams have a central role in ensuring that AI systems are secure, compliant, and
aligned with business goals. Citrin Cooperman helps organizations navigate AI
adoption with confidence by combining deep risk expertise, practical governance
frameworks, and advanced technology solutions that support secure, scalable, and
compliant growth.Six data shifts that will shape enterprise AI in 2026
While RAG won't entirely disappear in 2026, one approach that will likely
surpass it in terms of usage for agentic AI is contextual memory, also known as
agentic or long-context memory. This technology enables LLMs to store and access
pertinent information over extended periods. Multiple such systems emerged over
the course of 2025 including Hindsight, A-MEM framework, General Agentic Memory
(GAM), LangMem, and Memobase. RAG will remain useful for static data, but
agentic memory is critical for adaptive assistants and agentic AI workflows that
must learn from feedback, maintain state, and adapt over time. In 2026,
contextual memory will no longer be a novel technique; it will become table
stakes for many operational agentic AI deployments. ... In 2025, we saw numerous
innovations, like the notion that an AI is able to parse data from an
unstructured data source like a PDF. That's a capability that has existed for
several years, but proved harder to operationalize at scale than many assumed.
Databricks now has an advanced parser, and other vendors, including Mistral,
have emerged with their own improvements. The same is true with natural language
to SQL translation. While some might have assumed that was a solved problem,
it's one that continued to see innovation in 2025 and will see more in 2026.
It's critical for enterprises to stay vigilant in 2026.
Communicating AI Risk to the Board With Confidence
Most board members can comprehend that AI will drive growth. What they fail to grasp concretely is how the technology introduces a massive amount of exposure. This predicament is typically a result of how information is presented. Security and risk managers (SRMs) often describe AI incidents in the vocabulary of adversarial inputs, model drift, and architecture choices, which matter deeply but rarely answer the questions that directors tackle during their meetings. High-level stakeholders, in reality, are concerned with issues such as revenue protection, operational continuity, and competitive differentiation, creating a gap that requires more than translating acronyms. ... Traditional discussions about technology risk revolve around the triad of confidentiality, integrity, and availability. Boards know these categories well, and over the past few decades, they have learned that cybersecurity failures directly affect the business along these lines. GenAI has formidably challenged this familiar structure, with its associated risks not limited to one of these three domains. ... When the conversation begins with the business consequence, though, the relevance is immediate. The most effective approach involves replacing those mechanics that mean so much to the internal teams with the strategic information boards need to operate. These details open a path for meaningful conversations that encourage directors to think through the implications and make more informed decisions.The six biggest security challenges coming in 2026
For many organizations, cybersecurity and resilience is a compliance exercise.
But it must evolve into “a core intentional cybersecurity capability”, says
Dimitriadis. “In 2026, organizations will need to build the capacity to
anticipate regulatory changes, understand their strategic implications, and
embed them into long-term planning.” ... Attackers are leveraging AI to create
convincing email templates and fake websites “almost indistinguishable” from
real ones – and without the common warning signs employees are trained to
identify, says Mitchell. AI is also being used in vishing attacks, with
deepfakes making it easier to clone the voice of high-ranking company executives
to trick victims. In 2026, there will be more attacks utilizing realistic voice
cloning and high-quality video deepfakes, says Joshua Walsh ... There is a
current shift towards agentic AI that can take real-world actions, such as
adjusting configurations, interacting with APIs, booking services and initiating
financial tasks. This can increase efficiency, but it can also lead to unsafe
decisions made at speed, says rradar’s Walsh. An agent told to "optimize
performance" might disable logging or bypass authentication because it views
security controls as delays, he suggests. Prompt injection is a hidden
issue to look out for, he adds. “If a threat actor slips hidden instructions
into data that the agent consumes, they can make it run actions on internal
systems without anyone realising.”
5 Changes That Will Define AI-Native Enterprises in 2026
As enterprises scale to multi-agent systems, the engineering focus will shift from creating prompts to architecting context. Multi-agent workflows rapidly expand requirements with tool definitions, conversation history, and data from multiple sources. This creates two challenges: context windows fill up, and models suffer from “context rot,” forgetting information buried in lengthy prompts. By mid-2026, context engineering will emerge as a distinct discipline with dedicated teams and specialized infrastructure, serving the minimal but complete information agents need. The best context engineers will understand both LLM constraints and their business domain’s semantic structure. ... Enterprises are realizing that AI agents need both data and meaning. Companies that spent years perfecting data lakes are already finding those assets are insufficient. AI can retrieve data, but without semantic context, it can’t interpret action or intent. That’s why teams will move beyond vector search toward building knowledge graphs, ontologies, and metadata-driven maps that teach AI how their business works. The battleground will shift from owning raw data to owning its interpretation. Off-the-shelf agents will struggle in complex domains because semantics are domain-specific. ... The AI-native enterprise looks very different from what came before. It serves machine customers, treats context as critical infrastructure, and has the tools to escape decades of technical debt.Microsegmentation: the unsung hero of cybersecurity (and why it should be your top priority)
Think of your network like an apartment building. You’ve got a locked front door
— that’s your perimeter. But once someone gets inside, there’s no front desk
checking IDs, no elevator security and the same outdated lock on every unit. An
intruder can roam freely, entering any apartment they choose. Microsegmentation
is the internal security system. It’s the keycard for the elevator, the camera
in the hallway, the unique lock on your door. It’s what stops one compromised
device from becoming a full-blown breach. ... OT environments are different.
They’re often built on legacy systems, lack patching and operate in real-time.
You can’t just drop an agent or reroute traffic without risking downtime. That’s
why agencies need solutions that are agentless, software-defined and tailored to
the unique constraints of OT. Otherwise, you’re only protecting half the house.
... Microsegmentation also plays a critical role in enabling zero trust. It
enforces least privilege at the network level. It’s not just about who gets in;
it’s about what they can touch once they’re inside. For agencies building toward
zero trust, microsegmentation isn’t an afterthought. It’s a foundation. Despite
all this, microsegmentation remains underutilized. According to TechTarget’s
Enterprise Strategy Group, only 36% of organizations use it today, even though
it’s foundational to zero trust. Why? Because 28% believe it’s too complex. But
that perception is often rooted in outdated tooling.
Beyond Chatbots: What Makes an AI Agent Truly Autonomous
Autonomous agents must retain and use context over time. Memory enables an agent
to recall previous interactions, data, and decisions—allowing it to continue a
process seamlessly without restarting each time. That persistence turns single
exchanges into long-running workflows. In enterprise settings, it means an agent
can track a contract review across multiple sessions or follow a complex support
case without losing context. ... Traditional automation runs on fixed,
rule-based workflows. Autonomous agents build and revise their own plans on the
fly, adapting to results and feedback. This ability to plan dynamically—think,
act, observe, and adjust—is what differentiates agentic AI from robotic process
automation (RPA) or prompt chaining. In practice, an agent might be tasked with
analyzing a set of contracts, then automatically decide how to proceed: extract
key terms, assess risk, and summarize results. ... Resilient agents are designed
to operate across models, retry failed actions, or launch sub-agents to handle
specialized work—all within defined guardrails. That adaptability is what
separates a proof of concept (POC) from a production-ready system. ... All the
reasoning in the world means little if an agent can’t execute. Tools are what
translate intelligence into impact. They’re the functions, APIs, and
integrations that allow agents to interact with business systems—searching
systems, generating documents, updating records, or triggering workflows across
CRMs, ERPs, and analytics platforms.
No comments:
Post a Comment