Quote for the day:
"Your present circumstances don’t
determine where you can go; they merely determine where you start." --
Nido Qubein

You might hear people use these terms interchangeably, but they’re not the same
thing. Visibility is about what you can see – dashboard statistics, logs, uptime
numbers, bandwidth figures, the raw data that tells you what’s happening across
your network. Observability, on the other hand, is about what that data actually
means. It’s the ability to interpret, analyse, and act on that data. It’s not just about seeing a traffic spike, but about understanding why it happened.
It’s not just spotting a latency issue, but knowing which apps are affected and
where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and
scalable. It’s about building infrastructure that supports cloud, remote work,
and everything in between. Whether you’re adding a new site, onboarding a remote
team, or launching a cloud-hosted app, your network should be able to scale and
respond at speed. Then there’s security, a non-negotiable layer that protects
your entire ecosystem. Great security isn’t about throwing up walls; it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, all without making users’ lives harder. ... Finally, we come to observability, arguably the most underappreciated of the three but quickly becoming essential.
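
To make the distinction concrete, here is a minimal Python sketch with made-up telemetry and an illustrative threshold: the raw latency samples are the visibility layer, and grouping them by app and site to explain a spike is the observability layer.

```python
from collections import defaultdict
from statistics import mean

# Raw telemetry a dashboard would show (visibility). Values are invented.
samples = [
    {"app": "crm",     "site": "london", "latency_ms": 38},
    {"app": "crm",     "site": "london", "latency_ms": 41},
    {"app": "billing", "site": "madrid", "latency_ms": 212},
    {"app": "billing", "site": "madrid", "latency_ms": 198},
]

# Interpretation layer (observability): group by app/site and flag outliers.
grouped = defaultdict(list)
for s in samples:
    grouped[(s["app"], s["site"])].append(s["latency_ms"])

for (app, site), values in grouped.items():
    if mean(values) > 150:   # illustrative SLO threshold
        print(f"Latency bottleneck: app={app}, site={site}, avg={mean(values):.0f} ms")
```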

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft
malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive
data. These attacks range from simple jailbreak prompts that override safety
rules to more advanced exploits that influence backend systems. ... Model
extraction attacks allow adversaries to systematically query an LLM to
reconstruct its knowledge base or training data, essentially cloning its
capabilities. These attacks often rely on automated scripts submitting
millions of queries to map the model’s responses. One common technique, model
inversion, involves strategically structured inputs that extract sensitive or
proprietary information embedded in the model. Attackers may also use
repeated, incremental queries with slight variations to amass a dataset that
mimics the original training data. ... On the output side, an LLM might
inadvertently reveal private information embedded in its dataset or previously
entered user data. A common risk scenario involves users unknowingly
submitting financial records or passwords into an AI-powered chatbot, which
could then store, retrieve or expose this data unpredictably. With cloud-based
LLMs, the risk extends further. Data from one organization could surface in
another’s responses.
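
As a rough illustration of the controls such attacks motivate, here is a minimal Python sketch; the query budget and phrase list are illustrative assumptions, and real deployments rely on far more sophisticated, model-based filters. It combines a per-client rate limit to blunt high-volume extraction scripts with a crude screen for obvious jailbreak phrasing.

```python
import time
from collections import defaultdict

QUERY_LIMIT_PER_HOUR = 500                      # illustrative budget
SUSPICIOUS_PHRASES = ("ignore previous instructions", "reveal your system prompt")

query_log: dict[str, list[float]] = defaultdict(list)

def allow_request(client_id: str, prompt: str) -> bool:
    now = time.time()
    # Drop log entries older than an hour, then enforce the per-client budget.
    query_log[client_id] = [t for t in query_log[client_id] if now - t < 3600]
    if len(query_log[client_id]) >= QUERY_LIMIT_PER_HOUR:
        return False                            # possible extraction attempt
    if any(p in prompt.lower() for p in SUSPICIOUS_PHRASES):
        return False                            # crude prompt-injection screen
    query_log[client_id].append(now)
    return True
```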

Agentic AI introduces a spectrum of ethical challenges that demand proactive
governance. Given its capacity for independent decision-making, there is a
heightened need for transparent, accountable, and ethically driven AI models.
Ethical governance in Agentic AI revolves around establishing robust policies
that govern decision logic, bias mitigation, and accountability. Organizations
leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory
compliance to avoid unintended consequences. ... The integration of Agentic AI
into business ecosystems promises not just automation but strategic
enhancement of decision-making. These AI agents are designed to process
real-time data, predict market shifts, and autonomously execute decisions that
would traditionally require human intervention. In sectors such as finance,
healthcare, and manufacturing, Agentic AI is optimizing supply chains,
enhancing predictive analytics, and streamlining operations with unparalleled
accuracy. ... One of the major concerns surrounding Agentic AI is data
security. Autonomous decision-making systems require vast amounts of real-time
data to function effectively, raising questions about data privacy, ownership,
and cybersecurity. Cyber threats aimed at exploiting autonomous
decision-making could have severe consequences, especially in sectors like
finance and healthcare.
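
One way to picture the accountability side of that governance is a thin wrapper around the agent's decisions. The Python sketch below is a hypothetical illustration rather than a prescribed pattern: each decision is logged before execution, and anything above an assumed impact threshold is escalated to a human reviewer.

```python
import json
import logging
from dataclasses import dataclass, asdict

logging.basicConfig(level=logging.INFO)
HUMAN_REVIEW_THRESHOLD = 100_000  # illustrative: decisions moving more than $100k

@dataclass
class AgentDecision:
    action: str
    rationale: str
    estimated_impact_usd: float

def execute_with_governance(decision: AgentDecision) -> str:
    # Transparency: record what was decided and why, before anything runs.
    logging.info("agent decision: %s", json.dumps(asdict(decision)))
    if decision.estimated_impact_usd > HUMAN_REVIEW_THRESHOLD:
        return "escalated to human reviewer"    # accountability gate
    return f"executed: {decision.action}"

print(execute_with_governance(AgentDecision("rebalance inventory", "forecast shortfall", 25_000.0)))
```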

Digital twins (DTs) and the Industrial Internet of Things (IIoT) are evolving technologies that are transforming the digital landscape of the supply chain. The IIoT connects to actual physical sensors and actuators; DTs, on the other hand, are virtual replicas of the physical components. DTs are invaluable for testing and simulating design parameters without disrupting production elements. ... Unlike generic IoT, which is more oriented towards consumers, the IIoT enables communication and interconnection between different machines, industrial devices, and sensors within a supply chain management ecosystem, with the aim of business optimization and efficiency. The integration of IIoT into supply chain management systems aims to enable real-time monitoring and analysis of industrial environments, including manufacturing, logistics management, and the wider supply chain. It boosts efforts to increase productivity, cut downtime, and facilitate informed, accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in strengthening real-time monitoring and enabling informed decision-making. Every stage of the supply chain ecosystem will feel the impact of IIoT, from automated inventory management, health monitoring and tracking of goods, and analytics to real-time responses to current market demands.
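
To make the DT/IIoT pairing concrete, here is a minimal Python sketch with an invented conveyor example and a deliberately crude what-if model: incoming IIoT readings keep the virtual replica in sync, and design changes are tried on the replica instead of the running line.

```python
from dataclasses import dataclass, field

@dataclass
class ConveyorTwin:
    """Virtual replica of a physical conveyor on the shop floor (illustrative)."""
    belt_speed_mps: float = 1.0
    motor_temp_c: float = 40.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        # Mirror the latest IIoT sensor reading into the twin's state.
        self.belt_speed_mps = reading.get("belt_speed_mps", self.belt_speed_mps)
        self.motor_temp_c = reading.get("motor_temp_c", self.motor_temp_c)
        self.history.append(reading)

    def simulate_speed_change(self, new_speed_mps: float) -> float:
        # Crude what-if model: estimate motor temperature at a new belt speed
        # without touching the physical asset.
        return self.motor_temp_c + 12.0 * (new_speed_mps - self.belt_speed_mps)

twin = ConveyorTwin()
twin.ingest({"belt_speed_mps": 1.2, "motor_temp_c": 47.5})
print(twin.simulate_speed_change(1.8))   # projected temperature, not a measurement
```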

An important complicating factor in all this is that customers don’t always
know what’s happening in cloud data centers. At the same time, De Jong
acknowledges that on-premises environments have the same problem. “There’s a
spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé
agrees with: “You have to align many things between on-prem and cloud.” Andre
Honders points to a specific aspect of the cloud: “You can be in a shared
environment with ten other customers. This means you have to deal with
different visions and techniques that do not exist on-premises.” This is
certainly the case. There are plenty of worst-case scenarios to consider in
the public cloud. ... However, a major bottleneck remains the lack of
qualified personnel. We hear this all the time when it comes to security. And
in other IT fields too, as it happens, meaning one could draw a society-wide
conclusion. Nevertheless, staff shortages are perhaps more acute in this
sector. Erik de Jong sees society as a whole having similar problems, at any
rate. “This is not an IT problem. Just ask painters. In every company, a small
proportion of the workforce does most of the work.” Wesley Swartelé agrees it
is a challenge for organizations in this industry to find the right people.
“Finding a good IT professional with the right mindset is difficult.”

Technology works both ways – it enables the attacker and the smart defender.
Cybercriminals are already capitalising on its potential, using open source AI
models like DeepSeek and Grok to automate reconnaissance, craft sophisticated
phishing campaigns, and produce deepfakes that can convincingly impersonate
executives or business partners. What makes this especially dangerous is that
these tools don’t just improve the quality of attacks; they multiply their
volume. That’s why enterprises need to go beyond reactive defenses and start
embedding AI-aware policies into their core security fabric. It starts with
applying Zero Trust to AI interactions, limiting access based on user roles,
input/output restrictions, and verified behaviour. ... As attackers deploy AI
to craft polymorphic malware and mimic legitimate user behaviour, traditional
defenses struggle to keep up. AI is now a critical part of the enterprise
security toolkit, helping CISOs and security teams move from reactive to
proactive threat defense. It enables rapid anomaly detection, surfaces hidden
risks earlier in the kill chain, and supports real-time incident response by
isolating threats before they can spread. But AI alone isn’t enough. Security
leaders must strengthen data privacy and security by implementing
full-spectrum DLP, encryption, and input monitoring to protect sensitive data
from exposure, especially as AI interacts with live systems.
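
As a rough sketch of what "Zero Trust for AI interactions" might look like in code (the roles, limits, and redaction rule are illustrative assumptions, not any product's schema): each request is checked against the caller's role before it reaches the model, and outputs are screened on the way out.

```python
import re

# Illustrative per-role policy for AI interactions.
ROLE_POLICIES = {
    "analyst":  {"max_tokens": 1024, "may_use_tools": False},
    "engineer": {"max_tokens": 4096, "may_use_tools": True},
}

SENSITIVE_PATTERN = re.compile(r"\b\d{16}\b")     # crude stand-in for a DLP rule

def authorize_request(role: str, requested_tokens: int, wants_tools: bool) -> bool:
    """Input-side control: deny by default, then enforce the role's limits."""
    policy = ROLE_POLICIES.get(role)
    if policy is None:
        return False                              # unknown role: deny
    if requested_tokens > policy["max_tokens"]:
        return False                              # over the role's budget
    return policy["may_use_tools"] or not wants_tools

def screen_output(text: str) -> str:
    """Output-side control: redact anything that looks like a card number."""
    return SENSITIVE_PATTERN.sub("[REDACTED]", text)
```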

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:
Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss.
Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions.
Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors.
Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping, as sketched below.
Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments.
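
A minimal Python sketch of that step-up idea, with hypothetical action names and an assumed risk rule rather than any particular vendor's logic: low-risk actions proceed on the existing session, while high-risk ones (or a new device) trigger a fresh identity check.

```python
# Hypothetical risk rule: which actions force re-verification of identity.
HIGH_RISK_ACTIONS = {"password_reset", "add_payee", "change_phone_number"}

def requires_step_up(action: str, new_device: bool) -> bool:
    """Decide whether to demand fresh identity verification before the action."""
    return action in HIGH_RISK_ACTIONS or new_device

for action in ("view_balance", "password_reset"):
    decision = "step-up verification" if requires_step_up(action, new_device=False) else "allow"
    print(action, "->", decision)
```
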
FIDO security keys significantly reduce the risk of phishing, credential theft,
and brute-force attacks. Because they don’t rely on shared secrets like
passwords, they can’t be reused or intercepted. Their phishing-resistant
protocol ensures authentication is only completed with the correct web origin.
FIDO security keys also address insider threats and endpoint vulnerabilities by
requiring physical presence, further enhancing protection, especially in
high-security environments such as healthcare or public administration. ... In
principle, any organization that prioritizes a secure IT infrastructure stands
to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a
small business protecting customer data or a global enterprise managing complex
access structures, FIDO security keys provide a robust, phishing-resistant
alternative to passwords. That said, sectors with heightened regulatory
requirements, such as healthcare, finance, public administration, and critical
infrastructure, have particularly strong incentives to adopt strong
authentication. In these fields, the risk of breaches is not only costly but can
also have legal and operational consequences. FIDO security keys are also ideal
for restricted environments, such as manufacturing floors or emergency rooms,
where smartphones may not be permitted.
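
The origin binding mentioned above is what defeats phishing: the browser reports which site requested authentication, and the relying party rejects anything that does not match. The Python sketch below illustrates only that single check, with an assumed origin and without the full WebAuthn signature verification.

```python
import base64
import json

EXPECTED_ORIGIN = "https://login.example.com"     # hypothetical relying-party origin

def verify_client_data(client_data_json_b64: str, expected_challenge: str) -> bool:
    """Check ceremony type, origin, and challenge from the authenticator's clientDataJSON."""
    padded = client_data_json_b64 + "=" * (-len(client_data_json_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    if client_data.get("type") != "webauthn.get":
        return False                              # wrong ceremony type
    if client_data.get("origin") != EXPECTED_ORIGIN:
        return False                              # a phishing site would fail here
    if client_data.get("challenge") != expected_challenge:
        return False                              # stale or replayed response
    return True
```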

Data warehouses and data lakehouses have emerged as two prominent adversaries in
the data storage and analytics markets, each with advantages and disadvantages.
The primary difference between these two data storage platforms is that while
the data warehouse is capable of handling only structured and semi-structured data, the data lakehouse can store virtually unlimited amounts of both structured and unstructured data. ... Traditional data warehouses
have long supported all types of business professionals in their data storage
and analytics endeavors. This approach involves ingesting structured data into a
centralized repository, with a focus on warehouse integration and business
intelligence reporting. Enter the data lakehouse approach, which is vastly
superior for deep-dive data analysis. The lakehouse has successfully blended
characteristics of the data warehouse and the data lake to create a scalable and
unrestricted solution. The key benefit of this approach is that it enables data
scientists to quickly extract insights from raw data with advanced AI tools. ...
Although a data warehouse supports BI use cases and provides a “single source of
truth” for analytics and reporting purposes, it can also become difficult to
manage as new data sources emerge. The data lakehouse has redefined how global
businesses store and process data.
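
As a rough illustration of the pattern being described, here is a minimal PySpark sketch; the paths, fields, and table name are hypothetical, and it assumes a Spark environment is available. Raw, semi-structured events are queried directly from object storage (schema-on-read), and a curated table is then written in an open columnar format for BI-style workloads.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Ingest raw, semi-structured events exactly as they landed (schema-on-read).
events = spark.read.json("s3://example-bucket/raw/events/")      # hypothetical path

# Data scientists can query the raw data immediately, no warehouse modelling first.
events.createOrReplaceTempView("raw_events")
daily = spark.sql("""
    SELECT to_date(event_time) AS day, count(*) AS events
    FROM raw_events
    GROUP BY to_date(event_time)
""")

# Persist a curated table in an open columnar format for reporting workloads.
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_events/")
```
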
Data and analytics leaders, such as chief data officers (CDOs) and chief data and analytics officers (CDAOs), play a significant role in driving their organizations' data and analytics (D&A) successes, which are necessary to
show business value from AI projects. Gartner predicts that by 2028, 80% of gen
AI business apps will be developed on existing data management platforms. Their
analysts say, "This is the best time to be in data and analytics," and CDAOs
need to embrace the AI opportunity eyed by others in the C-suite, or they will
be absorbed into other technical functions. With high D&A ambitions and AI
pilots becoming increasingly ubiquitous, focus is shifting toward consistent
execution and scaling. But D&A leaders are overwhelmed with their routine
data management tasks and need a new AI strategy. ... "We've never been good at
governance, and now AI demands that we be even faster, which means you have to
take more risks and be prepared to fail. We have to accept two things: Data will
never be fully governed. Secondly, attempting to fully govern data before
delivering AI is just not realistic. We need a more practical solution like
trust models," Zaidi said. He said trust models provide a trust rating for data
assets by examining their value, lineage and risk. They offer up-to-date
information on data trustworthiness and are crucial for fostering
confidence.
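
Zaidi's description suggests a simple shape for such a model. The Python sketch below is an illustrative guess, not the method he or Gartner describes: each data asset gets a 0-1 trust rating from assumed weights over business value, lineage completeness, and risk.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    value: float     # business value, 0 (none) to 1 (critical)
    lineage: float   # lineage completeness, 0 (unknown) to 1 (fully documented)
    risk: float      # exposure/compliance risk, 0 (low) to 1 (high)

def trust_rating(asset: DataAsset) -> float:
    """Combine value, lineage, and risk into a single 0-1 trust score (illustrative weights)."""
    return round(0.4 * asset.value + 0.4 * asset.lineage + 0.2 * (1 - asset.risk), 2)

print(trust_rating(DataAsset("customer_orders", value=0.9, lineage=0.7, risk=0.3)))  # 0.78
```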