Quote for the day:
"The key to success is to focus on
goals, not obstacles." -- Unknown

“Unlike RPA bots, which follow predefined rules, AI agents learn from data,
make decisions, and adapt to changing business logic,” Khan says. “AI
agents are being used for more flexible tasks such as customer interactions,
fraud detection, and predictive analytics.” Khan sees RPA’s role shifting in the
next three to five years, as AI agents become more prevalent. Many organizations
will embrace hyperautomation, which uses multiple technologies, including RPA
and AI, to automate business processes. “Use cases for RPA most likely will be
integrated into broader AI-powered workflows instead of functioning as
standalone solutions,” he says. ... “RPA isn’t dying — it’s evolving,” he says.
“We’ve tested various AI solutions for process automation, but when you need
something to work the same way every single time — without exceptions, without
interpretation — RPA remains unmatched.” Radich and other automation experts
see AI agents eventually controlling RPA bots, with various robotic processes in
a toolbox for agents to choose from. “Today, we build separate RPA workflows for
different scenarios,” Radich says. “Tomorrow, with our agentic capabilities, an
agent will evaluate an incoming request and determine whether it needs RPA for
data processing, API calls for system integration, or human handoff for complex
decisions.”
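
Radich's routing description is concrete enough to sketch in code. Below is a
minimal, hypothetical Python illustration of an agent triaging incoming
requests; the request types, confidence threshold, and handler names are all
invented for the example, not taken from any vendor's product:

```python
from dataclasses import dataclass

@dataclass
class Request:
    """An incoming work item the agent must triage (hypothetical schema)."""
    kind: str          # e.g. "invoice" or "system_sync"
    confidence: float  # upstream classification confidence, 0..1

def route(request: Request) -> str:
    """Pick a tool for the request, following the pattern Radich describes."""
    if request.confidence < 0.7:
        return "human_handoff"  # ambiguous or complex: escalate to a person
    if request.kind == "invoice":
        return "rpa_workflow"   # deterministic data processing: run the RPA bot
    if request.kind == "system_sync":
        return "api_call"       # system integration: call the target API
    return "human_handoff"      # anything unrecognized goes to a person

print(route(Request(kind="invoice", confidence=0.95)))  # -> rpa_workflow
```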

SOCs deal with tens of thousands of alerts every day. It’s more than any person
can realistically keep up with. When too much data comes in at once, things get
missed. Responses slow down and, over time, the constant pressure can lead to
burnout. ... The trick is to start spotting patterns. Look at what helped in
past investigations. Was it a login from an odd location? An admin running
commands they normally don’t? A device suddenly reaching out to strange domains?
These are the kinds of details that stand out once you understand what typical
system behavior looks like. At first, you won’t. That’s okay. Spend time reading
through old incident reports. Watch how the team reacts to real alerts. Learn
which ones actually spark investigations and which ones get dismissed without a
second glance. ... Start by removing logs and alerts that don’t add value. Many
logs are never looked at because they don’t contain useful information. Logs
showing every successful login might not help if those logins are normal. Some
logs repeat the same information, like system status messages. ... Next, think
about how long to keep different types of logs. Not all logs need to be saved
for the same amount of time. Network traffic logs might only be useful for a few
days because threats usually show up quickly.
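
Both ideas, dropping low-value events and tiering retention by log type, are
easy to sketch. In the Python illustration below, the event fields, the noise
filters, and the retention periods are all assumptions chosen for the example,
not recommendations:

```python
from datetime import timedelta

# Assumed retention tiers per log type; tune to what your investigations use
RETENTION = {
    "network_traffic": timedelta(days=7),   # threats usually surface quickly
    "authentication": timedelta(days=90),
    "endpoint": timedelta(days=30),
}

def keep_event(event: dict) -> bool:
    """Drop events that rarely add investigative value."""
    # Routine successful logins and repeated status messages add little signal
    if event.get("type") == "login" and event.get("outcome") == "success":
        return False
    if event.get("type") == "system_status":
        return False
    return True

events = [
    {"type": "login", "outcome": "success"},
    {"type": "login", "outcome": "failure", "source_geo": "unusual"},
    {"type": "system_status"},
]
print([e for e in events if keep_event(e)])  # only the failed login survives
print({log_type: f"{period.days} days" for log_type, period in RETENTION.items()})
```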

DNS4EU aims to be an alternative to major US-based public DNS services (such as
Google and Cloudflare), boosting the EU's digital autonomy by reducing
European reliance on foreign infrastructure. This isn't only an EU-developed
DNS, though. DNS4EU comes with built-in filters against malicious domains,
such as those hosting malware, phishing, or other cybersecurity threats. The
home user version also includes the option to block ads and/or adult content.
... DNS4EU, which the EU assures "will not be forced on anyone," has been
developed to meet different users' needs. The home version is a public, free
DNS resolver with optional filters that block ads, malware, adult content, any
combination of these, or none at all. There's also a dedicated version for
government entities and telecom providers operating within the European Union.
As mentioned earlier, DNS4EU pairs its built-in filter for dangerous traffic
with the ability to provide regional threat intelligence. This means a
malicious threat discovered in one country can be blocked simultaneously
across several regions and countries, effectively halting
its spread. ... The Senior Director for European Government and Regulatory
Affairs at the Internet Society, David Frautschy Heredia, also warns against
potential risks related to content filtering, arguing that "safeguards should be
developed to prevent abuse."
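
To see what a filtering resolver does in practice, you can point a query
directly at it and watch how a blocked domain behaves. Here is a sketch using
the dnspython library; the resolver address is a TEST-NET placeholder, so
substitute the actual DNS4EU address published by the project before running
it:

```python
import dns.exception
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.0.2.1"]  # placeholder; use the real DNS4EU address

def check(domain: str) -> str:
    """Query the filtering resolver; blocked domains commonly return NXDOMAIN."""
    try:
        answer = resolver.resolve(domain, "A")
        return f"{domain} -> {[r.address for r in answer]}"
    except dns.resolver.NXDOMAIN:
        return f"{domain} -> blocked or nonexistent (NXDOMAIN)"
    except dns.exception.DNSException as exc:
        return f"{domain} -> query failed: {type(exc).__name__}"

print(check("example.com"))
```
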
AI Canvas is where AgenticOps comes to life. It’s the industry’s first
generative UI built for cross-domain IT operations, unifying NetOps, SecOps, IT,
and executives into one collaborative environment. Powered by real-time
telemetry from Meraki, ThousandEyes, Splunk, and more, AI Canvas brings together
data from across the stack into one intelligent, always-on view. But this isn’t
just visibility. It’s AI already operating. When a service issue hits, AI Canvas
pulls in the right data, connects the dots, and surfaces a live picture of what
matters—before anyone even asks. Every session starts with context, whether
launched by AI or by an IT engineer. Embedded into the AI Canvas is the Cisco AI
Assistant, your interface to the agentic system. Ask a question in natural
language. Dig into root cause. Explore options. The AI Assistant guides you
through diagnostics, decisions, and actions, all grounded in live telemetry. And
when you’re ready to share, just drag your findings into AI Canvas. From there,
with one click you can invite collaborators—and that’s when the canvas comes
fully alive. Every insight becomes part of a shared investigation with AI Canvas
actively thinking, collaborating, and evolving the UI at every step. But it
doesn’t stop at diagnosis—AI Canvas acts. It applies changes, monitors impact
and shares outcomes in real time.

Brown believes there are often important lessons that come out of breaches,
whether it’s high-profile ones that end up in textbooks and university courses,
or experiences that can be shared among peers through conference panels and
other events. “Always look for good to come from events. How can you help the
industry forward? Can you help the CISO community?” he says. ... Many
incident-hardened CISOs will shift their approach and their mindset about
experiencing an attack first-hand. “You’ll develop an attack-minded perspective,
where you want to understand your attack surface better than your adversary, and
apply your resources accordingly to insulate against risk,” says Cory Michel, VP
security and IT at AppOmni, who’s been on several incident response teams. In
practice, shifting from defense to offense means preparing for different types
of incidents, be it platform abuse, exploitation or APTs, and tailoring
responses. ... The playbook needs clear guidance on communication, during and
after an incident, because this can be overlooked while dealing with the crisis,
but in the end, it may come to define the lasting impact of a breach that
becomes common knowledge. “Every word matters during a crisis,” says Brown,
“what you publish, what you say, how you say it. So it’s very important to be
prepared for that.”

Open-source AI’s ability to act as an innovation catalyst is proven. What is
unknown is the downside or the paradox that’s being created with the all-out
focus on performance and the ubiquity of platform development and support. At
the center of the paradox for every company building with open-source AI is the
need to keep it open to fuel innovation, yet gain control over security
vulnerabilities and the complexity of compliance. ... Regulatory compliance is
becoming more complex and expensive, further fueling the paradox. Startup
founders, however, tell VentureBeat that the high costs of compliance can be
offset by the data their systems generate. They’re quick to point out that they
do not intend to deliver governance, risk, and compliance (GRC) solutions;
however, their apps and platforms are meeting the needs of enterprises in this
area, especially across Europe. ... “The EU AI Act, for example, begins
enforcement in February, and the pace of enforcement and fines is much higher
and more aggressive than under GDPR. From our perspective, we want to help organizations
navigate those frameworks, ensuring they’re aware of the tools available to
leverage AI safely and map them to risk levels dictated by the Act.”

Each container maps to a process ID in Linux. The illusion of separation is
created using kernel namespaces. These namespaces hide resources like
filesystems, network interfaces and process trees. But the kernel remains
shared. That shared kernel becomes the attack surface. And in the event of a
container escape, that attack surface becomes a liability. Common attack vectors
include exploiting filesystem mounts, abusing symbolic links or leveraging
misconfigured privileges. These exploits often target the host itself. Once
inside the kernel, an attacker can affect other containers or the infrastructure
that supports them. This is not just theoretical. Container escapes happen, and
when they do, everything on that node becomes suspect. ... Virtual machines fell
out of favor because of performance overhead and slow startup times. But many of
those drawbacks have since been addressed. Projects leveraging
paravirtualization, for example, now offer performance comparable to containers
while restoring strong workload isolation. Paravirtualization modifies the guest
OS to interact efficiently with the hypervisor. It eliminates the need to
emulate hardware, reducing latency and improving resource usage. Several open
source projects have explored this space, demonstrating that it’s possible to
run containers within lightweight virtual machines.
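
The shared-kernel point is easy to verify: a container reports the same kernel
release as its host, because namespaces hide resources rather than virtualize
the kernel. A minimal Python sketch, assuming Docker is installed and can pull
the alpine image:

```python
import platform
import subprocess

# Kernel release as seen by the host
host_kernel = platform.release()

# Kernel release as seen from inside a container (assumes Docker + alpine)
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Namespaces isolate filesystems, networks, and process trees -- not the kernel
print(f"host:      {host_kernel}")
print(f"container: {container_kernel}")
print("shared kernel" if host_kernel == container_kernel else "different kernels")
```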

Intellectual property lies at the core of many technology-driven sectors,
particularly software development, pharmaceuticals, and design innovation.
For companies in these fields, IP theft can have serious
consequences. Unfortunately, cybercriminals increasingly target valuable IP
because it can be sold or used to undermine the original creators. According to
the Verizon 2025 Data Breach Investigations Report, nearly 97 per cent of these
attacks in the Asia-Pacific region are fuelled by social engineering, system
intrusion and web app attacks. This alarming trend highlights the urgent need
for stronger data protection measures. ... While cloud platforms present unique
challenges for securing IP, they also offer some potential solutions. One of the
most effective ways to protect data is through encryption. Encrypting files
before they are uploaded to the cloud ensures that even if unauthorised access
is gained, the data remains unreadable without the proper decryption key. For
organisations that rely on cloud platforms for collaboration, file-level
encryption is crucial. This form of encryption ensures that sensitive data is
protected not just at rest but throughout its entire lifecycle in the cloud.
Many cloud platforms offer built-in encryption tools, but companies can also
implement third-party solutions to enhance the protection of their intellectual
property.
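
As an illustration of file-level encryption before upload, here is a minimal
client-side sketch using the widely used Python cryptography package (Fernet,
authenticated symmetric encryption). The filename is hypothetical, and key
management, the hard part in practice, is deliberately left out:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key and store it securely; losing it means losing the data
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the file locally, before it ever leaves the machine
with open("design-spec.pdf", "rb") as f:          # hypothetical IP document
    ciphertext = fernet.encrypt(f.read())

with open("design-spec.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Only the .enc file is uploaded. Even if cloud access is compromised, the
# attacker gets unreadable bytes without the decryption key.
```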

By implementing a data pipeline and prioritizing the optimization and reduction
of data volume before it reaches the SIEM, organizations can stay on budget and
still ensure that all necessary data can be thoroughly examined. Data pipelines
also lead to tangible reductions in both storage and processing expenses. ...
The decrease in the sheer volume of data the SIEM must handle can directly and
significantly reduce the total cost of SIEM operations. In addition to volume
reduction, data pipelines improve the quality of data delivered to SIEMs and
other tools — filtering out repetitive noise and enriching logs for faster
queries, increased relevance, and prioritization of the most critical security
events. Data pipelines also introduce efficiency by automating the collection,
processing, and routing of data. By reducing alert fatigue through intelligent
anomaly detection and prioritization, data pipelines can significantly speed up
incident resolution times. Beyond immediate threat detection and cost savings,
data pipelines also aid in maintaining compliance with privacy regulations like
GDPR, CCPA, and PCI DSS. They help provide clear data lineage, making it easier to
track the origin and transformations of data.
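
A pipeline stage of the kind described, filter then enrich then route, might
look like the following sketch. The event fields, noise list, and enrichment
table are invented for illustration:

```python
from typing import Optional

NOISY_EVENT_IDS = {"heartbeat", "config_poll"}   # assumed repetitive noise
ASSET_OWNERS = {"10.0.0.5": "payments-team"}     # assumed enrichment lookup

def process(event: dict) -> Optional[dict]:
    # 1. Filter: drop noise before it incurs SIEM ingest and storage cost
    if event.get("event_id") in NOISY_EVENT_IDS:
        return None
    # 2. Enrich: attach context so SIEM queries are faster and more relevant
    event["asset_owner"] = ASSET_OWNERS.get(event.get("src_ip"), "unknown")
    # 3. Route: critical events go to the SIEM, the rest to cheap archive storage
    event["destination"] = "siem" if event.get("severity", 0) >= 7 else "archive"
    return event

print(process({"event_id": "auth_fail", "src_ip": "10.0.0.5", "severity": 8}))
print(process({"event_id": "heartbeat"}))  # -> None, filtered out
```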

Data diversity refers to the variety and representation of different attributes,
groups, conditions, or contexts within a dataset. It ensures that the dataset
reflects the real-world variability in the population or phenomenon being
studied. The diversity of your data helps ensure that the insights, predictions,
and decisions derived from it are fair, accurate, and generalizable. ... Before
you start your data analysis, it’s important to understand what you want to do
with your data. A keen understanding of your use cases and data applications
can help you identify the gaps you need to fill and the hypotheses you need to
test. It also gives you a method for seeking out the data that fits your
specific use case. In the same way,
starting with a clear question provides direction, focus, and purpose to the
whole process of text data analysis. Without one, you’ll inevitably gather
irrelevant data, overlook key variables, or find yourself looking at a dataset
that’s irrelevant to what you actually want to know. ... When certain voices,
topics, or customer segments are over- or underrepresented in the data, models
trained on that data may produce skewed results: misunderstanding user needs,
overlooking key issues, or favoring one group over another. This can result in
poor customer experiences, ineffective personalization efforts, and biased
decision-making.
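
A simple first check on representation is to compare each segment's share of
the dataset against its share of the real population. A pandas sketch with
hypothetical segments and counts:

```python
import pandas as pd

# Hypothetical feedback dataset: heavily skewed toward enterprise customers
df = pd.DataFrame({
    "segment": ["enterprise"] * 80 + ["smb"] * 15 + ["consumer"] * 5,
})

# Share of each segment in the data
shares = df["segment"].value_counts(normalize=True)
print(shares)
# If consumers are, say, 40% of real users but only 5% of the data, a model
# trained on it will systematically underweight their needs.
```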