Quote for the day:
“Only those who dare to fail greatly can
ever achieve greatly.” -- Robert F.

Your first instinct may be to try to keep up with all your data, but this may be
a fool's errand. The key to success is to have classification capabilities
everywhere data moves, and rely on your DLP policy to jump in when risk arises.
Automation in data classification is becoming a lifesaver thanks to the power of
AI. AI-powered classification can be faster and more accurate than traditional
ways of classifying data with DLP. Ensure any solution you are evaluating can
use AI to instantly uncover and discover data without human input. ... Data loss
prevention (DLP) technology is the core of any data protection program. That
said, keep in mind that DLP is only a subset of a larger data protection
solution. DLP enables the classification of data (along with AI) to ensure you
can accurately find sensitive data. Ensure your DLP engine can consistently
alert correctly on the same piece of data across devices, networks, and clouds.
The best way to ensure this is to embrace a centralized DLP engine that can
cover all channels at once. Avoid point products that bring their own DLP
engine, as this can lead to multiple alerts on one piece of moving data, slowing
down incident management and response. Look to embrace Gartner's security
service edge approach, which delivers DLP from a centralized cloud
service.

From the start, Bain was crystal clear about its case for change, according to
Razdan. The company prioritized change management, which meant IT partnering
with finance; it also meant cultivating a mindset conducive to change. “We owned
the change; we identified a group of high performers within our finance and our
IT teams. This community of super-users could readily identify and deal with any
of the problems that typically arise in an implementation of this size and
scale,” Mackey said. “This was less just changing their technology; it’s
changing employee behaviors and setting us up for how we want to grow and change
processes going forward.” ... “We actually set up a program to be always
measuring the value,” Razdan said. “You have internal stakeholders, you have
external stakeholders, you have partnerships; we kind of built an ecosystem of
governance and partnership that enabled us to keep everybody on the same page
because transparency and communication is critical to success.” Gauging progress
via transparent key performance indicators was all the more impressive, given
that most of this happened during the worldwide, pandemic-driven move to remote
work. “We could assess the implementation, as we went through it, to keep us on
track [and] course correct,” Mackey said.

A significant finding was the non-deterministic nature of large language model
(LLM) security. Prompt injection attacks, a method where attackers manipulate
input to provoke undesired responses from AI systems, were found to succeed
unpredictably. An attack that fails 99 times could succeed on the 100th attempt
with identical input, due to the underlying randomness in LLM processing. The
study also revealed substantial risks of data leakage and adversarial
reconnaissance. Attackers using prompt injection can manipulate AI models to
disclose sensitive information or contextual details about the environment in
which the system operates, such as server types and network access
configurations. 'This challenge has given us unprecedented visibility into
real-world tactics attackers are using against AI applications today,' said
Oliver Friedrichs, Co-Founder and Chief Executive Officer of Pangea. 'The scale
and sophistication of attacks we observed reveal the vast and rapidly evolving
nature of AI security threats. Defending against these threats must be a core
consideration for security teams, not a checkbox or afterthought.' Findings
indicated that basic defences, such as native LLM guardrails, left organisations
particularly exposed.

Dynamic DNS (DDNS) services automatically update a domain name's DNS records in
real-time when the Internet service provider changes the IP address. Real-time
updating for DNS records wasn't needed in the early days of the Internet when
static IP addresses were the norm. ... It sounds simple enough, yet bad actors
have abused the services for years. More recently, though, cybersecurity vendors
have observed an increase in such activity, especially this year. The notorious
cybercriminal collective Scattered Spider, for instance, has turned to DDNS to
obfuscate its malicious activity and impersonate well-known brands in social
engineering attacks. This trend has some experts concerned about a rise in abuse
and a surge in "rentable" subdomains. ... In an example of an observed attack,
Scattered Spider actors established a new subdomain, klv1.it[.]com, designed to
impersonate a similar domain, klv1.io, for Klaviyo, a Boston-based marketing
automation company. Silent Push's report noted that the malicious domain had
just five detections on VirusTotal at the time of publication. The company also
said the use of publicly rentable subdomains presents challenges for security
researchers. "This has been something that a lot of threat actors do — they use
these services because they won't have domain registration fingerprints, and it
makes it harder to track them," says Zach Edwards, senior threat researcher at
Silent Push.

To ensure their deepfake attacks are convincing, malicious actors are
increasingly focusing on more believable delivery, enhanced methods, such as
phone number spoofing, SIM swapping, malicious recruitment accounts and
information-stealing malware. These methods allow actors to convincingly deliver
deepfakes and significantly increase a ploy’s overall credibility. ...
High-value deepfake targets, such as C-suite executives, key data custodians, or
other significant employees, often have moderate to high volumes of data
available publicly. In particular, employees appearing on podcasts, giving
interviews, attending conferences, or uploading videos expose significant
volumes of moderate- to high-quality data for use in deepfakes. This dictates
that understanding individual data exposure becomes a key part of accurately
assessing the overall enterprise risk of deepfakes. Furthermore, ACI research
indicates industries such as consulting, financial services, technology,
insurance and government often have sufficient publicly available data to enable
medium-to high-quality deepfakes. Ransomware groups are also continuously
leaking a high volume of enterprise data. This information can help fuel
deepfake content to “talk” about genuine internal documents, employee
relationships and other internal details.
/articles/complex-applications-storage-constraints/en/smallimage/compex-applications-storage-constraints-thumbnail-1747126977374.jpg)
Although we are here focusing on software, it is important to say that software
does not run in a vacuum. Having an understanding of the hardware our programs
run on and even how hardware is developed can offer important insights into how
to tackle programming challenges. In the software world, we have a more
iterative process, new features and fixes can usually be incorporated later in
the form of over-the-air updates, for example. That is not the case with
hardware. Design errors and faults in hardware can at the very best be mitigated
with considerable performance penalties. These errors can introduce the meltdown
and spectre vulnerabilities, or render the whole device unusable. Therefore the
hardware design phase has a much longer and rigorous process before release than
the software design phase. This rigorous process also impacts design decisions
in terms of optimizations and computational power. Once you define a layout and
bill of materials for your device, the expectation is to keep this constant for
production as long as possible in order to reduce costs. Embedded hardware
platforms are designed to be very cost-effective. Designing a product whose
specifications such as memory or I/O count are wasted also means a cost increase
in an industry where every cent in the bill of materials matters.
Proactive risk evaluation is a game-changer for SMBs seeking to maintain robust
insurance coverage. vCISOs conduct regular risk assessments to quantify an
organization’s security posture and benchmark it against industry standards.
This not only identifies areas for improvement but also helps maintain
compliance with evolving insurer expectations. Routine audits—led by vCISOs—keep
security controls effective and relevant. Third-party risk evaluations are
particularly valuable, given the rise in supply chain attacks. By ensuring
vendors meet security standards, SMBs reduce their overall risk profile and
strengthen their position during insurance applications and renewals. Employee
training programs also play a critical role. By educating staff on phishing,
social engineering, and other common threats, vCISOs help prevent incidents
before they occur. ... For SMBs, navigating the cyber insurance landscape is no
longer just a box-checking exercise. Insurers demand detailed evidence of
security measures, continuous improvement, and alignment with industry best
practices. vCISOs bring the technical expertise and strategic perspective
necessary to meet these demands while empowering SMBs to strengthen their
overall security posture.

Because AI introduces risks that traditional GRC frameworks may not fully
address, such as algorithmic bias and lack of transparency and accountability
for AI-driven decisions, an AI GRC framework helps organizations proactively
identify, assess, and mitigate these risks, says Heather Clauson Haughian,
co-founding partner at CM Law, who focuses on AI technology, data privacy, and
cybersecurity. “Other types of risks that an AI GRC framework can help mitigate
include things such as security vulnerabilities where AI systems can be
manipulated or exposed to data breaches, as well as operational failures when AI
errors lead to costly business disruptions or reputational harm,” Haughian says.
... Model governance and lifecycle management are also key components of an
effective AI GRC strategy, Haughian says. “This would cover the entire AI model
lifecycle, from data acquisition and model development to deployment,
monitoring, and retirement,” she says. This practice will help ensure AI models
are reliable, accurate, and consistently perform as expected, mitigating risks
associated with model drift or errors, Haughian says. ... Good policies balance
out the risks and opportunities that AI and other emerging technologies,
including those requiring massive data, can provide, Podnar says. “Most
organizations don’t document their deliberate boundaries via policy,” Podnar
says.

The best defense is a good offense, Thirmal says. Before sharing any sensitive
information, get the consultant to sign a non-disclosure agreement (NDA) and, if
needed, a non-compete agreement. "These legal documents set clear boundaries on
what can and can't do with your ideas." He also recommends retaining records --
meeting notes, emails, and timestamps -- to provide documented proof of when and
where the idea in question was discussed. ... If a consultant takes an idea and
commercializes it, or shares it with a competitor, it's time to consult legal
counsel, Paskalev says. The legal case's strength will hinge on the exact
wording within contracts and documentation. "Sometimes, a well-crafted
cease-and-desist letter is enough; other times, litigation is required." ... The
best way to protect ideas isn't through contracts -- it's by being proactive,
Thirmal advises. "Train your team to be careful about what they share, work with
consultants who have strong reputations, and document everything," he states.
"Protecting innovation isn’t just a legal issue -- it's a strategic one."
Innovation is an IT leader's greatest asset, but it's also highly vulnerable,
Paskalev says. "By proactively structuring consultant agreements, meticulously
documenting every stage of idea development, and being ready to enforce
protection, organizations can ensure their competitive edge."

One of the most overlooked challenges in leadership is the inability to step
back from the work and see the full picture. We become so immersed in the daily
fires, the high-stakes meetings, the make-or-break moments, that we lose the
ability to assess the battlefield objectively. The ocean, or any intense,
immersive activity, provides that critical reset. But stepping away isn't just
about swimming in the ocean. It's about breaking patterns. Leaders are often
stuck in cycles — endless meetings, fire drills, back-to-back calls. The
constant urgency can trick you into believing that everything is critical.
That's why you need moments that pull you out of the daily grind, forcing you to
reset before stepping back in. This is where intentional recovery becomes a
strategic advantage. Top-performing leaders across industries — from venture
capitalists to startup founders — intentionally carve out time for activities
that challenge them in different ways. ... The most effective leaders understand
that managing their energy is just as important as managing their time. When
energy levels dip, cognitive function suffers, and decision-making becomes less
strategic. That's why companies known for their progressive workplace cultures
integrate mindfulness practices, outdoor retreats and wellness programs — not as
perks, but as necessary investments in long-term performance.
No comments:
Post a Comment