Quote for the day:
"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson
What Do You Get When You Hire a Ransomware Negotiator?

Despite calls from law enforcement agencies and some lawmakers urging victims
not to make any ransom payment, the demand for experienced ransomware
negotiators remains high. The negotiators say they provide a valuable service,
even if the victim has no intention to pay. They bring skills into an incident
that aren't usually found in the executive suite - strategies for dealing with
criminals. ... Negotiation is more of a thinking game, in which you try to
outsmart the hackers to buy time and gain valuable insight, said Richard
Bird, a ransomware negotiator who draws many of his skills from his past stint
as a law enforcement crisis aversion expert - talking people out of attempting
suicide or negotiating with kidnappers for the release of hostages. "The
biggest difference is that when you are doing a face-to-face negotiation, you
can pick up lots of information from a person's non-verbal
communication, such as eye gestures and body movements, but when you are talking
to someone over email or messaging apps that can cause some issues - because
you have got to work out how the person might perceive your words," Bird said. One
advantage of online negotiation is that it gives the negotiator time to
reflect on what to tell the hackers.
Managing Data Security and Privacy Risks in Enterprise AI

While enterprise AI presents opportunities to achieve business goals in ways
not previously conceived, one should also understand and mitigate potential
risks associated with its development and use. Even AI tools designed with the
most robust security protocols may still present a multitude of risks. These
risks include intellectual property theft, privacy concerns when training data
and/or output data may contain personally identifiable information (PII) or
protected health information (PHI), and security vulnerabilities stemming from
data breaches and data tampering. ... Privacy and data security in the context
of AI are interdependent disciplines that often require simultaneous
consideration and action. To begin with, advanced enterprise AI tools are
trained on prodigious amounts of data processed using algorithms that should
be—but are not always—designed to comply with privacy and security laws and
regulations. ... Emerging laws and regulations related to AI are thematically
consistent in their emphasis on accountability, fairness, transparency,
accuracy, privacy, and security. These principles can serve as guideposts when
developing AI governance action plans that can make your organization more
resilient as advances in AI technology continue to outpace the law.
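The PII concern above is concrete enough to sketch. Below is a minimal illustration of pre-training data hygiene: scrubbing obvious PII patterns from text before it enters a training corpus or prompt pipeline. The regexes and the scrub_pii helper are hypothetical illustrations, not a production redaction system; real deployments use dedicated tooling with far broader pattern coverage.

```python
import re

# Illustrative patterns only; real PII/PHI detection needs much broader
# coverage (names, addresses, medical record numbers, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace recognizable PII with typed placeholders before the text
    reaches a training set or a model prompt."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
print(scrub_pii(record))
# Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
```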
Mastering Prompt Engineering with Functional Testing: A Systematic Guide to Reliable LLM Outputs

Creating efficient prompts for large language models often starts as a
simple task… but it doesn’t always stay that way. Initially, following basic
best practices seems sufficient: adopt the persona of a specialist, write
clear instructions, require a specific response format, and include a few
relevant examples. But as requirements multiply, contradictions emerge, and
even minor modifications can introduce unexpected failures. What was working
perfectly in one prompt version suddenly breaks in another. ... What might
seem like a minor modification can unexpectedly impact other aspects of a
prompt. This is not only true when adding a new rule but also when adding more
detail to an existing rule, like changing the order of the set of instructions
or even simply rewording it. These minor modifications can unintentionally
change the way the model interprets and prioritizes the set of instructions.
The more details you add to a prompt, the greater the risk of unintended side
effects. By trying to give too many details to every aspect of your task, you
increase as well the risk of getting unexpected or deformed results. It is,
therefore, essential to find the right balance between clarity and a high
level of specification to maximise the relevance and consistency of the
response.
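The functional-testing approach the article describes can be sketched in a few lines: treat each prompt version as a unit under test, run it against a fixed set of inputs with programmatic checks, and compare pass rates across versions before shipping a "minor" wording change. The call_llm function below is a hypothetical stand-in for whatever model API you use, and the test cases are illustrative.

```python
# Hypothetical stand-in for your model API (hosted or local).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

# Each test case: an input plus a programmatic check on the output.
TEST_CASES = [
    ("Summarize: The meeting is moved to 3pm Friday.",
     lambda out: "3pm" in out and "Friday" in out),
    ("Summarize: Revenue grew 12% year over year.",
     lambda out: "12%" in out),
]

def score_prompt(template: str, runs_per_case: int = 3) -> float:
    """Run every test case several times (LLM output is stochastic)
    and return the overall pass rate for this prompt version."""
    passed = total = 0
    for text, check in TEST_CASES:
        for _ in range(runs_per_case):
            out = call_llm(template.format(input=text))
            passed += bool(check(out))
            total += 1
    return passed / total

# Compare two prompt versions head to head on the same suite:
v1 = "You are a precise assistant. {input}"
v2 = "You are a precise assistant. Keep all numbers and dates. {input}"
# print(score_prompt(v1), score_prompt(v2))
```

A regression suite like this is what catches the failure mode the article warns about: a reworded instruction that silently drops the pass rate of a case that used to succeed.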
You need to prepare for post-quantum cryptography now. Here’s why

"In some respects, we're already too late," said Russ Housley, founder of Vigil
Security LLC, in a panel discussion at the conference. Housley and other
speakers at the conference brought up the lesson from the SHA-1 to SHA-2
hashing-algorithm transition, which began in 2005 and was supposed to take five
years but took about 12 years to complete, "and that was a fairly simple transition,"
Housley noted. In a different panel discussion, InfoSec Global Vice President of
Cryptographic Research & Development Vladimir Soukharev called the upcoming
move to post-quantum cryptography a "much more complicated transition than we've
ever seen in cryptographic history." ... The asymmetric algorithms that NIST is
phasing out are thought to be vulnerable to quantum attacks. The new ones that NIST is
introducing use even more complicated math that quantum computers probably can't
crack (yet). Today, an attacker could watch you log into Amazon and capture the
asymmetrically-encrypted exchange of the symmetric key that secures your
shopping session. But that would be pointless because the attacker couldn't
decrypt that key exchange. In five or 10 years, it'll be a different story. The
attacker will be able to decrypt the key exchange and then use that stolen key
to reveal your shopping session.
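The "harvest now, decrypt later" scenario described above is easy to model. In the toy sketch below (using the cryptography package), an eavesdropper records the public key shares and ciphertext today; if a future quantum computer can recover an X25519 private key from its recorded public key, the captured session decrypts retroactively. The key names and message are illustrative, and real TLS involves more machinery than this.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Today: an ephemeral key exchange secures a shopping session.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
shared = client_priv.exchange(server_priv.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"session").derive(shared)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"order #4242, card on file", None)

# Today: a passive attacker records everything visible on the wire.
recorded = {
    "client_pub": client_priv.public_key(),  # key shares cross the wire in clear
    "server_pub": server_priv.public_key(),
    "nonce": nonce,
    "ciphertext": ciphertext,
}

# Years later: if quantum attacks break X25519, the attacker derives a private
# key from a recorded public key, recomputes session_key, and reads the old
# traffic. Classical infeasibility of that step is all that protects the tape.
```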
Network Forensics: A Short Guide to Digital Evidence Recovery from Computer Networks

At a technical level, this discipline operates across multiple layers of the OSI
model. At the lower layers, it examines MAC addresses, VLAN tags, and frame
metadata, while at the network and transport layers, it analyses IP addresses,
routing information, port usage, and TCP/UDP session characteristics. ...
Network communications contain rich metadata in their headers—the “envelope”
information surrounding actual content. This includes IP headers with
source/destination addresses, fragmentation flags, and TTL values; TCP/UDP
headers containing port numbers, sequence numbers, window sizes, and flags; and
application protocol headers with HTTP methods, DNS query types, and SMTP
commands. This metadata remains valuable even when content is encrypted,
revealing communication patterns, timing relationships, and protocol behaviors.
... Encryption presents perhaps the most significant technical challenge for
modern network forensics, with over 95% of web traffic now encrypted using TLS.
Despite encryption, substantial metadata remains visible, including connection
details, TLS handshake parameters, certificate information, and packet sizing
and timing patterns. This observable data still provides significant forensic
value when properly analyzed.
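A minimal sketch of the header-metadata extraction described above, using the scapy library to pull IP and TCP fields from a capture file. The filename is a placeholder; the point is that every field read here stays visible even when the payload itself is TLS-encrypted.

```python
from scapy.all import rdpcap
from scapy.layers.inet import IP, TCP

packets = rdpcap("capture.pcap")  # placeholder path to an existing capture

for pkt in packets:
    if IP in pkt and TCP in pkt:
        ip, tcp = pkt[IP], pkt[TCP]
        # All of these header fields remain readable on encrypted traffic:
        print(f"{ip.src}:{tcp.sport} -> {ip.dst}:{tcp.dport} "
              f"ttl={ip.ttl} flags={tcp.flags} seq={tcp.seq} win={tcp.window}")
```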
Modernising Enterprise Architecture: Bridging Legacy Systems with Jargon

The growing gap between enterprise-wide architecture and the actual work being
done on the ground leads to manual processes and poor integration, and limits how
effectively teams can work across modern DevOps environments, ultimately
creating the next generation of rigid, hard-to-maintain systems and repeating the
mistakes of the past. ... Instead of treating enterprise architecture as a
walled-off function, Jargon enables continuous integration between high-level
architecture and real-world software design — bridging the gap between
enterprise-wide planning and hands-on development while automating validation
and collaboration. ... Jargon is already working with organisations to bridge
the gap between modern API-first design and legacy enterprise tooling, enabling
teams to modernise workflows without abandoning existing systems. While our
support for OpenAPI and JSON Schema is already in place, we’re planning to add
XMI support to bring Jargon’s benefits to a wider audience of enterprises who
use legacy architecture tools. By supporting XMI, Jargon will allow enterprises
to unlock their existing architecture investments while seamlessly integrating
API-driven workflows. This helps address the challenge of top-down governance
conflicting with bottom-up development needs, enabling smoother collaboration
across teams.
CAIOs are stepping out from the CIO’s shadow

The CAIO position is still finding its place in the org chart,
Fernández says, often assuming a medium-to-high level of responsibility,
reporting to the CDO and thus, in turn, to the CIO. “These positions that are
being created are very ‘business partner’ style,” he says, “to make these types
of products understood, what needs they have, and to carry them out.” Casado
adds: “For me, the CIO does not have such a ‘business case’ component — of
impact on the profit and loss account. The role of artificial intelligence is
very closely tied to generating efficiencies on an ongoing basis,” as well as
implying “continuous adoption.” “It is essential that there is this adoption and
that implies being very close to the people,” he says. ... Garnacho agrees,
stating that, in less mature AI development environments, the CIO can assume
CAIO functions. “But as the complexity and scope of AI grows, the specialization
of the CAIO makes the difference,” he says. This is because “although the CIO
plays a fundamental role in technological infrastructure and data management, AI
and its challenges require specific leadership. In our view, the CIO lays the
technological foundations, but it is the CAIO who drives the vision.” In this
emerging division of functions, other positions may be impacted by the emergence
of the AI chief.
Forget About Cloud Computing. On-Premises Is All the Rage Again

Cloud costs have a tendency to balloon over time: Storage costs per GB of data
might seem low, but when you’re dealing with terabytes of data—which even we as
a three-person startup are already doing—costs add up very quickly. Add to this
retrieval and egress fees, and you’re faced with a bill you cannot unsee. Steep
retrieval and egress fees only serve one thing: Cloud providers want to
incentivize you to keep as much data as possible on the platform, so they can
make money off every operation. If you download data from the cloud, it will
cost you inordinate amounts of money. Variable costs based on CPU and GPU usage
often spike during high-performance workloads. A report by CNCF found that
almost half of Kubernetes adopters had exceeded their budget as a
result. Kubernetes is open-source container orchestration software that is
often used for cloud deployments. The pay-per-use model of the cloud has its
advantages, but billing becomes unpredictable as a result. Costs can then
explode during usage spikes. Cloud add-ons for security, monitoring, and data
analytics also come at a premium, which often increases costs further. As a
result, many IT leaders have started migrating back to on-premises servers. A
2023 survey by Uptime found that 33% of respondents had repatriated at least
some production applications in the past year.
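To make the "costs add up quickly" point concrete, here is a back-of-envelope calculation. The rates are illustrative assumptions in the ballpark of published object-storage pricing, not a quote from any provider, and the data volume and egress fraction are made up for the example.

```python
# Illustrative rates (assumptions; check your provider's current price sheet).
STORAGE_PER_GB_MONTH = 0.023   # $/GB-month of object storage
EGRESS_PER_GB = 0.09           # $/GB downloaded out of the cloud

data_gb = 10_000               # 10 TB, plausible even for a small startup
egress_fraction = 0.30         # share of the dataset pulled out each month

storage = data_gb * STORAGE_PER_GB_MONTH            # $230.00/month
egress = data_gb * egress_fraction * EGRESS_PER_GB  # $270.00/month
print(f"storage ${storage:,.2f}/mo, egress ${egress:,.2f}/mo, "
      f"total ${storage + egress:,.2f}/mo")
# Egress alone can exceed the storage line item - the lock-in described above.
```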
IT leaders are driving a new cloud computing era

CIOs have become increasingly frustrated with vendor pricing models that lock
them into unpredictable and often unfavorable long-term commitments. Many find
that mounting operational costs frequently outweigh the promised savings from
cloud computing. It’s no wonder that leadership teams are beginning to shift
gears, discussing alternative solutions that might better serve their best
interests. ... Regional or sovereign clouds offer significant advantages,
including compliance with local data regulations that ensure data sovereignty
while meeting industry standards. They reduce latency by placing data centers
nearer to users, enhancing service performance. Security is also bolstered, as
these clouds can apply customized protection measures against specific threats.
Additionally, regional clouds provide customized services that cater to local
needs and industries and offer more responsive customer support than larger
global providers. ... The pushback against traditional cloud providers is not
driven only by unexpected costs; it also reflects enterprise demand for greater
autonomy, flexibility, and a skillfully managed approach to technology
infrastructure. Effectively navigating the complexities of cloud computing will
require organizations to reassess their dependencies and stay vigilant in
seeking solutions that align with their growth strategies.
How Intelligent Continuous Security Enables True End-to-End Security

Intelligent Continuous Security™ (ICS) is the next evolution, harnessing
AI-driven automation, real-time threat detection and continuous compliance
enforcement to eliminate these inefficiencies. ICS extends beyond DevSecOps to
also close security gaps with SecOps, ensuring end-to-end continuous security
across the entire software lifecycle. This article explores how ICS enables true
DevOps transformation by addressing the shortcomings of traditional security,
reducing friction across teams, and accelerating secure software
delivery. ... As indicated in the article The Next Generation of Security:
“The Future of Security is Continuous. Security isn’t a destination — it’s a
continuous process of learning, adapting and evolving. As threats become
smarter, faster, and more unpredictable, security must follow suit.” Traditional
security practices were designed for a slower, waterfall-style development
process. ... Intelligent Continuous Security (ICS) builds on DevSecOps
principles but goes further by embedding AI-driven security automation
throughout the SDLC. ICS creates a seamless security layer that integrates with
DevOps pipelines, reducing the friction that has long plagued DevSecOps
initiatives. ... ICS shifts security testing left by embedding automated
security checks at every stage of development.
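A minimal sketch of the shift-left idea in that last sentence: an automated gate that runs a static analysis scan on every build and fails the pipeline on high-severity findings. Bandit is used here as one concrete example of a Python SAST tool; the source path and the zero-tolerance threshold are assumptions you would tune per repository.

```python
import json
import subprocess
import sys

def security_gate(src_dir: str = "src", max_high: int = 0) -> None:
    """Run Bandit and fail the build if high-severity findings exceed
    the allowed threshold (assumed zero here)."""
    result = subprocess.run(
        ["bandit", "-r", src_dir, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    high = [i for i in report["results"] if i["issue_severity"] == "HIGH"]
    for issue in high:
        print(f'{issue["filename"]}:{issue["line_number"]} {issue["issue_text"]}')
    if len(high) > max_high:
        sys.exit(1)  # non-zero exit fails the CI stage, blocking the merge

if __name__ == "__main__":
    security_gate()
```

Wired into a pipeline, a gate like this runs on every commit, which is what turns security from a late-stage audit into the continuous process the article argues for.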