Quote for the day:
"How do you want your story to end? Begin with that end in mind." -- Elizabeth McCormick
Why Architecture Rots No Matter How Good Your Engineers Are
Every architect has seen it. The system starts clean. The design makes sense.
Code reviews are sharp. Engineers are solid. Yet six months later, performance
has slipped. A caching layer breaks quietly. Technical debt shows up despite
everyone’s best intentions. The question isn’t why this happens to bad teams.
The question is why it happens to good teams. ... Rot doesn’t usually come from
bad judgment. It comes from lost context. The information needed to prevent many
problems exists. It’s just scattered across too many files, too many people, and
too many moments in time. No single mind can hold it all. ... Human working
memory holds roughly four chunks of information at once. That isn’t a vibe. It’s
a constraint. And it matters more than we like to admit. When developers read
code, they’re juggling variable state, control flow, call chains, edge cases,
and intent. As the number of mental models increases, onboarding slows and
comprehension drops. Once cognitive load pushes beyond working memory capacity,
understanding doesn’t degrade linearly. It collapses. ... Standards drift
because good intentions don’t scale. The system allows degradation, and the
information needed to prevent it is often invisible at the moment decisions are
made. Architecture decision records are a good example. ADRs capture why you
chose one path over another. They preserve context. In practice, when a
developer is making a change, they rarely stop to consult ADRs.
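One way to make that context show up at the moment of change is to wire ADRs into the commit path. Here is a minimal sketch, assuming a hypothetical docs/adr/ directory whose files declare the paths they govern on a "Scope:" line; the convention, paths, and hook mechanics are illustrative, not a standard:

```python
#!/usr/bin/env python3
"""Hypothetical pre-commit hook: surface ADRs relevant to the files being changed.

Assumes a docs/adr/ directory where each ADR names the paths it governs on a
'Scope:' line, e.g. 'Scope: services/cache/'. Everything here is illustrative.
"""
import pathlib
import subprocess

ADR_DIR = pathlib.Path("docs/adr")

def staged_files() -> list[str]:
    # Ask git which files are staged in this commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def adr_scopes() -> dict[pathlib.Path, list[str]]:
    # Map each ADR file to the path prefixes declared on its 'Scope:' line.
    scopes = {}
    for adr in sorted(ADR_DIR.glob("*.md")):
        for line in adr.read_text().splitlines():
            if line.lower().startswith("scope:"):
                scopes[adr] = [p.strip() for p in line[6:].split(",") if p.strip()]
    return scopes

def main() -> None:
    changed = staged_files()
    for adr, prefixes in adr_scopes().items():
        if any(f.startswith(p) for f in changed for p in prefixes):
            print(f"Heads up: {adr} may constrain this change.")

if __name__ == "__main__":
    main()
```

The point is not the tooling itself but the direction: push the preserved context to the developer at decision time, instead of expecting the developer to go looking for it.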
Quantum Computing and Cybersecurity: The Way Forward for a Quantum-Safe Future
While the timeline for commercial production of a powerful quantum computer is uncertain, most industry insiders agree that it is only a matter of time. In its 2025 report, the Global Risk Institute posits a five- to ten-year timeframe for the development of Cryptographically Relevant Quantum Computers (CRQC). A quantum-powered adversary may decrypt traffic as it flows, impersonate endpoints or even intercept authentication credentials in transit. The foundational risk begins with intercepting VPN traffic around the world and compromising all HTTPS/SSL certificates. Beyond this, large, distributed Internet of Things (IoT) systems that rely on lightweight encryption would be compromised. Operational Technology (OT) and Industrial Control Systems (ICS) that cannot be upgraded swiftly are likely to be compromised too, jeopardizing vital sectors like healthcare, energy and transportation. Harvest-now, decrypt-later (HNDL) attacks pose a significant risk to long-lived, sensitive data in finance, healthcare, government and critical infrastructure. These sectors are especially vulnerable due to their extended confidentiality requirements, which in many cases will outlast the arrival of quantum computers. Enterprises ignoring this threat now risk future breaches and regulatory or reputational damage when adversaries deploy quantum decryption. The downstream effects of such breaches could be catastrophic not just to the organization, but to entire ecosystems.
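To make "quantum-safe" concrete, here is a minimal sketch of post-quantum key encapsulation, assuming the liboqs-python bindings (the oqs package) built with ML-KEM enabled; algorithm names vary across liboqs versions:

```python
# A minimal sketch of post-quantum key encapsulation using the liboqs-python
# bindings ("oqs" package). Assumes liboqs is installed with ML-KEM enabled;
# older builds expose the same scheme under the name "Kyber768".
import oqs

ALG = "ML-KEM-768"  # NIST-standardized KEM; an assumption about your build

# The receiver generates a keypair and publishes the public key.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates a shared secret against the public key.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, secret_sent = sender.encap_secret(public_key)

    # The receiver decapsulates the ciphertext to recover the same secret.
    secret_received = receiver.decap_secret(ciphertext)

assert secret_sent == secret_received  # both sides now share a symmetric key
```

Most real migrations run a KEM like this in hybrid mode alongside a classical exchange such as X25519, so security never drops below the status quo while the post-quantum schemes mature.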
Chewing through data access is key to AI adoption
That the generic nature of LLMs can be augmented with contextual data is a valuable answer to the bottleneck problem. But it presents another problem: data access. Contextual data might exist, but it is typically scattered across multiple systems and held in multiple, heterogeneous formats, all of which makes access difficult. Data silos, a perennial problem for analytics, have now become a critical roadblock to AI adoption and value realisation. Another problem comes from compliance
requirements. Many industries, organisations, and jurisdictions regulate how
data is accessed and moved. This is particularly true in industries like
financial services, healthcare, insurance, or government, but it is true to a
greater or lesser extent in all industries. ... Evans suggests that data
federation can provide access to the context needed to feed and augment the generic training data of models, and that this is likely the best approach organisations have for pursuing their AI goals while contending with data access bottlenecks. “Moving data by default is really something of a brute force
approach. It was needed during the heyday of the data warehouse, but
technologies like Apache Iceberg and Trino make data lakehouses built around
data federation more accessible than ever,” he said. “In the past, data
federation was slower than data centralisation. But in recent years, advances in
Massively Parallel Processing (MPP) mean that technologies built to take
advantage of federation, like Trino, are finally able to make the data
federation dream a reality.”
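As a rough illustration of what federation looks like in practice, here is a query through Trino's Python client that joins lakehouse and operational tables in place; the coordinator address and the catalog and table names (iceberg_lake, postgres_crm) are hypothetical:

```python
# A minimal sketch of a federated query via Trino's Python client ("trino"
# package). Catalog, schema, and table names are invented for illustration.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # assumed coordinator address
    port=8080,
    user="analyst",
)
cur = conn.cursor()

# One SQL statement joins an Iceberg lakehouse table against an operational
# Postgres table in place -- no data is copied into a warehouse first.
cur.execute("""
    SELECT c.customer_id, c.segment, SUM(o.amount) AS lifetime_value
    FROM iceberg_lake.sales.orders AS o
    JOIN postgres_crm.public.customers AS c
      ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.segment
""")
for row in cur.fetchall():
    print(row)
```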
CSO Barry Hensley on staying a step ahead of the cyber threat landscape
Times have changed as more organizations have either experienced a significant incident firsthand or have seen enough third- and fourth-party breach notifications to take up arms. All these events drive awareness and give credibility to the threats and associated risks. However, there is still a challenge in establishing an appropriate risk tolerance that drives the right investments in effective security controls, especially for budget-constrained organizations. ... We do see the evolution of third- and fourth-party risk management, especially in how we validate our security partners’ maturity and resilience. That evolution is partly driven by third and fourth parties swapping out their underlying technologies to reduce cost or increase efficiency, with customers having little to no understanding of the risks those changes might expose. So, for the security functions we’re going to provide internally, we’ll focus on the basics and do them well. With the controls and functions we outsource, we must reimagine not only how we verify our partner environments but also how we actively participate in improving their security programs as well as ours. ... Are we assessing the most relevant risks, rather than the risks of yesterday? And, because we can get so wrapped up in the playbook that we ran in our last organization, how do we ensure the current playbook is relevant to the organization at hand? An example would be how much time we spend on phishing training, which burdens our teammates with being the first line of defense, when we could instead leverage anomaly-based detection to automate the detection and response actions.
Dedicated Servers vs. Cloud: Which Is More Secure?
Because the resources under a dedicated server model are yours and yours alone, you won't have to worry about "noisy neighbor" interference or side-channel attacks originating from other tenants, which can be a real risk in cloud server management. With this physical exclusivity, dedicated servers are often attractive for high-risk, compliance-heavy workloads—for example, healthcare, financial services, or government systems. This isolation doesn't just provide a higher standard of performance; it also simplifies your servers' threat surface, since entire avenues for cyberattack are removed. ... Cloud servers, by comparison, always operate under a multi-tenant architecture. This means that virtual servers on shared hardware are separated by a hypervisor layer, which creates and manages multiple isolated operating systems on a single physical server. ... With dedicated servers, you'll have complete control over your operating systems, firewalls, access policies, and encryption. You'll also have the flexibility to set the patch schedule, firewall rules, monitoring tools, and segmentation strategies. ... Cloud servers, on the other hand, always rely on a shared responsibility model. Your vendor will secure the infrastructure, networking, and some parts of the stack. However, you'll still have to manage everything from the operating system (OS) upwards yourself.
How threat actors are really using AI
Are we getting to a point where hackers are going to use AI to slowly but surely
circumvent every defense we throw at them? Is this more a case of actors simply
using capabilities, as they have with past technical advances? Or is this entire
concern overblown, meaning the money in our wallets is perfectly safe ... if
only we could remember where we put the darned thing? ... While these early
examples stemmed from the spread of generative AI, the technology has been
sprinkled across attacks as early as 2018. TaskRabbit, the commoditized services
platform owned by Ikea, was the subject of a breach where AI was used to control
a massive botnet that performed a distributed denial-of-service (DDoS) attack on
its servers. The result? Names, passwords, and payment details of both clients
and ‘taskers’ were stolen in an attack that employed machine learning to make it
more efficient and ultimately effective than a simple automated script. ... The
picture isn't uniformly alarming, however, with Meyers suggesting less
sophisticated actors are actually using AI “to their detriment.” He pointed to a
group that created malware called Funk Walker using an adversarial LLM called
WormGPT. “There was broken cryptography in that, and the adversary left their
name in it,” he explained. “That's kind of on the lower end of the
sophistication spectrum.” The reality, then, is a split between highly capable
state actors leveraging AI for genuine operational advantages and less skilled
criminals whose efforts to get a leg up via AI assistance have the potential to
backfire through either technical failures or operational security mistakes that
make them that bit easier to track.
StrongestLayer: Top ‘Trusted’ Platforms are Key Attack Surfaces
Rather than relying on malware or obvious phishing techniques, today’s attackers
exploit trust, authentication gaps, and operational dependency. The report
provides rare visibility into the techniques that define modern email threats by
examining only attacks that incumbent security controls missed. “Email security
has reached an inflection point,” said Alan LeFort, CEO and co-founder,
StrongestLayer. “The controls enterprises depend on were designed to detect
patterns and known bad signals. But attackers are now exploiting trusted brands
and legitimate infrastructure, areas that those systems were never built to
reason about.” ... The report argues that attackers are no longer trying to look
legitimate – they are hiding behind platforms that already are. DocuSign alone
accounted for more than one-fifth of all attacks analyzed, particularly
targeting legal, financial and healthcare organizations where document-signing
workflows are deeply embedded in daily operations. Google Calendar attacks
represent an especially concerning trend. Because calendar invitations are
delivered via calendar APIs rather than email, these attacks bypass secure email
gateways entirely, creating a blind spot for most security teams. ...
StrongestLayer’s analysis shows AI-assisted phishing has fundamentally changed
the economics of detection. Traditional phishing campaigns reuse templates with
high similarity, allowing pattern-based systems to work.
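A toy example of why template reuse is what lets pattern-based systems work: near-duplicate detection via Jaccard similarity over word shingles. Real gateways use far richer signals, and the lure texts below are invented:

```python
# Toy near-duplicate detection: Jaccard similarity over k-word shingles.
def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

known_template = "your account has been suspended click here to verify your details"
reused = "your account has been suspended click below to verify your details now"
llm_rewrite = "we noticed unusual activity and paused access please confirm ownership"

# A reused template overlaps heavily with the known one; an LLM rewrite of
# the same lure shares no shingles, so signature-style matching never fires.
print(jaccard(shingles(known_template), shingles(reused)))       # well above 0
print(jaccard(shingles(known_template), shingles(llm_rewrite)))  # 0.0
```

When every campaign email is a fresh generation rather than a template instance, the similarity signal that detection economics relied on simply disappears.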
Enterprises are measuring the wrong part of RAG
Across enterprise deployments, the recurring pattern is that freshness
failures rarely come from embedding quality; they emerge when source systems
change continuously while indexing and embedding pipelines update
asynchronously, leaving retrieval consumers unknowingly operating on stale
context. ... In retrieval-centric architectures, governance must operate
at semantic boundaries rather than only at storage or API layers. This
requires policy enforcement tied to queries, embeddings and downstream
consumers — not just datasets. ... In production environments, evaluation
tends to break once retrieval becomes autonomous rather than human-triggered.
Teams continue to score answer quality on sampled prompts, but lack visibility
into what was retrieved, what was missed or whether stale or unauthorized
context influenced decisions. As retrieval pathways evolve dynamically in
production, silent drift accumulates upstream, and by the time issues surface,
failures are often misattributed to model behavior rather than the retrieval
system itself. Evaluation that ignores retrieval behavior leaves organizations
blind to the true causes of system failure. ... Retrieval is no longer a
supporting feature of enterprise AI systems. It is infrastructure. Freshness,
governance and evaluation are not optional optimizations; they are
prerequisites for deploying AI systems that operate reliably in real-world environments.
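Here is a minimal sketch of what retrieval-side observability could look like; nothing in it is a vendor API, and the freshness budget, field names, and log format are all assumptions:

```python
# A minimal sketch of retrieval-side observability: every chunk carries
# provenance, staleness is checked against an assumed freshness budget, and
# each retrieval is logged so failures can be traced to the retrieval layer
# rather than misattributed to the model.
import json
import time
from dataclasses import dataclass

FRESHNESS_BUDGET_S = 6 * 3600  # assumed SLA: sources re-indexed within 6 hours

@dataclass
class Chunk:
    doc_id: str
    text: str
    indexed_at: float         # when the embedding pipeline last saw the source
    source_updated_at: float  # when the source system last changed

def is_stale(chunk: Chunk) -> bool:
    # Stale if the source changed after indexing, or the index outran the budget.
    return (chunk.source_updated_at > chunk.indexed_at
            or time.time() - chunk.indexed_at > FRESHNESS_BUDGET_S)

def retrieve_with_audit(query: str, search_fn, log_path: str = "retrieval.log"):
    """Wrap an existing search function with staleness checks and an audit log."""
    chunks = search_fn(query)  # your existing vector or hybrid search
    stale = [c.doc_id for c in chunks if is_stale(c)]
    with open(log_path, "a") as log:
        # One JSON record per retrieval: what was asked, served, and stale.
        log.write(json.dumps({
            "ts": time.time(),
            "query": query,
            "retrieved": [c.doc_id for c in chunks],
            "stale": stale,
        }) + "\n")
    # Surface staleness to the caller instead of silently passing it downstream.
    return [c for c in chunks if c.doc_id not in stale], stale
```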
"Data privacy is no longer a cybersecurity business control or a risk
mitigation compliance checkbox. It reflects how deeply interconnected the
modern world has become between businesses, governments, travellers, and
citizens. Every interaction, financial transaction, remote authentication, and
geolocation ping generates personal data. That data moves across borders,
clouds, applications, partners, and marketing algorithms at machine speed, reaching far more data-broker destinations than most individuals realise. As a result, personal data privacy is harder to achieve than at any point in history, not because of negligence, but because of scale, dependency, design, and business models designed to monetise the information itself," said Haber ...
Bluntly, we have an unusual challenge. Data privacy strategies have not
evolved at the same pace as data creation and monetised analytics.
Organisations still focus on cyber security defences while data flows freely
through APIs, SaaS platforms, AI models, and third-party ecosystems. True
personal data privacy requires visibility into all of this data, with control assigned to the individual user rather than to the business or government entity by way of regulation. Without the user knowing who and what is accessing their data, why it is being accessed, and how long it will be archived, data privacy will remain an abstract concept, with individuals only loosely able to opt out of data storage and profiling.
Why workers are losing confidence in AI - and what businesses can do about it
While platforms like Claude Code are saving software developers at REACHUM
significant time, not everything is as effective. Tinfow sees a disparity
between how some AI tools are marketed and what they can actually do. Even
working at a company built around AI, Tinfow's team has run into issues with
tasks like text generation in images, where certain AI tools just didn't
deliver. "There's so much noise, and I don't want our team to get distracted
by that, so I'm the one who will take a look at something, decide whether it
is reasonable or garbage, and then give it to the team to work with," Tinfow
said. ... "If you're now starting to look at how you can use AI for the same
task, you all of a sudden have to put a lot more mental effort into trying to
figure out how to do this in a completely different way,” Ginn said. “That
loss of the routine, the confidence of how I'm doing it, that can also just go
back to the human nature to avoid change." Additionally, Stefan discussed the
role adequate training plays in maintaining confidence. ... Back at the
digital marketing agency Candour, Farrar said the company has a variety of
tactics to help balance the quest for innovation with the day-to-day
challenges of a technology that still has a way to go. Candour builds in extra
time to account for the fact that everyone is learning, frames experiments as
"test and learn" to mitigate stress, and has appointed a "champion" to stay
abreast of developments in AI.