Quote for the day:
"The struggle you're in today is developing the strength you need for tomorrow." -- Elizabeth McCormick
A deep technical dive into going fully passwordless in hybrid enterprise environments
Before we can talk about passwordless authentication, we need to address what
I call the “prerequisite triangle”: cloud Kerberos trust, device registration
and Conditional Access policies. Skip any one of these, and your migration
will stall before it gains momentum. ... Once your prerequisites are in place,
you face critical architectural decisions that will shape your deployment for
years to come. The primary decision point is whether to use Windows Hello for
Business, FIDO2 security keys or phone sign-in as your primary authentication
mechanism. ... The architectural decision also includes determining how you
handle legacy applications that still require passwords. Your options are
limited: implement a passwordless-compatible application gateway, deprecate
the application entirely or use Entra ID’s smart lockout and password
protection features to reduce risk while you transition. ... Start with a
pilot group — I recommend between 50 and 200 users who are willing to accept
some friction in exchange for security improvements. This group should include
IT staff and security-conscious users who can provide meaningful feedback
without becoming frustrated with early-stage issues. ... Recovery mechanisms
deserve special attention. What happens when a user’s device is stolen? What
if the TPM fails? What if they forget their PIN and can’t reach your
self-service portal? Document these scenarios and test them with your help
desk before full rollout.
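To make the Conditional Access leg of the prerequisite triangle easier to audit, a minimal sketch along these lines can enumerate existing policies and show which ones already enforce an authentication strength. It assumes a Microsoft Graph access token with Policy.Read.All; the endpoint and field names reflect the current v1.0 conditional access schema, so verify them against your own tenant.

    # Minimal sketch: list Conditional Access policies via Microsoft Graph and
    # flag which ones reference an authentication strength (e.g. phishing-
    # resistant MFA). Assumes a token with Policy.Read.All.
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def list_ca_policies(token: str):
        resp = requests.get(
            f"{GRAPH}/identity/conditionalAccess/policies",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("value", [])

    def audit_auth_strength(policies):
        for p in policies:
            grant = p.get("grantControls") or {}
            strength = (grant.get("authenticationStrength") or {}).get("displayName")
            print(f"{p['displayName']:40} state={p['state']:10} strength={strength}")

    # audit_auth_strength(list_ca_policies(access_token))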
When Cloud Outages Ripple Across the Internet
For consumers, these outages are often experienced as an inconvenience, such
as being unable to order food, stream content, or access online services. For
businesses, however, the impact is far more severe. When an airline’s booking
system goes offline, lost availability translates directly into lost revenue,
reputational damage, and operational disruption. These incidents highlight
that cloud outages affect far more than compute or networking. One of the most
critical and impactful areas is identity. When authentication and
authorization are disrupted, the result is not just downtime; it is a core
operational and security incident. ... Cloud providers are not identity
systems. But modern identity architectures are deeply dependent on
cloud-hosted infrastructure and shared services. Even when an authentication
service itself remains functional, failures elsewhere in the dependency chain
can render identity flows unusable. ... High availability is widely
implemented and absolutely necessary, but it is often insufficient for
identity systems. Most high-availability designs focus on regional failover: a
primary deployment in one region with a secondary in another. If one region
fails, traffic shifts to the backup. This approach breaks down when failures
affect shared or global services. If identity systems in multiple regions
depend on the same cloud control plane, DNS provider, or managed database
service, regional failover provides little protection. In these scenarios, the
backup system fails for the same reasons as the primary.
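One way to surface this class of risk is to model each region's identity stack as a set of dependencies and check whether the primary and the backup actually diverge. The sketch below is illustrative only; the service names are hypothetical placeholders.

    # Illustrative check: regional failover only helps where the backup region
    # does not share the dependencies that could take the primary down.
    PRIMARY = {"idp": "eastus", "dns": "global-dns-provider",
               "db": "managed-db-global", "control_plane": "cloud-control-plane"}
    SECONDARY = {"idp": "westus", "dns": "global-dns-provider",
                 "db": "managed-db-global", "control_plane": "cloud-control-plane"}

    shared = {k: v for k, v in PRIMARY.items() if SECONDARY.get(k) == v}
    independent = {k for k in PRIMARY if k not in shared}

    print("Shared single points of failure:", shared)   # failover gives no protection here
    print("Genuinely redundant layers:", independent)   # failover helps only for these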
The Art of Lean Governance: Elevating Reconciliation to Primary Control for Data Risk
In today's environment of continuous data ecosystems, governance
based on periodic inspection is misaligned with how data risk emerges. The
central question for boards, regulators, auditors, and risk committees has
shifted: Can the institution demonstrate at the moment data is used that it is
accurate, complete, and controlled? Lean governance answers this question by
elevating data reconciliation from a back-office cleanup activity to the
primary control mechanism for data risk reduction. ... Data profiling can tell
you that a value looks unusual within one system. It cannot tell you whether
that value aligns with upstream sources, downstream consumers, or parallel
representations elsewhere in the enterprise. ... Lean governance
reframes governance as a continual process-control discipline rather than a
documentation exercise. It borrows from established control theory: Quality is
achieved by controlling the process, not by inspecting outputs after failures.
Three principles define this approach: Data risk emerges continuously, not
periodically; Controls must operate at the same cadence as data movement;
and Reconciliation is the control that proves process integrity. ...
Data profiling is inherently inward-looking. It evaluates distributions,
ranges, patterns, and anomalies within a single dataset. This is useful for
hygiene, but insufficient for assessing risk. Reconciliation is inherently
relational. It validates consistency between systems, across transformations,
and through the lifecycle of data.
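A minimal sketch of that relational check, with hypothetical table and column names: compare an upstream extract with its downstream representation on membership and a per-key content hash, rather than profiling either dataset in isolation.

    # Minimal reconciliation sketch: compare upstream and downstream copies of
    # the same records on key membership and a per-key content hash.
    import hashlib

    def row_hash(row: dict) -> str:
        return hashlib.sha256(
            "|".join(str(row[k]) for k in sorted(row)).encode()
        ).hexdigest()

    def reconcile(upstream: list[dict], downstream: list[dict], key: str) -> dict:
        up = {r[key]: row_hash(r) for r in upstream}
        down = {r[key]: row_hash(r) for r in downstream}
        return {
            "missing": up.keys() - down.keys(),      # lost in movement
            "extra": down.keys() - up.keys(),        # no upstream source
            "mismatched": {k for k in up.keys() & down.keys() if up[k] != down[k]},
        }

    # result = reconcile(source_rows, warehouse_rows, key="account_id")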
Working with Code Assistants: The Skeleton Architecture
Critical non-functional requirements -- such as security, scalability,
performance, and authentication -- are system-wide invariants that cannot be
fragmented. If every vertical slice is tasked with implementing its own
authorization stack or caching strategy, the result is "Governance Drift":
inconsistent security postures and massive code redundancy. This necessitates
a new unifying concept: The Skeleton and The Tissue. ... The Stable Skeleton
represents the rigid, immutable structures (Abstract Base Classes, Interfaces,
Security Contexts) defined by the human, although possibly built by the AI.
The Vertical Tissue consists of the isolated, implementation-heavy features
(Concrete Classes, Business Logic) generated by the AI. This architecture
draws on two classical approaches: actor models and object-oriented inversion
of control. It is no surprise that some of the world's most reliable software
is written in Erlang, which utilizes actor models to maintain system
stability. Similarly, in inversion-of-control structures, the interaction
between slices is managed by abstract base classes, ensuring that concrete
implementation classes depend on stable abstractions rather than the other
way around. ... Prompts are soft; architecture is hard. Consequently, the
developer must monitor the agent with extreme vigilance. ... To make the
"Director" role scalable, we must establish "Hard Guardrails" -- constraints
baked into the system that are physically difficult for the AI to bypass.
These act as the immutable laws of the application.
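A minimal Python sketch of the idea, with hypothetical names: the human-owned skeleton fixes the security context and the contract every vertical slice must satisfy, and an AI-generated slice can only plug in behind the guardrail, not around it.

    # Sketch of the Skeleton/Tissue split (names are hypothetical).
    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SecurityContext:          # skeleton: immutable, defined by the human
        user_id: str
        roles: frozenset[str]

    class FeatureSlice(ABC):        # skeleton: contract for every vertical slice
        required_role: str = "user"

        def execute(self, ctx: SecurityContext, payload: dict) -> dict:
            if self.required_role not in ctx.roles:      # hard guardrail
                raise PermissionError(f"{ctx.user_id} lacks role {self.required_role}")
            return self.handle(payload)

        @abstractmethod
        def handle(self, payload: dict) -> dict: ...

    class InvoiceExport(FeatureSlice):   # tissue: AI-generated business logic
        required_role = "billing"

        def handle(self, payload: dict) -> dict:
            return {"status": "exported", "invoice_id": payload["invoice_id"]}

Because the authorization check lives in the non-overridable execute path of the skeleton, a generated slice that forgets (or tries to skip) the check still runs through it, which is the point of treating the abstraction as the stable element.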
8-Minute Access: AI Accelerates Breach of AWS Environment
A threat actor gained initial access to the environment via credentials
discovered in public Simple Storage Service (S3) buckets and then quickly
escalated privileges during the attack, which moved laterally across 19 unique
AWS principals, the Sysdig Threat Research Team (TRT) revealed in a report
published Tuesday. ... While the speed and apparent use of AI were among the
most notable aspects of the attack, the researchers also called out the way that
the attacker accessed exposed credentials as a cautionary tale for organizations
with cloud environments. Indeed, stolen credentials are often an attacker's
initial access point to attack a cloud environment. "Leaving access keys in
public buckets is a huge mistake," the researchers wrote. "Organizations should
prefer IAM roles instead, which use temporary credentials. If they really want
to leverage IAM users with long-term credentials, they should secure them and
implement a periodic rotation." Moreover, the affected S3 buckets were named
using common AI tool naming conventions, they noted. The attackers actively
searched for these conventions during reconnaissance, enabling them to find the
credentials quite easily, they said. ... During this privilege-escalation part
of the attack — which took a mere eight minutes — the actor wrote code in
Serbian, suggesting their origin. Moreover, the use of comments, comprehensive
exception handling, and the speed at which the script was written "strongly
suggests LLM generation," the researchers wrote.
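The researchers' recommendation to prefer IAM roles with temporary credentials over long-term keys looks roughly like this in practice; a hedged boto3 sketch, assuming a role already exists and the caller is permitted to assume it (the role ARN below is a placeholder).

    # Favor short-lived role credentials over long-term IAM user keys.
    import boto3

    def short_lived_session(role_arn: str, session_name: str = "app-session"):
        creds = boto3.client("sts").assume_role(
            RoleArn=role_arn,
            RoleSessionName=session_name,
            DurationSeconds=3600,          # credentials expire on their own
        )["Credentials"]
        return boto3.Session(
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )

    # s3 = short_lived_session("arn:aws:iam::123456789012:role/app-role").client("s3")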
Ask the Experts: The cloud cost reckoning
According to the 2025 Azul CIO Cloud Trends Survey & Report, 83% of the 300
CIOs surveyed are spending an average of 30% more than what they had anticipated
for cloud infrastructure and applications; 43% said their CEOs or boards of
directors had concerns about cloud spend. Moreover, 13% of surveyed CIOs said
their infrastructure and application costs increased with their cloud
deployments, and 7% said they saw no savings at all. Other surveys show CIOs are
rethinking their cloud strategies, with "repatriation" -- moving workloads from
the cloud back to on-premises -- emerging as a viable option due to mounting
costs. ... "At Laserfiche we still have a hybrid environment. So we still have a
colocation facility, where we house a lot of our compute equipment. And of
course, because of that, we need a DR site because you never want to put all
your eggs in that one colo. We also have a lot of SaaS services. We're in a
hyperscaler environment for Laserfiche cloud. "But the reason why we do both is
because it actually costs us less money to run our own compute in a data center
colo environment than it does to be all in on cloud." ... "The primary reason
why the [cloud] costs have been increasing is because our use of cloud services
has become much more sophisticated and much more integrated. "But another reason
cloud consumption has increased is we're not as diligent in managing our cloud
resources in provisioning and maintaining."
NIST develops playbook for online use cases of digital credentials in financial services
The objective is to develop what a panel description calls a “playbook of
standards and best practices that all parties can use to set a high bar for
privacy and security.” “We really wanted to be able to understand, what does it
actually take for an organization to implement this stuff? How does it fit into
workflows? And then start to think as well about what are the benefits to these
organizations and to individuals.” “The question became, what was the best
online use case?” Galuzzo says. “At which point our colleagues in Treasury kind
of said, hey, our online banking customer identification program, how do we make
that both more usable and more secure at the same time? And it seemed like a
really nice fit. So that brought us to both the kind of scope of what we’re
focused on, those online components, and the specific use case of financial
services as well.” ... The model, he says, “should allow you to engage remotely,
to not have to worry about showing up in person to your closest branch, should
allow for a reduction in human error from our side and should allow for
reduction in fraud and concern over forged documents.” It should also serve to
fulfil the bank’s KYC and related compliance requirements. Beyond the bank, the
major objective with mDLs remains getting people to use them. The AAMVA’s Maru
points to his agency’s digital trust service, and to its efforts in outreach and
education – which are as important in driving adoption as anything on the
technical side.
Designing for the unknown: How flexibility is reshaping data center design
Rapid advances in compute architectures – particularly GPUs and AI-oriented
systems – are compressing technology cycles faster than many design and delivery
processes can respond. In response, flexibility has shifted from a desirable
feature to the core principle of successful data center design. This evolution
is reshaping how we think about structure, power distribution, equipment
procurement, spatial layout, and long-term operability. ... From a design
perspective, this means planning for change across several layers: Structural
systems that can accommodate higher equipment loads without
reinforcement; Spatial layouts that allow reconfiguration of white space
and service zones; and Distribution pathways that support future
modifications without disrupting live operations. The objective is not to
overbuild for every possible scenario, but to provide a framework that can
absorb change efficiently and economically. ... Another emerging challenge is
equipment lead time. While delivery periods vary by system, generators can now
carry lead times approaching 12 months, particularly for higher capacities,
while other major infrastructure components – including transformers, UPS
modules, and switchgear – typically fall within the 30- to 40-week range. Delays
in securing these items can introduce significant risk when procurement
decisions are deferred until late in the design cycle.
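A simple back-scheduling check makes that procurement risk concrete: given a required-on-site date and the quoted lead times, the order-by dates for long-lead items often land much earlier in the design cycle than teams expect. The values below are illustrative, drawn from the ranges above.

    # Back-schedule order-by dates from a required-on-site date (illustrative).
    from datetime import date, timedelta

    required_on_site = date(2027, 6, 1)
    lead_times_weeks = {"generator": 52, "transformer": 40, "UPS module": 36, "switchgear": 32}

    for item, weeks in sorted(lead_times_weeks.items(), key=lambda kv: -kv[1]):
        order_by = required_on_site - timedelta(weeks=weeks)
        print(f"{item:12} lead time {weeks:2d} wks -> order by {order_by}")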
Onboarding new AI hires calls for context engineering - here's your 3-step action plan
In the AI world, institutional knowledge is called context. AI agents are
the new rockstar employees. You can onboard them in minutes, not months. And the
more context that you can provide them with, the better they can perform. Now,
when you hear reports that AI agents perform better when they have accurate
data, think more broadly than customer data. The data that AI needs to do the
job effectively also includes the data that describes the institutional
knowledge: context. ... Your employees are good at interpreting it and filling
in the gaps using their judgment and applying institutional knowledge. AI agents
can now parse unstructured data, but are not as good at applying judgment when
there are conflicts, nuances, ambiguity, or omissions. This is why we get
hallucinations. ... The process maps provide visibility into manual activities
between applications or within applications. The accuracy and completeness of
the documented process diagrams vary wildly. Front-office processes are
generally very poor. Back-office processes in regulated industries are typically
very good. And to exploit the power of AI agents, organizations need to
streamline and optimize their business processes. This has sparked a
process reengineering revolution that mirrors the one in the 1990s. This time
around, the level of detail required by AI agents is higher than for humans.
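A context-engineering pass can be as simple as assembling the relevant institutional knowledge alongside the task before the agent ever sees it. A rough sketch follows; the document names and the naive keyword retrieval are hypothetical placeholders for whatever knowledge store and retrieval method an organization actually uses.

    # Rough sketch of a context-assembly step for an agent task.
    def build_context(task: str, process_maps: dict[str, str], policies: dict[str, str]) -> str:
        keywords = set(task.lower().split())
        relevant = [
            f"[{name}]\n{text}"
            for name, text in process_maps.items()
            if keywords & set(text.lower().split())      # crude relevance filter
        ]
        parts = [
            "TASK:\n" + task,
            "PROCESS MAPS:\n" + ("\n\n".join(relevant) or "(none found -- flag for human review)"),
            "POLICIES:\n" + "\n\n".join(policies.values()),
        ]
        return "\n\n".join(parts)

    # prompt_context = build_context(
    #     "Reconcile supplier invoices over $10k",
    #     process_maps={"ap_invoice_handling": "..."},
    #     policies={"approval_thresholds": "..."},
    # )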