Quote for the day:
"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera
Data 2026 outlook: The rise of semantic spheres of influence
While data started garnering attention last year, AI and agents continued to
suck up the oxygen. Why the urgency around agents? Maybe it’s “fear of missing out.”
Or maybe there’s a more rational explanation. According to Amazon Web Services
Inc. CEO Matt Garman, agents are the technology that will finally make AI
investments pay off. Go to the 12-minute mark in his recent AWS re:Invent
conference keynote, and you’ll hear him say just that. But are agents yet ready
for prime time? ... And of course, no discussion of agentic interaction
with databases is complete without mention of Model Context Protocol. The
open-source MCP framework, which Anthropic PBC recently donated to the Linux
Foundation, came out of nowhere over the past year to become the de facto
standard for how AI models connect with data. ... There were early advances in
extending governance to unstructured data, primarily documents. IBM
watsonx.governance introduced a capability for curating unstructured data that
transforms documents and enriches them by assigning classifications, data
classes and business terms to prepare them for retrieval-augmented generation,
or RAG. ... But for most organizations lacking deep skills or rigorous
enterprise architecture practices, the starting point for defining semantics is
going straight to the sources: enterprise applications or, alternatively, the
newer breed of data catalogs that are branching out from their original
missions of locating data and/or providing the points of enforcement for data
governance. In most organizations, the solution is not going to be either-or.
Engineering Speed at Scale — Architectural Lessons from Sub-100-ms APIs
Speed shapes perception long before it shapes metrics. Users don’t measure
latency with stopwatches - they feel it. The difference between a 120 ms
checkout step and an 80 ms one is invisible to the naked eye, yet emotionally
it becomes the difference between "smooth" and "slightly annoying". ... In
high-throughput platforms, latency amplifies. If a service adds 30 ms in
normal conditions, it might add 60 ms during peak load, then 120 ms when a
downstream dependency wobbles. Latency doesn’t degrade gracefully; it
compounds. ... A helpful way to see this is through a "latency budget".
Instead of thinking about performance as a single number - say, "API must
respond in under 100 ms" - modern teams break it down across the entire
request path: 10 ms at the edge; 5 ms for routing; 30 ms for application
logic; 40 ms for data access; and 10–15 ms for network hops and
jitter. Each layer is allocated a slice of the total budget. This
transforms latency from an abstract target into a concrete architectural
constraint. Suddenly, trade-offs become clearer: "If we add feature X in the
service layer, what do we remove or optimize so we don’t blow the budget?"
These conversations - technical, cultural, and organizational - are where fast
systems are born. ... Engineering for low latency is really engineering for
predictability. Fast systems aren’t built through micro-optimizations -
they’re built through a series of deliberate, layered decisions that minimize
uncertainty and keep tail latency under control.
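To make the budget concrete, here is a minimal sketch in Python of a per-layer
check, assuming the example allocations above; the layer names and the measured
timings passed in are hypothetical (in practice they would come from tracing
spans):

```python
# Per-layer latency budget check. Allocations mirror the example budget in
# the text; layer names and measured inputs are illustrative assumptions.
BUDGET_MS = {
    "edge": 10,
    "routing": 5,
    "application": 30,
    "data_access": 40,
    "network_jitter": 15,  # upper end of the 10-15 ms allowance
}

def check_budget(measured_ms: dict[str, float]) -> list[str]:
    """Return a warning for every layer that exceeds its slice of the budget."""
    warnings = []
    for layer, budget in BUDGET_MS.items():
        spent = measured_ms.get(layer, 0.0)
        if spent > budget:
            warnings.append(f"{layer}: {spent:.1f} ms spent vs {budget} ms budgeted")
    total = sum(measured_ms.values())
    if total > sum(BUDGET_MS.values()):
        warnings.append(f"total: {total:.1f} ms exceeds {sum(BUDGET_MS.values())} ms budget")
    return warnings

# Example: data access runs hot while a downstream dependency wobbles.
print(check_budget({"edge": 9, "routing": 4, "application": 28,
                    "data_access": 55, "network_jitter": 12}))
```

A check like this can run against recorded traces in CI, turning the budget
from an abstract target into a regression gate.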
Everything you need to know about FLOPs
A FLOP is a single floating‑point operation, meaning one arithmetic
calculation (add, subtract, multiply, or divide) on numbers that have
decimals. Compute benchmarking is done in floating point/fractional rather
than integer/whole numbers because floating point is a far more accurate
measure than integers. A prefix is added to FLOPs to measure how many are
performed in a second, starting with mega- (millions), then giga- (billions),
tera- (trillions), peta- (quadrillions), and now exaFLOPs (quintillions). ...
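As a quick illustration of that prefix scale, here is a small Python sketch
that converts a raw operations-per-second figure into the largest fitting
prefix; the example figures are hypothetical:

```python
# Prefix scale for FLOPS, from megaFLOPS (millions) up to exaFLOPS
# (quintillions), as described in the text.
PREFIXES = [
    ("exaFLOPS", 1e18),   # quintillions
    ("petaFLOPS", 1e15),  # quadrillions
    ("teraFLOPS", 1e12),  # trillions
    ("gigaFLOPS", 1e9),   # billions
    ("megaFLOPS", 1e6),   # millions
]

def humanize_flops(ops_per_second: float) -> str:
    """Express a raw operations-per-second count with the largest fitting prefix."""
    for name, scale in PREFIXES:
        if ops_per_second >= scale:
            return f"{ops_per_second / scale:.2f} {name}"
    return f"{ops_per_second:.0f} FLOPS"

print(humanize_flops(1.2e18))  # 1.20 exaFLOPS
print(humanize_flops(3.5e14))  # 350.00 teraFLOPS
```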
Floating point in computing starts at FP4, or 4 bits of floating point, and
doubles all the way to FP64. An FP128 format also exists, but it is essentially
never used as a benchmark measure. FP64 is also referred to as double-precision
floating-point format, a 64-bit standard under IEEE 754 for representing real
numbers with
high accuracy. ... With petaFLOPS and exaFLOPS becoming marketing terms, some
hardware vendors have been less than scrupulous in disclosing what level of
floating-point operation their benchmarks use. It’s not uncommon for a company
to promote exascale performance and then reveal in the fine print that it’s
talking about FP8, according to Snell. “It used to be if someone said exaFLOP,
you could be pretty confident that they meant exaFLOP according to 64-bit
scientific computing, but not anymore, especially in the field of AI. You need
to look at what’s going on behind that FLOP,” said
Snell.
From SBOM to AI BOM: Rethinking supply chain security for AI native software
An effective AI BOM is not a static document generated at release time. It is a
lifecycle artifact that evolves alongside the system. At ingestion, it records
dataset sources, classifications, licensing constraints, and approval status.
During training or fine-tuning, it captures model lineage, parameter changes,
evaluation results, and known limitations. At deployment, it documents inference
endpoints, identity and access controls, monitoring hooks, and downstream
integrations. Over time, it reflects retraining events, drift signals, and
retirement decisions. Crucially, each element is tied to ownership. Someone
approved the data. Someone selected the base model. Someone accepted the
residual risk. This mirrors how mature organizations already think about code
and infrastructure, but extends that discipline to AI components that have
historically been treated as experimental or opaque. To move from theory to
practice, I encourage teams to treat the AI BOM as a “Digital Bill of Lading,” a
chain-of-custody record that travels with the artifact and proves what it is,
where it came from, and who approved it. The most resilient operations
cryptographically sign every model checkpoint and the hash of every dataset. By
enforcing this chain of custody, they’ve transitioned from forensic guessing to
surgical precision. When a researcher identifies a bias or security flaw in a
specific open-source dataset, an organization with a mature AI BOM can instantly
identify every downstream product affected by that “raw material” and act within
hours, not weeks.
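As a sketch of what one such lifecycle record might look like, here is a
minimal Python example; the field names, the approval field, and the SHA-256
chain-of-custody hash are illustrative assumptions, not a standard AI BOM
schema:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AIBOMEntry:
    """One AI BOM element, tied to ownership as described in the text."""
    component: str    # e.g., a dataset or model checkpoint
    source: str       # where it came from
    sha256: str       # content hash proving what it is
    license: str
    approved_by: str  # someone approved the data / accepted the risk
    stage: str        # ingestion / training / deployment / retired
    notes: dict = field(default_factory=dict)

def hash_artifact(path: str) -> str:
    """Hash an artifact file so the entry can be re-verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: record a fine-tuning dataset at ingestion time.
entry = AIBOMEntry(
    component="customer-support-dataset-v3",
    source="internal data lake export",
    sha256="<output of hash_artifact(...) for the dataset file>",
    license="internal-use-only",
    approved_by="data-governance team",
    stage="ingestion",
    notes={"classification": "confidential"},
)
print(json.dumps(asdict(entry), indent=2))
```

When a flawed upstream dataset is disclosed, matching its published hash
against the sha256 fields across all entries is what makes the hours-not-weeks
impact analysis described above possible.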
Beyond the Firehose: Operationalizing Threat Intelligence for Effective SecOps
Effective operationalization doesn't happen by accident. It requires a
structured approach that aligns intelligence gathering with business risks. A
framework for operationalizing threat intelligence structures the process from
raw data to actionable defence, involving key stages like collection,
processing, analysis, and dissemination, often using models like MITRE
ATT&CK and Cyber Kill Chain. It transforms generic threat info into relevant
insights for your organization by enriching alerts, automating workflows (via
SOAR), enabling proactive threat hunting, and integrating intelligence into
tools like SIEM/EDR to improve incident response and build a more proactive
security posture. ... As intel maturity develops, the framework continuously
incorporates feedback mechanisms to refine and adapt to the evolving threat
environment. Cross-departmental collaboration is vital, enabling effective
information sharing and coordinated response capabilities. The framework also
emphasizes contextual integration, allowing organizations to prioritize threats
based on their specific impact potential and relevance to critical assets. This
ultimately drives more informed security decisions. ... Operationalization
should be regarded as an ongoing process rather than a linear progression. If
intelligence feeds result in an excessive number of false positives that
overwhelm Tier 1 analysts, this indicates a failure in operationalization. It is
imperative to institute a formal feedback mechanism from the Security Operations
Center to the Intelligence team.
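As a sketch of what that feedback mechanism could measure, here is a small
Python example that computes per-feed false-positive rates from triage
outcomes; the feed names, threshold, and outcome format are illustrative
assumptions:

```python
from collections import Counter

FP_THRESHOLD = 0.5  # flag feeds where over half of triaged alerts were false positives

def noisy_feeds(alert_outcomes: list[tuple[str, bool]]) -> list[str]:
    """alert_outcomes: one (feed_name, was_false_positive) pair per triaged alert."""
    totals, false_positives = Counter(), Counter()
    for feed, was_fp in alert_outcomes:
        totals[feed] += 1
        if was_fp:
            false_positives[feed] += 1
    return [feed for feed in totals
            if false_positives[feed] / totals[feed] > FP_THRESHOLD]

# Hypothetical triage log from Tier 1 analysts.
outcomes = [("osint-feed-a", True), ("osint-feed-a", True),
            ("osint-feed-a", False), ("vendor-feed-b", False)]
print(noisy_feeds(outcomes))  # ['osint-feed-a']
```

Feeds that keep appearing in this list are exactly the operationalization
failures the SOC should report back to the intelligence team.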
Compliance vs. Creativity: Why Security Needs Both Rule Books and Rebels
One of the most common tensions in the SOC arises from mismatched expectations.
Compliance officers focus on control documentation, while security teams focus
on operational signals. For example, a policy may require multi-factor
authentication (MFA), but if the system doesn’t generate alerts on MFA fatigue
or unusual login patterns, attackers can slip past controls without detection.
It’s important to also remember that just because something’s written in a
policy doesn’t mean it’s being protected. A control isn’t a detection. It only
matters if it shows up in the data. Security teams need to make sure that every
big control, like MFA, logging, or encryption, has a signal that tells them when
it’s being misused, misconfigured, or ignored. ... In a modern SOC, competing
priorities are expected. Analysts want manageable alert volumes, red teams want
room to experiment, and managers need to show compliance is covered. And at the
top, CISOs need metrics that make sense to the board. However, high-performing
teams aren’t the ones that ignore these differences. They, again, focus on
alignment. ... The most effective security programs don’t rely solely on rigid
policy or unrestricted innovation. They recognize that compliance offers the
framework for repeatable success, while creativity uncovers gaps and adapts to
evolving threats. When organizations enable both, they move beyond checklist
security.
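To illustrate turning a control into a signal, here is a minimal Python sketch
that flags a burst of denied MFA pushes, the MFA-fatigue pattern mentioned
above; the event fields, window, and threshold are assumptions about what an
identity provider’s logs might contain:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
DENIAL_LIMIT = 5  # more denials than this inside the window raises an alert

def mfa_fatigue_alerts(events: list[dict]) -> set[str]:
    """events: dicts with 'user', 'result' ('denied'/'approved'), and 'time'."""
    denials: dict[str, list[datetime]] = {}
    alerts = set()
    for e in sorted(events, key=lambda e: e["time"]):
        if e["result"] != "denied":
            continue
        times = denials.setdefault(e["user"], [])
        times.append(e["time"])
        times[:] = [t for t in times if e["time"] - t <= WINDOW]  # sliding window
        if len(times) > DENIAL_LIMIT:
            alerts.add(e["user"])
    return alerts

# Hypothetical identity-provider log: six denials for one user in two minutes.
events = [{"user": "alice", "result": "denied",
           "time": datetime(2026, 1, 5, 9, 0) + timedelta(seconds=20 * i)}
          for i in range(6)]
print(mfa_fatigue_alerts(events))  # {'alice'}
```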
AI governance through controlled autonomy and guarded freedom
Controlled autonomy in AI governance refers to granting AI systems and their
development teams a defined level of independence within clear, pre-established
boundaries. The organization sets specific guidelines, standards and
checkpoints, allowing AI initiatives to progress without micromanagement but
still within a tightly regulated framework. The autonomy is “controlled” in the
sense that all activities are subject to oversight, periodic review and strict
adherence to organizational policies. ... In practice, controlled autonomy might
involve delegating decision-making authority to AI project teams, but with
mandatory compliance with risk assessment protocols, ethical guidelines and
regulatory requirements. For example, an organization may allow its AI team to
choose algorithms and data sources, but require regular reports and audits to
ensure transparency and accountability. Automated systems may operate
independently, yet their outputs are monitored for biases, errors or security
vulnerabilities. ... Deciding between controlled autonomy and guarded freedom in
AI governance largely depends on the nature of the enterprise, its industry and
the specific risks involved. Controlled autonomy is best suited for sectors
where regulatory compliance and risk mitigation are paramount, such as banking,
healthcare or government services. ... Both controlled autonomy and guarded
freedom offer valuable frameworks for AI governance, each with distinct
strengths and potential drawbacks.
The 20% that drives 80%: Uncovering the secrets of organisational excellence
There are striking universalities in what truly drives impact. The first, which
all three prioritise, is the belief that employee experience is inseparable from
customer experience. Whether it is called EX = CX or framed differently, the
sharp focus on making the workplace purposeful and engaging is foundational.
Each business does this in a unique way, but the intent is the same: great
employee experience leads to great customer experience. ... The second constant
is an unwavering drive for business excellence. This is a nuanced but powerful
20% that shapes 80% of outcomes. Take McDonald’s, for instance: the consistency
of quality and service, whether you are in Singapore, India, Japan or the US, is
remarkable. Even as we localise, the core excellence remains unchanged. The same
is true for Google, where the reliability of Search and breakthroughs in AI
define the brand, and for PepsiCo, where high standards across foods and
beverages define the brand. ... The third—and perhaps most challenging—is
connectedness. For giants of this scale, fostering deep connections across
global, regional and country boundaries, and within and across teams, is
crucial. It is about psychological safety, collaboration, and creating space for
people to connect and recognise each other. This focus on connectedness enables
the other two priorities to flourish. If organisations keep these three at the
heart of their practice, they remain agile, resilient, and, as I like to put it,
the giants keep dancing.
Turning plain language into firewall rules
A central feature of the design is an intermediate representation that captures
firewall policy intent in a vendor-agnostic format. This representation
resembles a normalized rule record that includes the five-tuple plus additional
metadata such as direction, logging, and scheduling. This layer separates intent
from device syntax. Security teams can review the intermediate representation
directly, since it reflects the policy request in structured form. Each field
remains explicit and machine-checkable. After the intermediate representation is
built, the rest of the pipeline operates through deterministic logic. The
current prototype includes a compiler that translates the representation into
Palo Alto PAN-OS command-line configuration. The design supports additional
firewall platforms through separate back-end modules. ...
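As a sketch of what such a normalized rule record might look like, here is a
minimal Python example of the five-tuple-plus-metadata representation, together
with the kind of baseline safety-gate check described below; the field names
and check logic are illustrative assumptions, not the prototype’s actual
schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirewallRuleIR:
    """Vendor-agnostic rule record: the five-tuple plus policy metadata."""
    src: str
    dst: str
    src_port: str            # "any" or a port/range, kept as text for review
    dst_port: str
    protocol: str            # "tcp", "udp", ...
    direction: str           # "inbound" / "outbound"
    src_zone: str
    dst_zone: str
    logging: bool = True
    schedule: Optional[str] = None  # e.g. "business-hours"; None = always

def safety_gate(rule: FirewallRuleIR) -> list[str]:
    """Baseline checks: sources, destinations, zones, and protocol must be
    explicitly defined. Rules that return errors stop at this stage."""
    errors = []
    for attr in ("src", "dst", "src_zone", "dst_zone"):
        if getattr(rule, attr) in ("", "any"):
            errors.append(f"{attr} must be explicitly defined")
    if rule.protocol not in ("tcp", "udp", "icmp"):
        errors.append(f"unrecognized protocol: {rule.protocol}")
    return errors

# A request that should pass review: explicit endpoints, zones, and protocol.
rule = FirewallRuleIR("10.0.1.0/24", "10.0.2.5", "any", "443", "tcp",
                      direction="outbound", src_zone="trust", dst_zone="dmz")
print(safety_gate(rule))  # [] -> proceeds to compilation
```

Because every field is explicit, humans, the linter, the safety gate, and the
compiler back ends can all consume the same record.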
A vendor-specific linter applies rules tied to the target firewall platform. In
the prototype, this includes checks related to PAN-OS constraints, zone usage,
and service definitions. These checks surface warnings that operators can
review. A separate safety gate enforces high-level security constraints. This
component evaluates whether a policy meets baseline expectations such as defined
sources, destinations, zones, and protocols. Policies that fail these checks
stop at this stage. After compilation, the system runs the generated
configuration through a Batfish-based simulator. The simulator validates syntax
and object references against a synthetic device model. Results appear as
warnings and errors for inspection.