Quote for the day:
"Life is 10% what happens to me and 90% of how I react to it." -- Charles Swindoll
Agentic AI exposes what we’re doing wrong
What needs to change is the level of precision and adaptability in network
controls. You need networking that supports fine-grained segmentation,
short-lived connectivity, and policies that can be continuously evaluated rather
than set once and forgotten. You also need to treat east-west traffic visibility
as a core requirement because agents will generate many internal calls that look
legitimate unless you understand intent, identity, and context. ... When the
user is an autonomous agent, control relies solely on identity: what the agent
is, its permitted actions, what it can impersonate, and what it can delegate.
Network location and static IP-based trust weaken when actions are initiated by
software that can run anywhere, scale instantly, and change execution paths.
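To make the identity-first, continuously evaluated control described above concrete, here is a minimal sketch. The agent identity model, action names, and grant TTL below are assumptions for illustration only, not anything prescribed in the article.

```python
# Minimal sketch of identity-based, continuously re-evaluated authorization
# for an autonomous agent. All names and policy values are hypothetical.
import time
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_actions: set                                  # what the agent may do
    may_impersonate: set = field(default_factory=set)     # identities it can act as
    may_delegate_to: set = field(default_factory=set)     # agents it can hand work to

@dataclass
class Grant:
    action: str
    issued_at: float
    ttl_seconds: int = 300       # short-lived connectivity: grants expire quickly

    def expired(self) -> bool:
        return time.time() - self.issued_at > self.ttl_seconds

def authorize(identity: AgentIdentity, action: str, grant: Grant | None) -> bool:
    """Evaluate policy on every call instead of trusting a one-time decision."""
    if action not in identity.allowed_actions:
        return False                      # identity does not permit this action
    if grant is None or grant.expired():
        return False                      # force re-evaluation, not set-and-forget
    return True

# Example: an agent asking to read an internal billing API
agent = AgentIdentity("report-agent-7", allowed_actions={"billing:read"})
grant = Grant(action="billing:read", issued_at=time.time(), ttl_seconds=120)
print(authorize(agent, "billing:read", grant))   # True while the grant is fresh
print(authorize(agent, "billing:write", grant))  # False: not in allowed_actions
```

The point of the sketch is that every east-west call carries the agent's identity plus a short-lived grant, so policy is re-checked continuously rather than once at connection setup.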
This is where many enterprises will stumble. ... The old
FinOps playbook of tagging, showback, and monthly optimization is not enough on its
own. You need near-real-time cost visibility and automated guardrails that stop
waste as it happens, because “later” can mean “after the budget is gone.” Put
differently, the unit economics of agentic systems must be designed, measured,
and controlled like any other production system, ideally more aggressively
because the feedback loop is faster. ... The industry’s favorite myth is that
architecture slows innovation. In reality, architecture prevents innovation from
turning into entropy. Agentic AI accelerates entropy by generating more actions,
integrations, permissions, data movement, and operational variability than
human-driven systems typically do.
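The excerpt's call for near-real-time cost visibility and automated guardrails can be pictured as a small budget circuit breaker. This is only a sketch under assumed numbers; the hourly threshold, the rolling window, and the per-call cost feed are all hypothetical.

```python
# Hypothetical near-real-time cost guardrail for agentic workloads.
# Costs stream in per agent call; spending past a budget trips a breaker
# instead of waiting for a monthly optimization pass.
import time
from collections import deque

class CostGuardrail:
    def __init__(self, budget_per_hour: float):
        self.budget_per_hour = budget_per_hour
        self.events = deque()            # (timestamp, cost) pairs in the last hour

    def record(self, cost: float) -> None:
        now = time.time()
        self.events.append((now, cost))
        # Drop events outside the rolling one-hour window.
        while self.events and now - self.events[0][0] > 3600:
            self.events.popleft()

    def spend_last_hour(self) -> float:
        return sum(cost for _, cost in self.events)

    def allow_next_call(self) -> bool:
        """Stop waste as it happens: block new calls once the hourly budget is gone."""
        return self.spend_last_hour() < self.budget_per_hour

guardrail = CostGuardrail(budget_per_hour=50.0)   # assumed budget
guardrail.record(12.5)                            # cost of one agent run (assumed)
if not guardrail.allow_next_call():
    raise RuntimeError("Hourly agent budget exhausted; pausing workload.")
```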
‘Cute’ and ‘Criminal’: AI Perception, Human Bias, and Emotional Intelligence
Can you build artificial intelligence (AI) without emotional intelligence (EI)?
Should you? What do we mean when we talk about “humans in the loop”? Are we
asking the right questions about how humans design and govern “thinking”
machines? One of the immediate problems we face with generative AI is that
people increasingly rely on these systems for big decisions. I won’t call all of these
ethical decisions, but in some cases they’re consequential decisions. And many
users forget that these systems are trained on data that carry all kinds of
inherited biases. When we talk about AI bias, it isn’t always abstract. It shows
up in very literal assumptions the models make when they are asked to generate
images or ideas. ... That question is really the beginning of understanding how
these systems work. They are pulling from enormous bodies of unlabeled or
inconsistently labeled data and then inferring patterns. We often forget that
the inferences are statistical, not conceptual. To the model, “doctor” aligns
with “male” because that’s the pattern the dataset reinforced. ... If I didn’t tell
the system “diverse audience,” then all the children it generated fell into the
same narrow “cute child” category. It’s not that the AI systems are racist or
sexist. They simply don’t have self-awareness. They’re reflecting the dominant
patterns in the datasets they learned from. But reflection without critique
becomes reinforcement, and reinforcement becomes the norm.
AI is quietly poisoning itself and pushing models toward collapse - but there's a cure
According to tech analyst Gartner, AI data is rapidly becoming a classic
Garbage In/Garbage Out (GIGO) problem for users. That's because organizations'
AI systems and large language models (LLMs) are flooded with unverified,
AI‑generated content that cannot be trusted. ... You know this better as AI
slop. While annoying to you and me, it's deadly to AI because it poisons the
LLMs with fake data. The result is what's called in AI circles "Model
Collapse." AI company Aquant defined this trend: "In simpler terms, when AI is
trained on its own outputs, the results can drift further away from reality."
... The analyst argued that enterprises can no longer assume data is
human‑generated or trustworthy by default, and must instead authenticate,
verify, and track data lineage to protect business and financial outcomes.
Ever try to authenticate and verify data from AI? It's not easy. It can be
done, but AI literacy isn't a common skill. ... This situation means that
flawed inputs can cascade through automated workflows and decision systems,
producing worse results. Yes, that's right: if you think AI result bias,
hallucinations, and simple factual errors are bad today, wait until tomorrow.
... Gartner suggested many companies will need stronger mechanisms to
authenticate data sources, verify quality, tag AI‑generated content, and
continuously manage metadata so they know what their systems are actually
consuming.
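The recommendation to authenticate sources, tag AI-generated content, and track lineage can be pictured as attaching a small provenance record to every piece of ingested data. The sketch below is illustrative only; the field names and the idea of hashing the payload are assumptions, not something Gartner or the article prescribes.

```python
# Illustrative provenance record attached to ingested content so downstream
# systems know whether it is human-authored or AI-generated. Field names are assumed.
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    source: str                 # where the content came from
    ai_generated: bool          # tag AI-generated content explicitly
    content_sha256: str         # fingerprint for later verification
    ingested_at: str            # when it entered the pipeline
    verified_by: str | None     # human or process that vouched for it, if any

def tag_content(content: bytes, source: str, ai_generated: bool,
                verified_by: str | None = None) -> ProvenanceRecord:
    return ProvenanceRecord(
        source=source,
        ai_generated=ai_generated,
        content_sha256=hashlib.sha256(content).hexdigest(),
        ingested_at=datetime.now(timezone.utc).isoformat(),
        verified_by=verified_by,
    )

record = tag_content(b"quarterly summary text...", source="vendor-feed-42",
                     ai_generated=True)
print(asdict(record))   # stored as metadata alongside the content itself
```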
4 Realities of AI Governance
AI has not replaced traditional security work; it has layered new obligations
on top of it. We still have to protect our data and maintain sovereign
assurance through independent audit reports, whether that’s SOC, PCI, ISO, or
other standards. Still, today we must also guide our own teams and vendors on
the use of powerful AI tools. That’s where accountability begins: with the
human or process that touches the data. When the rules are clear, people move
faster and safer; when directives are fuzzy, everything downstream is too—so
we keep policy short, plain, and visible. ... Unless the contract says
otherwise, assume prompts, outputs, or telemetry may be retained for “service
improvement.” Fine-print phrases like “continuous improvement” often mean that
inputs, outputs, or telemetry can be retained or used to tune systems unless
you opt out. To keep reviews consistent, leverage resources like the NIST AI
Risk Management Framework. It provides practical checklists for transparency,
accountability, and monitoring. Remember the AI supply chain: your vendor
depends on model providers, plugins, and open-source components; your risk
includes their dependencies, so cover these in your TPRM process. ...
Boundaries are the difference between safe speed and reckless speed. Start by
defining a short set of data types that must never be pasted into external
tools: regulated PII, confidential customer data, unreleased financials,
source code, or merger and acquisition materials. Map the rest into simple
classes (public, internal, sensitive) and tie each class to approved tools and
use cases.
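A policy like the one just described, a never-paste list plus a few data classes tied to approved tools, can be expressed as a small lookup that tooling checks before data leaves the organization. This is a minimal sketch: the class names follow the excerpt, but the tool lists and the check itself are assumptions for illustration.

```python
# Minimal sketch of tying data classes to approved tools. The never-paste list
# and the class-to-tool mapping shown here are illustrative assumptions.
NEVER_PASTE = {"regulated_pii", "confidential_customer_data",
               "unreleased_financials", "source_code", "m_and_a_materials"}

APPROVED_TOOLS = {
    "public":    {"external_llm", "search", "internal_wiki"},
    "internal":  {"approved_enterprise_copilot", "internal_wiki"},
    "sensitive": {"approved_enterprise_copilot"},   # no external tools at all
}

def may_send(data_type: str, data_class: str, tool: str) -> bool:
    """Return True only if this data class may flow to this tool."""
    if data_type in NEVER_PASTE:
        return False                       # hard stop regardless of tool
    return tool in APPROVED_TOOLS.get(data_class, set())

print(may_send("meeting_notes", "internal", "external_llm"))   # False
print(may_send("blog_draft", "public", "external_llm"))        # True
print(may_send("unreleased_financials", "sensitive",
               "approved_enterprise_copilot"))                 # False: never paste
```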
Your Cache is Hiding a Bad Architecture
UK bill accelerates shift to offensive cyber security
The Cyber Security and Resilience (Network and Information Systems) Bill entered Parliament in late 2025 and is expected to move through the legislative process during 2026. The government has positioned the bill as a major update to the UK's cyber framework for essential services and digital service providers. ... Poyser argued that many companies still lean heavily on defensive tools without validating how those controls perform under attack conditions. "Cybercriminals and state-backed threat actors are acting faster, more aggressively, and with far greater innovation, especially through the use of artificial intelligence, while too many businesses continue to rely on traditional defensive methods. This widening gap must be closed urgently," said Poyser. He also linked the coming UK legislative changes to a push for more proactive security validation. ... The company said this attacker-style approach changes how risk gets measured and prioritised. It said corporate security teams struggle to maintain an accurate picture of exposure through passive controls and periodic checks. "It is increasingly unrealistic for corporate security teams to maintain an accurate understanding of their true risk exposure using only traditional, passive methods," said Keith Poyser. "Threat actors do not wait for annual audits or one-off checks. Unless organisations test their systems in a way that reflects how real attackers operate, they will continue to be caught off-guard," said Poyser.
The new CDIO stack: Tech, talent and storytelling
The first layer is the one everyone ‘expects’. We built strong platforms: cloud
infrastructure that can flex with the business, data platforms that bring
together information from plants, systems and markets, analytics and AI
capabilities that sit on top of that data, and a solid cyber posture to protect
all of it. ... The second layer was not about machines at all. It was about
people, about changing the talent mix so that digital is no longer “their” thing
— it becomes “our” thing. We realised that if we kept thinking in terms of “IT
people” and “business people”, we would always be negotiating across a wall. ...
The third layer is the one that surprised even me. We noticed a pattern. Even
when we had good platforms and strong talent, some initiatives would start with
a bang and fizzle out. The technology worked. The pilot results were good. But
momentum died. When we dug deeper, we realised the issue was not in the code. It
was in the story. The operators on the shop floor, the sales teams, the plant
heads and the board were all hearing slightly different stories about “digital”.
... Yes, I am responsible for technology. If the platforms are not robust, I
have failed at the most basic level. Yes, I am responsible for talent. If we
don’t have the right mix of skills — product, data, architecture, change — we
cannot deliver. But I am also responsible for the narrative. ... For me, the
real maturity of a digital organization shows when these three layers are
aligned.
What Software Developers Need to Know About Secure Coding and AI Red Flags
The uptick in adoption of AI tools within the developer community aligns with growing expectations. Developers are now expected to work with greater efficiency to meet deadlines more quickly, all while delivering high-quality code. Developers might find AI assistants to be beneficial as they are immune to human-based tendencies like fatigue and biases, which can boost efficiency. But sacrificing safety for speed is unacceptable, as AI tools bring inherent risks of compromise. ... AI tools are not safe for enterprise use unless the code output is reviewed and implemented by a security-proficient human. 30% of security experts admit that they don't trust the accuracy of code generated by AI itself. That's why security leaders must prioritize the education and upskilling of developer teams, to ensure they have the necessary skills and capabilities to mitigate AI-assisted code vulnerabilities as early as possible. This will lead to the cultivation of a "security first" team culture and safer AI use. ... In addition, agentic AI introduces new or "agentic variations" of existing threats, like memory poisoning, remote code execution (RCE) and code attacks. It can harm code via logic errors, which cause the product to "run" correctly but act incorrectly; style inconsistencies, which result in patterns that do not align with the current, required structure; and lenient permissions, which act correctly but lack the authorization context to determine if an end user is allowed to perform a particular action.
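The "lenient permissions" failure mode described above, code that runs correctly but never asks whether the end user may perform the action, is easy to show in a few lines. The invoice example, data store, and role names below are hypothetical, purely to illustrate the missing authorization check a reviewer should look for.

```python
# Hypothetical sketch of the "lenient permissions" pattern: the logic works,
# but nothing checks whether this user may perform this action.
from dataclasses import dataclass, field

@dataclass
class User:
    id: int
    roles: set = field(default_factory=set)

# A tiny in-memory stand-in for a real data store.
INVOICES = {42: {"owner_id": 7, "amount": 120.0}}

def delete_invoice_lenient(invoice_id: int, user: User) -> None:
    # Functionally correct, but any authenticated user can delete any invoice.
    INVOICES.pop(invoice_id, None)

def delete_invoice_checked(invoice_id: int, user: User) -> None:
    # Same operation with the authorization context made explicit.
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        raise LookupError("invoice not found")
    if invoice["owner_id"] != user.id and "billing_admin" not in user.roles:
        raise PermissionError("user may not delete this invoice")
    INVOICES.pop(invoice_id)

try:
    delete_invoice_checked(42, User(id=99))   # not the owner, not an admin
except PermissionError as exc:
    print(exc)
```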
Building a Self-Healing Data Pipeline That Fixes Its Own Python Errors
The core concept of this is relatively simple. Most data pipelines are fragile
because they assume the world is perfect, and when the input data changes even
slightly, they fail. Instead of accepting that crash, I designed my script to
catch the exception, capture the “crime scene evidence”, which is basically the
traceback and the first few lines of the file, and then pass it down to an LLM.
... The primary challenge with using Large Language Models for code generation
is their tendency to hallucinate. From my experience, if you ask for a simple
parameter, you often receive a paragraph of conversational text in return. To
stop that, I leveraged structured outputs via Pydantic and OpenAI’s API. This
forces the model to complete a strict form, acting as a filter between the messy
AI reasoning and our clean Python code. ... Getting the prompt right took some
trial and error. And that’s because initially, I only provided the error
message, which forced the model to guess blindly at the problem. I quickly
realized that to correctly identify issues like delimiter mismatches, the model
needed to actually “see” a sample of the raw data. Now here is the big catch.
You cannot actually read the whole file. If you try to pass a 2GB CSV into the
prompt, you’ll blow up your context window and apparently your wallet. ...
First, remember that every time your pipeline breaks, you are making an API
call.
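A minimal version of the loop described above might look like the following sketch. It assumes the openai Python SDK's structured-output parse helper together with Pydantic; the model name, the FixSuggestion schema, and the idea of retrying the load with the suggested parameters are illustrative choices, not the author's exact implementation.

```python
# Sketch of a self-healing CSV load: on failure, capture the traceback and a
# small sample of the raw file, ask an LLM for new parser settings via a
# structured (Pydantic) response, then retry once. Model name and schema are assumed.
import traceback
import pandas as pd
from pydantic import BaseModel
from openai import OpenAI

class FixSuggestion(BaseModel):
    delimiter: str          # e.g. "," or ";"
    encoding: str           # e.g. "utf-8" or "latin-1"
    skip_rows: int          # junk header lines to skip

client = OpenAI()

def read_sample(path: str, max_bytes: int = 2000) -> str:
    # Never feed the whole file to the model; a few KB of "crime scene evidence" is enough.
    with open(path, "rb") as fh:
        return fh.read(max_bytes).decode("utf-8", errors="replace")

def load_csv_self_healing(path: str) -> pd.DataFrame:
    try:
        return pd.read_csv(path)
    except Exception:
        evidence = (f"Traceback:\n{traceback.format_exc()}\n\n"
                    f"File sample:\n{read_sample(path)}")
        completion = client.beta.chat.completions.parse(
            model="gpt-4o-mini",             # assumed model
            messages=[
                {"role": "system",
                 "content": "Suggest CSV parser settings that fix the error."},
                {"role": "user", "content": evidence},
            ],
            response_format=FixSuggestion,   # forces the model to fill the strict form
        )
        fix = completion.choices[0].message.parsed
        # Retry once with the suggested parameters; each failure costs an API call.
        return pd.read_csv(path, sep=fix.delimiter, encoding=fix.encoding,
                           skiprows=fix.skip_rows)
```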