Quote for the day:
"Average leaders raise the bar on
themselves; good leaders raise the bar for others; great leaders inspire
others to raise their own bar." -- Orrin Woodward

The barrier is not infrastructure, regulation or talent but what the authors
call the "learning gap." Most enterprise AI systems cannot retain memory, adapt to
feedback or integrate into workflows. Tools work in isolation, generating
content or analysis in a static way, but fail to evolve alongside the
organizations that use them. For executives, the result is a sea of proofs of
concept with little business impact. "Chatbots succeed because they're easy to
try and flexible, but fail in critical workflows due to lack of memory and
customization," the report said. Many pilots never survive this transition, Mina
Narayanan, research analyst at the Center for Security and Emerging Technology,
told Information Security Media Group. ... The implications of this shadow
economy are complex. On one hand, it shows clear employee demand, as workers
gravitate toward flexible, responsive and familiar tools. On the other, it
exposes enterprises to compliance and security risks. Corporate lawyers and
procurement officers interviewed in the report admitted they rely on ChatGPT for
drafting or analysis, even when their firms purchased specialized tools costing
tens of thousands of dollars. When asked why they preferred consumer tools,
their answers were consistent: ChatGPT produced better outputs, was easier to
iterate with and required less training. "Our purchased AI tool provided rigid
summaries with limited customization options," one attorney told the
researchers.
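
The "learning gap" described above is, in large part, an engineering gap: most
deployments call a model statelessly, so nothing learned in one session carries
into the next. A minimal sketch of the missing piece, persisting conversation
turns and feedback so a later session can be primed with them, might look like
the following (the file-based store and function names are illustrative
assumptions, not any vendor's API):

import json
from pathlib import Path

# Illustrative sketch: persist conversation turns and feedback per user so a
# later session can be primed with what was learned earlier. The file-based
# store and the function names are assumptions for illustration only.
STORE = Path("session_memory")
STORE.mkdir(exist_ok=True)

def remember(user_id: str, role: str, text: str) -> None:
    """Append one conversation turn (or piece of feedback) to the user's log."""
    path = STORE / f"{user_id}.jsonl"
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"role": role, "text": text}) + "\n")

def recall(user_id: str, limit: int = 20) -> list[dict]:
    """Return the most recent turns so the next prompt can include them."""
    path = STORE / f"{user_id}.jsonl"
    if not path.exists():
        return []
    lines = path.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines[-limit:]]

# A new session would prepend recall("analyst-42") to the model prompt, which
# is the kind of continuity the report says most enterprise tools lack.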

Think of cybersecurity as a house. While penetration testers and security
engineers focus on building stronger locks and alarm systems, GRC
professionals ensure the house has strong foundations and insurance policies,
and meets all building regulations. ... Governance involves creating and
maintaining the policies, procedures and frameworks that guide an
organisation’s security decisions. Risk management focuses on identifying
potential threats, assessing their likelihood and impact, then developing
strategies to mitigate or accept those risks. ... Certifications alone will
not land you a role, something most people wanting to take this path fail to
grasp. Understanding key frameworks provides the practical knowledge that
makes certifications meaningful. ISO 27001, the international standard for
information security management systems, appears in most GRC job descriptions.
I spent considerable time learning not only what ISO 27001 requires, but how
organizations implement its controls in practice. The NIST Cybersecurity
Framework (CSF) deserves equal attention. NIST CSF’s six core functions —
govern, identify, protect, detect, respond and recover — provide a logical
structure for organising security programs that business stakeholders can
understand. Personal networks proved more valuable than any job board or
recruitment agency.

Security teams use Security Information and Event Management (SIEM)
systems, and DevOps teams have tracing tools. However, infrastructure teams
still lack an equivalent tool: a continuously recorded, objective account of
system interdependencies before, during, and after incidents. This is where
Application Dependency Mapping (ADM) solutions come into play. ADM
continuously maps the relationships between servers, applications, services,
and external dependencies. Instead of relying on periodic scans or manual
documentation, ADM offers real-time, time-stamped visibility. This allows IT
teams to rewind their environment to any specific point in time, clearly
identifying the connections that existed, which systems interacted, and how
traffic flowed during an incident. ... Retrospective visibility is emerging as
a key focus in IT infrastructure management. As hybrid and multi-cloud
environments become increasingly complex, accurately diagnosing failures after
they occur is essential for maintaining uptime, security, and business
continuity. IT professionals must monitor systems in real time and learn how
to reconstruct the complete story when failures happen. Similar to the
aviation industry, which acknowledges that failures can occur and prepares
accordingly, the IT sector must shift from reactive troubleshooting to a
forensic-level approach to visibility.
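
The "rewind" idea is simpler than it sounds, even though production ADM
products are far more sophisticated: record every observed dependency as a
time-stamped edge, then query the graph as of a past instant. A rough sketch
under those assumptions (the data model and names are illustrative, not any
vendor's schema):

from dataclasses import dataclass

# Illustrative sketch of point-in-time dependency queries: each observed
# connection is stored as a time-stamped edge, so the graph can be
# reconstructed "as of" any instant, such as the start of an incident.
@dataclass
class Edge:
    source: str        # e.g. "web-frontend"
    target: str        # e.g. "orders-db"
    first_seen: float  # epoch seconds when the connection first appeared
    last_seen: float   # epoch seconds when it was last observed

class DependencyMap:
    def __init__(self) -> None:
        self.edges: list[Edge] = []

    def observe(self, source: str, target: str, ts: float) -> None:
        """Record or refresh a dependency observed at time ts."""
        for e in self.edges:
            if e.source == source and e.target == target and e.last_seen >= ts - 300:
                e.last_seen = max(e.last_seen, ts)
                return
        self.edges.append(Edge(source, target, ts, ts))

    def as_of(self, ts: float) -> list[tuple[str, str]]:
        """Return the dependencies that existed at a specific point in time."""
        return [(e.source, e.target) for e in self.edges
                if e.first_seen <= ts <= e.last_seen]

# adm = DependencyMap(); adm.observe("web-frontend", "orders-db", 1_700_000_000)
# adm.as_of(1_700_000_000)  ->  [("web-frontend", "orders-db")]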

The GitHub Spark development space is a web application with three panes. The
middle one is for code, the right one shows the running app (and animations as
code is being generated), and the left one contains a set of tools. These
tools offer a range of functions, first letting you see your prompts and skip
back to older ones if you don’t like the current iteration of your
application. An input box allows you to add new prompts that iterate on your
current generated code, with the ability to choose a screenshot or change the
current large language model (LLM) being used by the underlying GitHub Copilot
service. I used the default choice, Anthropic’s Claude 3.5 Sonnet. As part of
this feature, GitHub Spark displays a small selection of possible refinements
that take concepts related to your prompts and suggest enhancements to your
code. Other controls provide ways to change low-level application design
options, including the current theme, font, or the style used for application
icons. Other design tools allow you to tweak the borders of graphical
elements, the scaling factors used, and to pick an application icon for an
install of your code based on Progressive Web Apps (PWAs). GitHub Spark has a
built-in key/value store for application data that persists between builds and
sessions. The toolbar provides a list of the current keys and the data
structures used for the value store.

In the realm of IT infrastructure, legacy can often feel like a bad word. No
one wants to be told their organization is stuck with legacy IT infrastructure
because it implies that it's old or outdated. Yet, when you actually delve
into the details of what legacy means in the context of servers, networking,
and other infrastructure, a more complex picture emerges. Legacy isn't always
bad. ... it's not necessarily the case that a system is bad, or in dire need
of replacement, just because it fits the classic definition of legacy IT.
There's an argument to be made that, in many cases, legacy systems are worth
keeping around. For starters, most legacy infrastructure consists of
tried-and-true solutions. If a business has been using a legacy system for
years, it's a reliable investment. It may not be as optimal from a cost,
scalability, or security perspective as a more modern alternative. But in some
cases, this drawback is outweighed by the fact that — unlike a new,
as-yet-unproven solution — legacy systems can be trusted to do what they claim
to do because they've already been doing it for years. The fact that legacy
systems have been around for a while also means that it's often easy to find
engineers who know how to work with them. Hiring experts in the latest,
greatest technology can be challenging, especially given the widespread IT
talent shortage.

Despite the advantages, only 42 percent of developers trust the accuracy of AI
output in their workflows. In our observations, this should not come as a
surprise – we’ve seen even the most proficient developers copying and pasting
insecure code from large language models (LLMs) directly into production
environments. These teams are under immense pressure to produce more lines of
code faster than ever. Because security teams are also overworked, they aren’t
able to provide the same level of scrutiny as before, causing overlooked and
possibly harmful flaws to proliferate. The situation brings the potential for
widespread disruption: BaxBench, a coding benchmark that evaluates LLMs
for accuracy and security, has reported that LLMs are not yet capable of
generating deployment-ready code. ... What’s more, they often lack the
expertise – or don’t even know where to begin – to review and validate
AI-enabled code. This disconnect only further elevates their organization’s
risk profile, exposing governance gaps. To keep everything from spinning out
of control, chief information security officers (CISOs) must work with other
organizational leaders to implement a comprehensive and automated governance
plan that enforces policies and guardrails, especially within the repository
workflow.
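
One concrete shape such a guardrail can take inside the repository workflow is
a pre-merge policy check that scans the added lines of a diff for obviously
risky patterns before human review. The patterns and exit behaviour below are
placeholders for illustration, not a complete security policy:

import re
import sys

# Illustrative pre-merge guardrail: scan added lines in a diff for a few
# obviously risky patterns before code (AI-generated or not) reaches the main
# branch. The patterns and thresholds are placeholders, not a policy engine.
RISKY_PATTERNS = {
    "hardcoded secret": re.compile(r"(password|api_key|secret)\s*=\s*['\"]"),
    "shell injection risk": re.compile(r"subprocess\.(call|run|Popen)\(.*shell\s*=\s*True"),
    "unsafe deserialization": re.compile(r"pickle\.loads?\("),
}

def check_diff(diff_text: str) -> list[str]:
    """Return findings for lines added in the diff."""
    findings = []
    for line in diff_text.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only inspect added lines, skip diff headers
        for name, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{name}: {line.strip()}")
    return findings

if __name__ == "__main__":
    problems = check_diff(sys.stdin.read())
    for p in problems:
        print("BLOCKED:", p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the CI gate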

End-to-end observability is evolving beyond its current role in IT and DevOps
to become a foundational element of modern business strategy. In doing so,
observability plays a critical role in managing risk, maintaining uptime, and
safeguarding digital trust. Observability also enables organizations to
proactively detect anomalies before they escalate into outages, quickly
pinpoint root causes across complex, distributed systems, and automate
response actions to reduce mean time to resolution (MTTR). The result is
faster, smarter and more resilient operations, giving teams the confidence to
innovate without compromising system stability, a critical advantage in a
world where digital resilience and speed must go hand in hand. ... As
organizations increasingly adopt generative and agentic AI to accelerate
innovation, they also expose themselves to new kinds of risks. Agentic AI
can be configured to act independently, making changes, triggering workflows,
or even deploying code without direct human involvement. This level of
autonomy can boost productivity, but it also introduces serious challenges.
... Tomorrow’s industry leaders will be distinguished by their ability to
adopt and adapt to new technologies, embracing agentic AI but recognizing the
heightened risk exposure and compliance burdens. Leaders will need to shift
from reactive operations to proactive and preventative operations.
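
"Proactively detecting anomalies before they escalate" often starts with
nothing fancier than a rolling baseline and a deviation threshold on a key
metric. A toy sketch of that idea follows; the window size and threshold are
arbitrary illustrative values, and real observability platforms use far richer
models than this:

from collections import deque
from statistics import mean, stdev

# Toy anomaly detector: flag a metric sample that deviates strongly from a
# rolling baseline of recent samples.
class RollingAnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0) -> None:
        self.samples: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if this sample looks anomalous against recent history."""
        anomalous = False
        if len(self.samples) >= 10:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

# detector = RollingAnomalyDetector()
# for latency_ms in metric_stream: raise an alert when observe(latency_ms) is True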

Fake AI images can lie. But people lie, too, saying real images are fake. Call
it the ‘liar’s dividend.’ Call it a crisis of confidence. ... In 2019, when
deepfake audio and video became a serious problem, legal experts Bobby Chesney
and Danielle Citron came up with the term “liar’s dividend” to describe the
advantage a dishonest public figure gets by calling real evidence “fake” in a
time when AI-generated content makes people question what they see and
hear. False claims of deepfakes can be just as harmful as real deepfakes
during elections. ... The ability to make fakes will be everywhere, along with
the growing awareness that visual information can be easily and convincingly
faked. That awareness makes false claims that something is AI-made more
believable. The good news is that Gemini 2.5 Flash Image stamps every image it
makes or edits with a hidden SynthID watermark, so AI identification still
works after common changes like resizing, rotation, compression, or screenshot
copies.
Google says this ID system covers all outputs and ships with the new model
across the Gemini API, Google AI Studio, and Vertex AI. SynthID for images
changes pixels without being seen, but a paired detector can recognize it
later, using one neural network to embed the pattern and another to spot it.
The detector reports levels like “present,” “suspected,” or “not detected,”
which is more helpful than a fragile yes/no that fails after small changes.
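
The graded reporting is the notable design detail: rather than a brittle binary
verdict, the detector's confidence is bucketed into levels. A sketch of that
idea with invented thresholds (Google does not publish the real decision
boundaries):

# Illustrative only: bucket a watermark detector's confidence score into the
# graded levels the article describes. The threshold values are invented for
# illustration; SynthID's actual decision boundaries are not public.
def watermark_verdict(score: float) -> str:
    """Map a 0.0-1.0 detector confidence to a graded, human-readable level."""
    if score >= 0.9:
        return "present"
    if score >= 0.5:
        return "suspected"
    return "not detected"

# A graded verdict degrades gracefully: after resizing or compression the score
# may drop from 0.95 to 0.6 and still read "suspected" instead of flipping to "no".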

Though the models did have these distinct personalities, they also shared
similar strengths and weaknesses. The common strengths were that they quickly
produced syntactically correct code, had solid algorithmic and data structure
fundamentals, and efficiently translated code to different languages. The
common weaknesses were that they all produced a high percentage of
high-severity vulnerabilities, introduced severe bugs like resource leaks or
API contract violations, and had an inherent bias towards messy code. “Like
humans, they become susceptible to subtle issues in the code they generate,
and so there’s this correlation between capability and risk introduction,
which I think is amazingly human,” said Fischer. Another interesting finding
of the report is that newer models may be more technically capable, but are
also more likely to generate risky code. ... In terms of security, high and
low reasoning modes eliminate common attacks like path-traversal and
injection, but replace them with harder-to-detect flaws, like inadequate I/O
error-handling. ... “We have seen the path-traversal and injection become zero
percent,” said Sarkar. “We can see that they are trying to solve one sector,
and what is happening is that while they are trying to solve code quality,
they are somewhere doing this trade-off. Inadequate I/O error-handling is
another problem that has skyrocketed. ...”
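
The two flaw classes named here sit naturally side by side in one small
example: a user-supplied path is constrained to a base directory (closing the
path-traversal class), and the read is wrapped in explicit error handling (the
gap the researchers say is now skyrocketing). The paths and names are
illustrative:

from pathlib import Path

BASE_DIR = Path("/srv/app/uploads").resolve()  # illustrative base directory

def read_user_file(relative_name: str) -> str:
    """Read a user-requested file safely: block traversal, handle I/O errors."""
    candidate = (BASE_DIR / relative_name).resolve()

    # Path-traversal guard: the resolved path must stay inside BASE_DIR,
    # so "../../etc/passwd"-style inputs are rejected.
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError(f"illegal path: {relative_name}")

    # Explicit I/O error handling: the class of flaw the report says models
    # increasingly omit while they focus on eliminating injection-style bugs.
    try:
        return candidate.read_text(encoding="utf-8")
    except FileNotFoundError:
        raise
    except OSError as exc:
        raise RuntimeError(f"could not read {candidate}: {exc}") from exc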

Any leader considering agentic AI should have a clear understanding of what it
is (and what it’s not!), which can be difficult considering many organizations
are using the term in different ways. To understand what makes the technology
so transformative, I think it’s helpful to contrast it with the tools many
manufacturers are already familiar with. ... Agentic AI doesn’t just help
someone do a task. It owns that task, end-to-end, like a trusted digital
teammate. If a traditional AI solution is like a dashboard, agentic AI is more
like a co-worker who has deep operational knowledge, learns fast, doesn’t need
a break and knows exactly when to ask for help. This is also where
misconceptions tend to creep in. Agentic AI isn’t a chatbot with a nicer
interface that happens to use large language models, nor is it a
one-size-fits-all product that slots in after implementation. It’s a
purpose-built, action-oriented intelligence that lives inside your operations
and evolves with them. ... Agentic AI isn’t a futuristic technology, either.
It’s here and gaining momentum fast. According to Capgemini, the number of
organizations using AI agents has doubled in the past year, with
production-scale deployments expected to reach 48% by 2025. The technology’s
adoption trajectory is a sharp departure from traditional AI technologies.
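
Stripped to a skeleton, "knows exactly when to ask for help" is an explicit
loop: take a task, act autonomously within bounds, and escalate to a human when
confidence or authorization runs out. A deliberately simplified sketch, with
all names and thresholds invented for illustration:

from dataclasses import dataclass

# Deliberately simplified agent loop: act autonomously within bounds and
# escalate to a human when confidence or authorization runs out.
@dataclass
class Task:
    description: str
    confidence: float        # how sure the agent is about the right action
    requires_approval: bool  # e.g. changes to production equipment

def run_agent(tasks: list[Task]) -> None:
    for task in tasks:
        if task.requires_approval or task.confidence < 0.8:
            print(f"ESCALATE to human: {task.description}")
            continue
        print(f"EXECUTE autonomously: {task.description}")

run_agent([
    Task("reorder consumable tooling below safety stock", 0.95, False),
    Task("reschedule line 3 maintenance window", 0.60, False),
    Task("push new PLC parameters to production", 0.97, True),
])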