Quote for the day:
"What you do has far greater impact than what you say." -- Stephen Covey
Predicting the future is easy — deciding what to do is the hard part
Prescriptive analytics assists in developing strategies to optimize
operations, increase profitability, and reduce risks. Traditionally, linear and
non-linear programming models are used for resource allocation, supply chain
management, and portfolio optimization. ... In enterprise decision-making, both
predictive and prescriptive analytics play an important role. Predictive
analytics enables forecasting possible business outcomes, while prescriptive
analytics uses these forecasts to create a strategy to maximize business
profits. However, enterprises often fail to integrate these two analytics
techniques in an effective way for their own benefit. ... The integration of AI
agents in predictive and prescriptive analytics workflows has not been explored
much by data science professionals. However, a consolidated AI agentic framework
can be developed that uses predictive and prescriptive analytics in an
integrated way. ... After implementing the AI agentic framework,
industries experienced better forecasts through efficient predictive analytics.
On the other hand, prescriptive analytics helped businesses in making their
workflows more adaptable. Despite this success, high computational costs and
explainability still remain a major challenge. To overcome these setbacks, an
enterprise can further invest in developing multi-modal predictive-prescriptive
AI agents and neuro-symbolic agents.Agile development might be 25 years old, but it’s withstood the test of time – and there’s still more to come in the age of AI
Key focus areas of the Agile Manifesto helped drastically simplify software
development, Reynolds noted. By moving teams to smaller, more regular
releases, for example, the approach “shortened feedback loops” typically
associated with Waterfall and improved flexibility throughout the development
lifecycle. “That
reduced risk made it easier to respond to customer and business needs, and
genuinely improved software quality,” he told ITPro. “Smaller changes meant
testing could happen continuously, rather than being bolted on at the end.” The
longevity of Agile methodology is testament to its impact, and research shows
it’s still highly popular. ... According to Kern, AI and Agile are “a match
made in heaven” and the advent of the technology means this approach is
no longer optional, albeit with a notable caveat. “You need it more than ever,”
he said. “You can build so much more in less time, which can also magnify
potential pitfalls if you’re not careful. The speed of delivery with AI can
easily outpace feedback, but that’s an exciting opportunity, not a flaw.”
Reynolds echoed those comments, noting that while Agile can be a force
multiplier for teams, there are still risks – particularly with the influx of
AI-generated code in software development. “Those gains are often offset
downstream, creating more bugs, higher cloud costs, and greater security
exposure. The real value comes when AI is extended beyond code creation into
testing, quality assurance, and deployment,” he said.

CISOs must separate signal from noise as CVE volume soars
“While the number of vulnerabilities goes up, what really matters is which of
these are going to be exploited,” Michael Roytman, co-founder and CTO of
Empirical Security, tells CSO. “And that’s a different process. It does not
depend on the number of vulnerabilities that are out there because sometimes an
exploit is written before the CVE is even out there.” What FIRST’s forecast
highlights instead is a growing signal-to-noise problem, one that strains
already overburdened security teams and raises the stakes for prioritization,
automation, and capacity planning rather than demanding that organizations patch
more flaws exponentially. ... Despite the scale of the forecast, experts stress
that vulnerability volume alone is a poor proxy for enterprise risk. “The risk
to an enterprise is not directly related to the number of vulnerabilities
released,” Empirical Security’s Roytman says. “It is a separate
process.” ... For CISOs, the implication is that patching strategies are
now less about volume and more about scaling decision-making processes that
were already under strain.
... The cybersecurity industry is not facing an explosion of exploitable
weaknesses so much as an explosion of information. For CISOs, success in 2026
will depend less on reacting faster and more on deciding better — using
automation and context to ensure that rising vulnerability counts do not
translate into rising risk. “It hasn’t been a human-scale problem for some time
now,” Roytman says.

Strengthening a modern retail cybersecurity strategy
Enterprises might claim robust cybersecurity strategies yet fail to adequately address the threats posed by complex supply chains and aggressive digital transformation efforts. To bridge this gap, at Groupe Rocher, we have chosen to integrate cybersecurity into the core business strategy, ensuring that security measures are not only reactive but also predictive, leveraging threat intelligence to anticipate and mitigate risks effectively. ... It’s also important to remember that vulnerabilities aren’t always about technology. Often, they stem from poor practices, like weak passwords, excessive access rights, or a lack of multi-factor authentication (MFA). Criminals might use phishing or social engineering attacks to steal access from their victims. ... Additionally, fostering open communication and collaboration with vendors can help identify potential vulnerabilities early. We regularly organize workshops and joint security drills that enhance mutual understanding and preparedness. By building strong partnerships and emphasizing shared security goals, brands can create a resilient network that not only protects their interests but also strengthens the entire ecosystem against evolving threats. ... As both regulators and consumers become less accepting of business models that prioritize data above all else, retail and beauty brands need to change how they protect data, focusing more on privacy and transparency.

OT Attacks Get Scary With 'Living-off-the-Plant' Techniques
"For a number of reasons, ransomware against IT is affecting OT," Derbyshire
explains. "This can occur due to, for example, convergences within the IT
environment, that the OT simply cannot function without relying upon. Or a
complete lack of trust in security controls or network architecture from the IT
or OT security teams, so they voluntarily shut down the OT systems or sever the
connection to kind of prevent the spread [of an IT attack]. Colonial Pipeline
style. ... With a holistic understanding of how OT works, and knowledge of how a
given OT site works, suddenly new threat vectors come into focus, which can
blend with operational systems as elegantly as LotL attacks blend into Windows
or Linux systems. For instance, Derbyshire plans to demonstrate at RSAC how an attacker
can weaponize S7comm, Siemens' proprietary protocol for communication between
programmable logic controllers (PLCs). He'll show how, by manipulating
frequently overlooked configuration fields in S7comm, an attacker could
potentially leak sensitive data and transmit attacks across devices. He calls it
"an absolute brain melter." ... there are plenty of resources attackers can turn
to to understand OT products better, be they textbooks, chatbots, or even just
buying a PLC on a secondhand marketplace. "It still takes a bit of investment or
a bit of time going out of your way to find these obscure things. But it's never
been impossible and it's only getting easier," Derbyshire says.
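Derbyshire's point about overlooked S7comm fields can be made concrete. As a rough sketch only (not his tooling, and with illustrative function and field names), the fixed 10-byte S7comm PDU header can be parsed in a few lines of Python; the big-endian field layout follows publicly documented protocol dissections:

```python
import struct

def parse_s7_header(pdu: bytes) -> dict:
    """Parse the fixed 10-byte S7comm header (sketch; names are illustrative).

    Layout (big-endian): protocol id (0x32), ROSCTR message type,
    2 reserved bytes, PDU reference, parameter length, data length.
    """
    if len(pdu) < 10 or pdu[0] != 0x32:
        raise ValueError("not an S7comm PDU")
    _proto, rosctr, _reserved, pdu_ref, param_len, data_len = struct.unpack(
        ">BBHHHH", pdu[:10]
    )
    return {
        "rosctr": rosctr,        # 0x01 job request, 0x03 ack-data, ...
        "pdu_ref": pdu_ref,      # correlates requests with responses
        "param_len": param_len,  # length of the parameter section
        "data_len": data_len,    # length of the data section
    }
```

Fields like the PDU reference and section lengths are exactly the kind of "frequently overlooked" values the article describes; a defender auditing S7comm traffic would inspect them the same way an attacker would.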
The missing layer between agent connectivity and true collaboration
Today's AI challenge is about agent coordination, context, and collaboration.
How do you enable them to truly think together, with all the contextual
understanding, negotiation, and shared purpose that entails? It's a critical
next step toward a new kind of distributed intelligence that keeps humans
firmly in the loop. ... While protocols like MCP and A2A have solved basic
connectivity, and AGNTCY tackles problems ranging from discovery and identity
management to inter-agent communication and observability, they've only
addressed the equivalent of making a phone call between two people who don't
speak the same language. But Pandey's team has identified something deeper
than technical plumbing: the need for agents to achieve collective
intelligence, not just coordinated actions. ... “We have to mimic human
evolution,” Pandey explained. “In addition to agents getting smarter and
smarter, just like individual humans, we need to build infrastructure that
enables collective innovation, which implies sharing intent, coordination, and
then sharing knowledge or context and evolving that context.” ... Guardrails
remain a central challenge in deploying multi-functional agents that touch
every part of an organization's system. The question is how to enforce
boundaries without stifling innovation. Organizations need strict, rule-like
guardrails, but humans don't actually work that way. Instead, people operate
on a principle of minimal harm: thinking ahead about consequences and
making contextual judgments.
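That distinction between rule-like guardrails and minimal-harm judgment can be sketched in code. The following hypothetical Python example (all action names, weights, and thresholds are invented for illustration) combines an absolute veto list with a contextual risk score:

```python
# Hypothetical guardrail sketch: hard rules veto an action outright;
# everything else gets a contextual "minimal harm" risk estimate.
# All names, weights, and thresholds here are invented for illustration.
HARD_RULES = {
    "delete_production_data",   # never allowed, regardless of context
    "exfiltrate_credentials",
}

def risk_score(context: dict) -> float:
    """Toy minimal-harm estimate weighing blast radius and reversibility."""
    score = 0.0
    score += 0.5 * context.get("blast_radius", 0.0)            # 0..1 scale
    score += 0.4 * (0.0 if context.get("reversible", True) else 1.0)
    score += 0.1 * (0.0 if context.get("human_reviewed", False) else 1.0)
    return score

def allow(action: str, context: dict, threshold: float = 0.6) -> bool:
    if action in HARD_RULES:
        return False                           # rule-like guardrail: absolute
    return risk_score(context) < threshold     # contextual judgment
```

The design point is that neither layer alone suffices: a pure rule list is too rigid for novel situations, while a pure risk score can be argued past; layering them mirrors the "minimal harm" behavior the article attributes to humans.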
Cyber firms face ‘verification crisis’ on real risk
Continuous Threat Exposure Management, commonly referred to as CTEM, has
become more widely adopted as a way to structure security work around an
organisation's exposure to attack. Even so, only 33% of organisations measure
whether exploitable risk is actually reduced over time, according to the
report. Instead, most programmes continue to track metrics focused on
discovery and volume, such as coverage gaps, asset counts and alert volume.
These measures can show rising activity and expanding scope, but they do not
necessarily show whether the organisation has reduced the likelihood of a
successful attack. "Security programs keep adding tools and expanding scope,
but outcomes aren't improving," said Rogier Fischer, CEO and co-founder of
Hadrian. ... According to the report, these vulnerabilities were not unknown.
They were identified and recorded, but competed for attention as security
teams dealt with new alerts, new tickets and the ongoing output of multiple
tools. In organisations with complex technology estates, this can create a
persistent backlog in which older issues remain unresolved while new potential
risks continue to surface. "Security teams can move fast, but too many tools
and unverified alerts make it difficult to maintain focus on what actually
matters," Fischer said. The report calls for earlier validation of
exploitability and success measures that focus on reducing real exposure
rather than the number of findings generated.

Trust and Compliance in the Age of AI: Navigating the Risks of Intelligent Software Development
One of the most pressing challenges is trust in AI-generated outputs: many teams report minimal productivity gains despite operational deployment, citing issues such as hallucinated code, misleading suggestions, and a lack of explainability. This trust gap is amplified by the opaque nature of many AI systems; developers often report struggling to understand how models arrive at decisions, making it difficult for them to validate outputs or debug errors. This lack of transparency, known as black box AI, puts teams at risk of accepting flawed code or test cases, potentially introducing vulnerabilities or performance regressions. ... AI's reliance on data introduces significant compliance risks, especially when proprietary documentation or sensitive datasets are used to train models. Continuing to conduct business the old-fashioned way is not the answer, because traditional compliance frameworks often lag behind AI innovation, and governance models built for deterministic systems struggle with probabilistic outputs and autonomous decision-making. ... Another risk with potentially serious consequences: AI-generated code often lacks context. It may not align with architectural patterns, business rules, or compliance requirements, and without rigorous review, these changes can degrade system integrity and increase technical debt. It also must be noted that faster code generation does not equal better code. There is a risk of "bloated" or insecure code being generated, requiring rigorous validation.

The Cost of AI Slop in Lines of Code
Before we can get to the problem of excessive lines of code, we need to
understand how LLMs arrived at the generation of code with unnecessary lines.
The answer is in the training dataset and how that dataset was sourced from
publicly accessible places, including open repositories on GitHub and coding
websites. These sources lack any form of quality control, and therefore the
code the LLMs learned on is of varying quality. ... In the quest to get as
much training data as possible, little effort was made to vet the
training data to ensure its quality. The result is LLMs
outputting the kind of code written by a first-year developer – and that
should be concerning to us. ... Some of the common vulnerabilities that we’ve
known about for decades, including cross-site scripting, SQL injection, and
log injection, are the kinds of vulnerabilities that AI introduces into the
code – and it generates this code at rates that are multiples of what even
junior developers produce. In a time when it’s important that we be more
cautious about security, AI can’t deliver that caution. ... Today, we have AI generating
bloated code that creates maintenance problems, and we’re looking the other
way. It can’t structure code to minimize code duplication. It doesn’t care
that there are two, three, four, or more implementations of basic operations
that could be made into one generic function. The code it was trained on
didn’t contain the abstractions that create the right functions, so it can’t
get there.
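The duplication complaint is easy to illustrate. Below is a toy Python example (all names invented) of the pattern described: near-identical helpers of the kind LLM output often contains, followed by the single generic function they could have been:

```python
# Toy illustration of the duplication pattern: LLM-style output often
# repeats the same basic operation per field ...
def clamp_price(v):
    return 0.0 if v < 0.0 else (100.0 if v > 100.0 else v)

def clamp_discount(v):
    return 0.0 if v < 0.0 else (1.0 if v > 1.0 else v)

# ... which a maintainer would collapse into one generic abstraction:
def clamp(value, lo, hi):
    """Single function replacing every per-field clamp variant."""
    return max(lo, min(hi, value))
```

Each duplicate is individually trivial, but at scale the variants multiply maintenance cost: a bug fixed in one copy silently survives in the others, which is exactly the bloat the article warns about.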