Quote for the day:
"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis
Why agentic AI is the next wave of innovation

AI agents have become integral to modern enterprises, not just enhancing
productivity and efficiency, but unlocking new levels of value through
intelligent decision-making and personalized experiences. The latest trends
indicate a significant shift towards proactive AI agents that anticipate user
needs and act autonomously. These agents are increasingly equipped with
hyper-personalization capabilities, tailoring interactions based on individual
preferences and behaviors. ... According to NVIDIA, when Azure AI Agent
Service is paired with NVIDIA AgentIQ, an open-source toolkit, developers can
now profile and optimize teams of AI agents in real time to reduce latency,
improve accuracy, and drive down compute costs. ... “The launch of NVIDIA NIM
microservices in Azure AI Foundry offers a secure and efficient way for Epic
to deploy open-source generative AI models that improve patient care, boost
clinician and operational efficiency, and uncover new insights to drive
medical innovation,” says Drew McCombs, vice president, cloud and analytics at
Epic. “In collaboration with UW Health and UC San Diego Health, we’re also
researching methods to evaluate clinical summaries with these advanced models.
Together, we’re using the latest AI technology in ways that truly improve the
lives of clinicians and patients.”
Businesses intensify efforts to secure data in cloud computing

Building a robust security strategy begins with understanding the delineation
between the customer's and the provider's responsibilities. Customers are
typically charged with securing network controls, identity and access
management, data, and applications within the cloud, while the cloud service
provider (CSP) maintains
the core infrastructure. The specifics of these responsibilities depend on the
service model and provider in question. The importance of effective cloud
security has grown as more organisations shift away from traditional
on-premises infrastructure. This shift brings new regulatory expectations
relating to data governance and compliance. Hybrid and multicloud environments
offer businesses unprecedented flexibility, but also introduce complexity,
increasing the challenge of preventing unauthorised access. ... Attackers are
adjusting their tactics accordingly, viewing cloud environments as potentially
vulnerable targets. According to the statement, "A well-thought-out cloud
security plan can significantly reduce the likelihood of breaches or damage,
enhance compliance, and increase customer trust—even though it can never
completely prevent attacks and vulnerabilities."
Safeguarding the Foundations of Enterprise GenAI

Implementing strong identity security measures is essential to mitigate risks
and protect the integrity of GenAI applications. Many identities have high
levels of access to critical infrastructure and, if compromised, could provide
attackers with multiple entry points. It is important to emphasise that
privileged users include not just IT and cloud teams but also business users,
data scientists, developers and DevOps engineers. A compromised developer
identity, for instance, could grant access to sensitive code, cloud functions,
and enterprise data. Additionally, the GenAI backbone relies heavily on
machine identities to manage resources and enforce security. As machine
identities often outnumber human ones, securing them is crucial. Adopting a
Zero Trust approach is vital, extending security controls beyond basic
authentication and role-based access to minimise potential attack surfaces. To
enhance identity security across all types of identities, several key controls
should be implemented. Enforcing strong adaptive multi-factor authentication
(MFA) for all user access is essential to prevent unauthorised entry. Securing
access to credentials, keys, certificates, and secrets—whether used by humans,
backend applications, or scripts—requires auditing their use, rotating them
regularly, and ensuring that API keys or tokens that cannot be automatically
rotated are not permanently assigned.
The new frontier of API governance: Ensuring alignment, security, and efficiency through decentralization

To effectively govern APIs in a decentralized landscape, organizations must
embrace new principles that foster collaboration, flexibility and shared
responsibility. Optimized API governance is not about abandoning control,
rather about distributing it strategically while still maintaining overarching
standards and ensuring critical aspects such as security, compliance and
quality. This includes granting development teams autonomy to design,
develop and manage their APIs within clearly defined boundaries and
guidelines. This encourages innovation while fostering ownership and allows
each team to optimize their APIs to their specific needs. This can be further
established by a shared responsibility model amongst teams where they are
accountable for adhering to governance policies while a central governing body
provides the overarching framework, guidelines and support. This operating
model can be further supported by cultivating a culture of collaboration and
communication between central governance teams and development teams. The
central governance team can have a representative from each development team
and have clear channels for feedback, shared documentation and joint
problem-solving scenarios. Implementing governance policies as code and
leveraging tools and automation makes it easier to enforce standards
consistently and efficiently across the decentralized environment.
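"Governance policies as code" can be as simple as a list of checks run against each API's descriptor in CI. The descriptor fields (owner, auth) and approved schemes below are assumptions for illustration, not a specific tool's schema:

```python
# Sketch of governance policy as code: each rule is a function that
# inspects an API descriptor and returns a list of violations.
# The fields ("owner", "auth") and approved schemes are illustrative.
def check_has_owner(api: dict) -> list[str]:
    return [] if api.get("owner") else ["API must declare an owning team"]

def check_requires_auth(api: dict) -> list[str]:
    approved = {"oauth2", "mtls"}
    return [] if api.get("auth") in approved else ["API must use an approved auth scheme"]

POLICIES = [check_has_owner, check_requires_auth]

def evaluate(api: dict) -> list[str]:
    """Run every policy against an API descriptor; empty means compliant."""
    return [issue for policy in POLICIES for issue in policy(api)]
```

Because the rules live in a shared repository, the central governance body can version and evolve the framework while each team runs the same checks against its own APIs.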
Banking on innovation: Engineering excellence in regulated financial services

While financial services regulations aren’t likely to get simpler, banks are
finding ways to innovate without compromising security. "We’re seeing a
culture change with our security office and regulators," explains Lanham. "As
cloud tech, AI, and LLMs arrive, our engineers and security colleagues have to
upskill." Gartner's 2025 predictions indicate that GenAI is shifting data
security priorities toward protecting unstructured data. Rather than
cybersecurity taking a gatekeeper role, security by design is built into
development processes. "Instead of saying 'no', the culture is, how can we be
more confident in saying 'yes'?"
notes Lanham. "We're seeing a big change in our security posture, while
keeping our customers' safety at the forefront." As financial organizations
carefully tread a path through digital and AI transformation, the most
successful will balance innovation with compliance, speed with security, and
standardization with flexibility. Engineering excellence in financial services
needs leaders who can set a clear vision while balancing tech potential with
regulations. The path won’t be simple, but by investing in simplification,
standardization and a shared knowledge and security culture, financial
services engineering teams can drive positive change for millions of banking
customers.
‘Data security has become a trust issue, not just a tech issue’

Data is very messy and data ecosystems are very complex. Every organisation we
speak to has data across multiple different types of databases and data stores
for different use cases. As an industry, we need to acknowledge the fact that no
organisation has an entirely homogeneous data stack, so we need to support and
plug into a wide variety of data ecosystems, like Databricks, Google and Amazon,
regardless of the tooling used for data analytics, for integration, for quality,
for observability, for lineage and the like. ... Cloud adoption is causing
organisations to rethink their traditional approach to data. Most use cloud data
services to provide a shortcut to seamless data integration, efficient
orchestration, accelerated data quality and effective governance. In reality,
most organisations will need to adopt a hybrid approach to address their entire
data landscape, which typically spans a wide variety of sources across both
cloud and on-premises environments. ... Data security has become a trust issue, not just a
tech issue. With AI, hybrid cloud and complex supply chains, the attack surface
is massive. We need to design with security in mind from day one – think secure
coding, data-level controls and zero-trust principles. For AI, governance is
critical, and it too needs to be designed in and not an afterthought. That means
tracking where data comes from, how models are trained, and ensuring
transparency and fairness.
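Designing in that kind of traceability means every dataset and model carries a record of what it was derived from. A minimal sketch, with field names invented for illustration rather than taken from any specific lineage tool:

```python
# Minimal sketch of design-time provenance: each artifact (dataset or
# model) records its immediate sources, and lineage is recovered by
# walking those links. Field names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Provenance:
    source: str                          # e.g. upstream system or job id
    derived_from: list = field(default_factory=list)  # ids of parent artifacts

def lineage(record: Provenance, registry: dict) -> list[str]:
    """Walk derived_from links to list every upstream source of a record."""
    sources = [record.source]
    for parent_id in record.derived_from:
        sources.extend(lineage(registry[parent_id], registry))
    return sources
```

With records like these attached at creation time, questions such as "which raw sources fed the model behind this prediction?" become a traversal rather than a forensic investigation.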
Secure by Design vs. DevSecOps: Same Security Goal, Different Paths

Although the "secure by design" initiative offers limited guidance on how to
make an application secure by default, it comes closer to being a distinct set
of practices than DevSecOps. The latter is more of a high-level philosophy that
organizations must interpret on their own; in contrast, secure by design
advocates specific practices, such as selecting software architectures that
mitigate the risk of data leakage and avoiding memory management practices that
increase the chances of attackers executing malicious code. ...
Whereas DevSecOps focuses on all stages of the software development life cycle,
the secure by design concept is geared mainly toward software design. It deals
less with securing applications during and after deployment. Perhaps this makes
sense because so long as you start with a secure design, you need to worry less
about risks once your application is fully developed — although given that
there's no way to guarantee an app can't be hacked, DevSecOps' holistic approach
to security is arguably the more responsible one. ... Even if you conclude
that secure by design and DevSecOps mean basically the same thing, one notable
difference is that the government sector has largely driven the secure by design
initiative, while DevSecOps is more popular within private industry.
Immutable by Design: Reinventing Business Continuity and Disaster Recovery

Immutable backups create tamper-proof copies of data, protecting it from cyber
threats, accidental deletion, and corruption. Because these copies cannot be
manipulated or altered, critical data remains secure and can be quickly
restored, allowing businesses to recover swiftly from disruptions. In addition
to immutable backup storage, response plans must be
continually tested and updated to combat the evolving threat landscape and adapt
to growing business needs. The ultimate test of a response plan ensures data
can be quickly and easily restored or failed over, depending on the event:
activating a second site in the case of a natural disaster, or recovering
systems without making any ransomware payments in the case of an attack. This testing
involves validating the reliability of backup systems, recovery procedures, and
the overall disaster recovery plan to minimize downtime and ensure business
continuity. ... It can be challenging for IT teams trying to determine the
perfect fit for their ecosystem, as many storage vendors claim to provide
immutable storage but are missing key features. As a rule of thumb, if
"immutable" data can be overwritten by a backup or storage admin, a vendor, or
an attacker, then it is not a truly immutable storage solution.
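That rule of thumb is easy to express as a property: a write-once store must reject every overwrite, regardless of who asks. The toy class below illustrates the contract only; a genuine solution enforces it in the storage layer (for example, WORM or object-lock mechanisms), not in application code:

```python
# Toy write-once ("immutable") store illustrating the admin-proof
# contract: once a key is written, overwrites and deletes fail for
# every caller. Real immutability must be enforced by the storage
# layer itself, not by application code like this.
class ImmutableStore:
    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        if key in self._objects:
            raise PermissionError(f"{key} is immutable; overwrite rejected")
        self._objects[key] = bytes(data)

    def get(self, key: str) -> bytes:
        return self._objects[key]
```

When evaluating a vendor, the same test applies: if any role or API path can rewrite a "locked" backup before its retention period expires, the immutability claim fails.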
Neurohacks to outsmart stress and make better cybersecurity decisions

In cybersecurity where clarity and composure are essential, particularly during
a data breach or threat response, these changes can have high-stakes
consequences. “The longer your brain is stuck in this high-stress state, the
more of those changes you will start to see and burnout is just an extreme case
of chronic stress on the brain,” Landowski says. According to her, the tipping
point between healthy stress and damaging chronic stress usually comes after
about eight to 12 weeks, but it varies between individuals. “If you know about
some of the things you can do to reduce the impact of stress on your body, you
can potentially last a lot longer before you see any effects, whereas if you’re
less resilient, or if your genes are more susceptible to stress, then it could
be less.” ... working in cybersecurity, particularly as a hacker, is often about
understanding how people think and then spotting the gaps. That same shift in
understanding — tuning into how the brain works under different conditions — can
help cybersecurity leaders make better decisions and build more resilient teams.
As Cerf highlights, he works with organizations to identify these optimal
operating states, testing how individuals and entire teams respond to stress and
when their brains are most effective. “The brain is not just a solid thing,”
Cerf says.
Beyond Safe Models: Why AI Governance Must Tackle Unsafe Ecosystems
Despite the evident risks of unsafe deployment ecosystems, the prevailing
approach to AI governance still heavily emphasizes pre-deployment
interventions—such as alignment research, interpretability tools, and red
teaming—aimed at ensuring that the model itself is technically sound. Governance
initiatives like the EU AI Act, while vital, primarily place obligations on
providers and developers to ensure compliance through documentation,
transparency, and risk management plans. However, the governance of what happens
after deployment, when these models enter institutions with their own incentives,
infrastructures, and oversight, receives comparatively less attention. For
example, while the EU AI Act introduces post-market monitoring and deployer
obligations for high-risk AI systems, these provisions remain limited in scope.
Monitoring primarily focuses on technical compliance and performance, with
little attention to broader institutional, social, or systemic impacts. Deployer
responsibilities are only weakly integrated into ongoing risk governance and
focus primarily on procedural requirements—such as record-keeping and ensuring
human oversight—rather than assessing whether the deploying institution has the
capacity, incentives, or safeguards to use the system responsibly.