Quote for the day:
“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel
Microservice Integration Testing a Pain? Try Shadow Testing

Shadow testing is especially useful for microservices with frequent
deployments, helping services evolve without breaking dependencies. It
validates schema and API changes early, reducing risk before consumer impact.
It also assesses performance under real conditions and ensures proper
compatibility with third-party services. ... Shadow testing doesn’t replace
traditional testing but rather complements it by reducing reliance on fragile
integration tests. While unit tests remain essential for validating logic and
end-to-end tests catch high-level failures, shadow testing fills the gap of
real-world validation without disrupting users. Shadow testing follows a
common pattern regardless of environment and has been implemented by tools
like Diffy from Twitter/X, which introduced automated-response comparisons to
detect discrepancies effectively. ... The environment where shadow testing is
performed may vary, providing different benefits. More realistic environments
are obviously better:
Staging shadow testing — Easier to set up, avoids compliance and data
isolation issues, and can use synthetic or anonymized production traffic to
validate changes safely.
Production shadow testing — Provides the most accurate validation using live
traffic but requires safeguards for data handling, compliance and test
workload isolation.
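The Diffy-style comparison at the heart of shadow testing can be sketched in a few lines: mirror each request to both the current and the candidate version of a service, then diff the two responses field by field. This is a minimal illustration, not Diffy's actual API; the endpoint URLs and the `fetch` helper are assumptions for the sketch.

```python
import json
import urllib.request

# Hypothetical endpoints for the stable and the shadow deployment.
BASELINE_URL = "http://baseline.internal/api"
CANDIDATE_URL = "http://candidate.internal/api"

def fetch(base_url, path):
    """Fetch a JSON response from one deployment (simplified: no timeouts/retries)."""
    with urllib.request.urlopen(base_url + path) as resp:
        return json.loads(resp.read())

def diff_responses(baseline, candidate, prefix=""):
    """Recursively compare two JSON responses; return (field, old, new) differences."""
    diffs = []
    if isinstance(baseline, dict) and isinstance(candidate, dict):
        for key in sorted(set(baseline) | set(candidate)):
            diffs += diff_responses(baseline.get(key), candidate.get(key),
                                    f"{prefix}.{key}" if prefix else key)
    elif baseline != candidate:
        diffs.append((prefix, baseline, candidate))
    return diffs
```

In a real pipeline the mirrored candidate call happens off the critical path, and only the baseline response is returned to the user; the diffs are logged for review.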
The rising threat of shadow AI

Creating an Office of Responsible AI can play a vital role in a governance
model. This office should include representatives from IT, security, legal,
compliance, and human resources to ensure that all facets of the organization
have input in decision-making regarding AI tools. This collaborative approach
can help mitigate the risks associated with shadow AI applications. You want
to ensure that employees have secure and sanctioned tools. Don’t forbid
AI—teach people how to use it safely. Indeed, the “ban all tools” approach
never works; it lowers morale, causes turnover, and may even create legal or
HR issues. The call to action is clear: Cloud security administrators must
proactively address the shadow AI challenge. This involves auditing current AI
usage within the organization and continuously monitoring network traffic and
data flows for any signs of unauthorized tool deployment. Yes, we’re creating
AI cops. However, don’t think they get to run around and point fingers at
people or let your cloud providers point fingers at you. This is one of those
problems that can only be solved with a proactive education program aimed at
making employees more productive and not afraid of getting fired. Shadow AI
may be yet another buzzword to track, but it is also, undeniably, a growing
problem for cloud computing security administrators.
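One concrete form the auditing step above can take is scanning proxy or DNS logs for traffic to known generative-AI endpoints. The sketch below is a toy illustration: the domain set and the one-record-per-line log format are assumptions, not a vetted blocklist or a real log schema.

```python
# Illustrative (not exhaustive) set of public generative-AI API domains.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for outbound requests to known AI endpoints.

    Assumes each log line starts with "<user> <domain> ..." — adapt the
    parsing to your proxy's actual format.
    """
    hits = []
    for line in log_lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        user, domain = fields[0], fields[1]
        if domain in KNOWN_AI_DOMAINS:
            hits.append((user, domain))
    return hits
```

Consistent with the article's point, a hit list like this should feed an education program, not a disciplinary one.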
Can AI live up to its promise?

The debate about truly transformative AI may not be about whether it can think
or be conscious like a human, but rather about its ability to perform complex
tasks across different domains autonomously and effectively. It is important
to recognize that the value and usefulness of machines does not depend on
their ability to exactly replicate human thought and cognitive abilities, but
rather on their ability to achieve similar or better results through different
methods. Although the human brain has inspired much of the development of
contemporary AI, it need not be the definitive model for the design of
superior AI. Perhaps by freeing the development of AI from strict neural
emulation, researchers can explore novel architectures and approaches that
optimize different objectives, constraints, and capabilities, potentially
overcoming the limitations of human cognition in certain contexts. ... Some
human factors that could be stumbling blocks on the road to transformative AI
include: the information overload we receive, the possible misalignment with
our human values, the possible negative perception we may be acquiring, the
view of AI as our competitor, the excessive dependence on human experience,
the possible perception of futility of ethics in AI, the loss of trust,
overregulation, diluted efforts in research and application, the idea of human
obsolescence, or the possibility of an “AI-cracy”, for example.
The end of net neutrality: A wake-up call for a decentralized internet
We live in a time when the true ideals of a free and open internet are under
attack. The most recent repeal of net neutrality regulations is taking us
toward a more centralized, controlled version of the internet. In this
scenario, a decentralized, permissionless internet offers a powerful
alternative to today’s reality. Decentralized systems can address the threat
of censorship by distributing content across a network of nodes, ensuring that
no single entity can block or suppress information. Decentralized physical
infrastructure networks (DePIN) demonstrate how decentralized storage can keep
data accessible even when network parts are disrupted or taken offline. This
censorship resistance is crucial in regions where governments or corporations
try to limit free expression online. Decentralization can also cultivate
economic democracy by eliminating intermediaries like ISPs and related fees.
Blockchain-based platforms allow smaller, newer players to compete with
incumbent services and content companies on a level playing field. The Helium
network, for example, uses a decentralized model to challenge traditional
telecom monopolies with community-driven wireless infrastructure. In a
decentralized system, developers don’t need approval from ISPs to launch new
services.
Steering by insights: A C-Suite guide to make data work for everyone

With massive volumes of data to make sense of, having reliable and scalable
modern data architectures that can organise and store data in a structured,
secure, and governed manner while ensuring data reliability and integrity is
critical. This is especially true in the hybrid, multi-cloud environment in
which companies operate today. Furthermore, as we face a new “AI summer”,
executives are experiencing increased pressure to respond to the tsunami of hype
around AI and its promise to enhance efficiency and competitive differentiation.
This means companies will need to rely on high-quality, verifiable data to
implement AI-powered technologies such as generative AI and large language
models (LLMs) at an enterprise scale. ... Beyond infrastructure, companies in
India need to
look at ways to create a culture of data. In today’s digital-first
organisations, many businesses require real-time analytics to operate
efficiently. To enable this, organisations need to create data platforms that
are easy to use and equipped with the latest tools and controls so that
employees at every level can get their hands on the right data to unlock
productivity, saving them valuable time for other strategic priorities. Building
a data culture also needs to come from the top; it is imperative to ensure that
data is valued and used strategically and consistently to drive decision-making.
The Hidden Cost of Compliance: When Regulations Weaken Security

What might be a bit surprising, however, is one particular pain point that
customers in this vertical bring up repeatedly. What is this mysterious pain
point? I’m not sure if it has an official name or not, but many people I meet
with share with me that they are spending so much time responding to regulatory
findings that they hardly have time for anything else. This is troubling to say
the least. It may be an uncomfortable discussion to have, but I'd argue it is
long past time that we as a security community had this discussion. ...
The threats enterprises face change and evolve quickly – even rapidly I might
say. Regulations often have trouble keeping up with the pace of that change.
This means that enterprises are often forced to solve last year’s or even last
decade’s problems, rather than the problems that might actually pose a far
greater threat to the enterprise. In my opinion, regulatory agencies need to
move more quickly to keep pace with the changing threat landscape. ...
Regulations are often produced by large, bureaucratic bodies that do not move
particularly quickly. This means that if some part of the regulation is
ineffective, overly burdensome, impractical, or otherwise needs adjusting, it
may take some time before this change happens. In the interim, enterprises have
no choice but to comply with something that the regulatory body has already
acknowledged needs adjusting.
Why the future of privileged access must include IoT – securing the unseen

The application of PAM to IoT devices brings unique complexities. The vast
variety of IoT devices, many of which have been operational for years, often
lack built-in security, user interfaces, or associated users. Unlike traditional
identity management, which revolves around human credentials, IoT devices rely
on keys and certificates, with each device undergoing a complex identity
lifecycle over its operational lifespan. Managing these identities across
thousands of devices is a resource-intensive task, exacerbated by constrained IT
budgets and staff shortages. ... Implementing a PAM solution for IoT involves
several steps. Before anything else, organisations need to achieve visibility of
their network. Many currently lack this crucial insight, making it difficult to
identify vulnerabilities or manage device access effectively. Once this
visibility is achieved, organisations must then identify and secure high-risk
privileged accounts to prevent them from becoming entry points for attackers.
Automated credential management is essential to replace manual password
processes, ensuring consistency and reducing oversight. Policies must be
enforced to authorise access based on pre-defined rules, guaranteeing secure
connections from the outset. Default credentials – a common exploit for
attackers – should be updated regularly, and automation can handle this
efficiently.
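The last step described above, automated rotation of default credentials, is straightforward to sketch. The device records, the password policy, and the in-memory update below are illustrative assumptions; a real PAM deployment would push new credentials to devices over an authenticated channel and store them in a vault.

```python
import secrets
import string

# Factory-default passwords commonly abused by attackers (illustrative list).
DEFAULT_PASSWORDS = {"admin", "password", "12345", ""}

def generate_password(length=20):
    """Generate a random credential using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def rotate_defaults(devices):
    """Replace any default credential in the fleet; return the rotated device IDs."""
    rotated = []
    for device in devices:
        if device["password"] in DEFAULT_PASSWORDS:
            device["password"] = generate_password()
            rotated.append(device["id"])
    return rotated
```

Running a pass like this on a schedule is what turns "default credentials should be updated regularly" from a policy statement into an enforced control.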
Understanding the AI Act and its compliance challenges
There is a clear tension between the transparency obligations imposed on
providers of certain AI systems under the AI Act and some of their rights and
business interests, such as the protection of trade secrets and intellectual
property. The EU legislator has expressly recognized this tension, as multiple
provisions of the AI Act state that transparency obligations are without
prejudice to intellectual property rights. For example, Article 53 of the AI
Act, which requires providers of general-purpose AI models to provide certain
information to organizations that wish to integrate the model downstream,
explicitly calls out the need to observe and protect intellectual property
rights and confidential business information or trade secrets. In practice, a
good faith effort from all parties will be required to find the appropriate
balance between the need for transparency to ensure safe, reliable and
trustworthy AI, while protecting the interests of providers that invest
significant resources in AI development. ... The AI Act imposes a number of
obligations on AI system vendors that will help in-house lawyers in carrying out
this diligence. Under Article 13 of the AI Act, vendors of high-risk AI systems
are, for example, required to provide sufficient information to (business)
deployers to allow them to understand the high-risk AI system’s operation and
interpret its output.
Why fast-learning robots are wearing Meta glasses

The technology acts as a sophisticated translator between human and robotic
movement. Using mathematical techniques called Gaussian normalization, the
system maps the rotations of a human wrist to the precise joint angles of a
robot arm, ensuring natural motions get converted into mechanical actions
without dangerous exaggerations. This movement translation works alongside a
shared visual understanding — both the human demonstrator’s smartglasses and the
robot’s cameras feed into the same artificial intelligence program, creating
common ground for interpreting objects and environments. ... The EgoMimic
researchers didn’t invent the concept of using consumer electronics to train
robots. One pioneer in the field, a former healthcare-robot researcher named Dr.
Sarah Zhang, has demonstrated 40% improvements in the speed of training
healthcare robots using smartphones and digital cameras; these devices let
nurses teach robots through gestures, voice commands, and real-time
demonstrations
instead of complicated programming. This improved robot training is made
possible by AI that can learn from fewer examples. A nurse might show a robot
how to deliver medications twice, and the robot generalizes the task to handle
variations like avoiding obstacles or adjusting schedules.
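The retargeting idea described above, normalizing a human wrist trajectory and rescaling it into a robot joint's range with safety clamping, can be sketched as follows. This is a simplified illustration of the concept, not EgoMimic's actual implementation; all statistics and joint limits here are made up.

```python
import statistics

def retarget(human_angles, robot_mean, robot_std, joint_limits):
    """Map a human wrist-angle trajectory onto a robot joint.

    Standardizes the human motion against its own mean/std (the "Gaussian
    normalization" idea), rescales it to the robot joint's statistics, and
    clamps to joint limits so exaggerated motions stay mechanically safe.
    """
    mu = statistics.mean(human_angles)
    sigma = statistics.stdev(human_angles) or 1.0  # guard against zero spread
    lo, hi = joint_limits
    commands = []
    for angle in human_angles:
        z = (angle - mu) / sigma              # standardize the human motion
        cmd = robot_mean + z * robot_std      # rescale into the robot's range
        commands.append(min(max(cmd, lo), hi))  # clamp to joint limits
    return commands
```

The clamp is what prevents the "dangerous exaggerations" the article mentions: even an outlier in the demonstration can never command the joint past its limits.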
Targeted by Ransomware, Middle East Banks Shore Up Security

The financial services industry in UAE — and the Middle East at large — sees
cyber wargaming as an important way to identify weaknesses and develop defenses
to the latest threats, Jamal Saleh, director general of the UAE Banks
Federation, said in a statement announcing the completion of the event. "The
rapid adoption and deployment of advanced technologies in the banking and
financial sector have increased risks related to transaction security and
digital infrastructure," he said in the statement, adding that the sector is
increasingly aware "of the importance of such initiatives to enhance
cybersecurity systems and ensure a secure and advanced environment for
customers, especially with the rapid developments in modern technology and the
rise of cybersecurity threats using advanced artificial intelligence (AI)
techniques." ... Ransomware remains a major threat to the financial industry,
but attackers have shifted from distributed denial-of-service (DDoS) attacks to
phishing, data breaches, and identity-focused attacks, according to Shilpi
Handa, associate research director for the Middle East, Turkey, and Africa at
business intelligence firm IDC. "We see trends such as increased investment in
identity and data security, the adoption of integrated security platforms, and a
focus on operational technology security in the finance sector," she says.