Quote for the day:
"Never stop trying; Never stop
believing; Never give up; Your day will come." -- Mandy Hale

"For defenders, these leaks are treasure troves," says Ensar Seker, chief
information security officer (CISO) at threat intelligence cybersecurity company
SOCRadar. "When analyzed correctly, they offer unprecedented visibility into
actor infrastructure, infection patterns, affiliate hierarchies, and even
monetization tactics." The data can help threat intel teams enrich indicators of
compromise (IoCs), map infrastructure faster, preempt attacks, and potentially
inform law enforcement disruption efforts, he says. "Organizations should track
these OpSec failures through their [cyber threat intelligence] programs," Seker
advises. "When contextualized correctly, they're not just passive observations;
they become active defensive levers, helping defenders move upstream in the kill
chain and apply pressure directly on adversarial capabilities." External leaks like the DanaBot leak are often, ironically, rooted in the same weaknesses that threat actors abuse to break into victim networks: misconfigurations, unpatched systems, and improper segmentation that can be exploited to gain unauthorized access. Open directories, exposed credentials, unsecured management panels, unencrypted APIs, and accidental data exposure via hosting providers all present further opportunities for external discovery and exploration, Baker says.
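
As a minimal illustration of the enrichment Seker describes, the sketch below pulls network indicators out of a leaked text dump and cross-references them against a local egress log. The file names, regexes, and matching logic are illustrative assumptions for a toy pipeline; a real CTI program would feed a platform such as MISP or OpenCTI instead.

```python
import re

# Hypothetical inputs for the sketch; a real pipeline ingests from a CTI feed.
LEAK_DUMP = "leak_dump.txt"    # text recovered from exposed actor infrastructure
EGRESS_LOG = "firewall.log"    # local log, one outbound connection per line

# Crude patterns, good enough for a sketch; production code would validate
# and defang candidates before treating them as indicators.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", re.IGNORECASE)

def extract_indicators(text: str) -> set[str]:
    """Pull candidate IPs and domains out of raw leak text."""
    return set(IP_RE.findall(text)) | set(DOMAIN_RE.findall(text))

with open(LEAK_DUMP, encoding="utf-8", errors="ignore") as f:
    indicators = extract_indicators(f.read())

# Flag any log line that touches infrastructure named in the leak.
with open(EGRESS_LOG, encoding="utf-8", errors="ignore") as f:
    for line in f:
        hits = [ioc for ioc in indicators if ioc in line]
        if hits:
            print(f"possible actor-infrastructure contact {hits}: {line.strip()}")
```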

The explosion of data across multi-cloud, hybrid, and on-premises environments is a cause for concern among global CIOs, with 86 per cent saying it is beyond the ability of humans to manage. Aware that the growing
complexity of their multi-provider cloud environments exposes their critical
data and puts their organisation’s business resilience at risk, these leaders
need to be confident they can restore their sensitive data at speed. They also
need certainty when it comes to rebuilding their cloud environment and
recovering their distributed cloud applications. To achieve these goals and
minimise the risk of contamination resulting from ransomware, CIOs need to
ensure their organisations implement a comprehensive cyber recovery plan that
prioritises the recovery of both clean data and applications and mitigates
downtime. ... Data recovery is just one aspect of cyber resilience for today’s
cloud-powered enterprises. Rebuilding applications is an often-overlooked task that can prove a time-consuming and highly complex proposition when undertaken manually. Having the capability to quickly recover what matters most should be a tried-and-tested component of every cloud-first strategy. Fortunately, today’s advanced security platforms now feature automation and AI options that can complete this process in hours or minutes rather than days or weeks.
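
As a hedged sketch of what "recover what matters most, first" can look like in practice, the Python below runs a priority-ordered recovery plan and verifies each backup's checksum before restoring it, so contaminated snapshots are skipped. The step fields, paths, and digest scheme are assumptions for illustration, not any particular platform's API.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class RecoveryStep:
    name: str
    backup_path: str      # hypothetical path to the backup artifact
    expected_sha256: str  # known-good digest recorded at backup time
    priority: int         # lower number = restore earlier

def is_clean(step: RecoveryStep) -> bool:
    """Reject backups whose digest no longer matches the known-good value,
    a cheap stand-in for the contamination checks a real platform performs."""
    digest = hashlib.sha256()
    with open(step.backup_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == step.expected_sha256

def run_recovery(plan: list[RecoveryStep]) -> None:
    # Restore the most critical data and applications first.
    for step in sorted(plan, key=lambda s: s.priority):
        if not is_clean(step):
            print(f"SKIP {step.name}: backup failed its integrity check")
            continue
        print(f"RESTORE {step.name} from {step.backup_path}")
        # ...invoke the platform's actual restore API here...
```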

Establishing and building an effective relationship with your boss is one of the
most important hard skills in business. You need to consciously work with your
supervisor in order to get the best results for them, your organization, and
yourself. In my experience, your boss will appreciate you initiating a
conversation regarding what is important to them and how you can help them be
more successful. Some managers are good at communicating their expectations, but
some are not. It is your job to seek to understand what your boss’s expectations
are. ... You must start with the assumption that everyone reporting to you is
working in good faith toward the same goals. You need to demonstrate a trusting,
humble, and honest approach to doing business. As the boss, you need to be a
mentor, coach, visionary, cheerleader, confidant, guide, sage, trusted partner,
and perspective keeper. It also helps to have a sense of humor. It is first
vital to articulate the organization’s values, set expectations, and establish
mutual accountability. Then you can focus on creating a safe work ecosystem. ...
You’ll begin to change the culture by establishing the values of the
organization. This is an important step to ensure that everyone is on the same
page and working toward the same goals. Then, you’ll need to make sure they
understand what is expected of them.

BYOC allows customers to run SaaS applications using their own cloud
infrastructure and resources rather than relying on a third-party vendor’s
infrastructure. This framework transforms how enterprises consume cloud services
by inverting the traditional vendor-customer relationship. Rather than exporting
sensitive information to vendor-controlled environments, organizations maintain
data custody while still receiving fully managed services. This approach
addresses a fundamental challenge in modern enterprise architecture: how to
maintain operational efficiency while also ensuring complete data control and
regulatory compliance. ... BYOC adoption is driven primarily by increasing
regulatory complexity around data sovereignty. The article Cloud Computing
Trends in 2025 notes that “data sovereignty concerns, particularly the location
and legal jurisdiction of data storage, are prompting cloud providers to invest
in localized data centers.” Organizations must navigate an increasingly
fragmented regulatory landscape while maintaining operational consistency. And
when regulations vary country by country, having data in multiple third-party
networks can dramatically compound the problem of knowing which data is subject
to a specific country’s regulations.
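
To make the jurisdiction problem concrete, here is a small, hedged sketch of a data-residency inventory: each dataset is mapped to the region it lives in and the regimes that apply there, and anything still in vendor custody is flagged for review. The region-to-regulation table and dataset names are invented for illustration, not legal guidance.

```python
# Illustrative mapping only; real residency analysis needs legal review.
REGION_REGIMES = {
    "eu-central-1": ["GDPR"],
    "us-east-1": ["CCPA"],
    "ap-south-1": ["DPDP"],
}

# Hypothetical inventory; under BYOC the goal is that "account" stays first-party.
DATASETS = [
    {"name": "payments",  "region": "eu-central-1", "account": "first-party"},
    {"name": "telemetry", "region": "us-east-1",    "account": "vendor"},
]

for ds in DATASETS:
    regimes = REGION_REGIMES.get(ds["region"], ["unknown"])
    custody = "ok" if ds["account"] == "first-party" else "REVIEW: vendor custody"
    print(f"{ds['name']}: {ds['region']} -> {regimes} [{custody}]")
```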

“There’s a lot of anxiety about AI and software creation in general, not
necessarily just frontend or backend, but people are rightfully trying to
understand what does this mean for my career,” Robinson told The New Stack. “If
the current rate of improvement continues, what will that look like in 1, 2, 3,
4 years? It could be pretty significant. So it has a lot of people stepping back
and evaluating what’s important to them in their career, where they want to
focus.” Armando Franco also sees anxiety around AI. Franco is the director
of technology modernization at TEKsystems Global Services, which employs more
than 3,000 people. It’s part of TEKsystems, a large global IT services
management firm that employs 80,000 IT professionals. ... This isn’t the first
time in history that people have fretted about new technologies, pointed out
Shreeya Deshpande, a senior analyst specializing in data and AI with the Everest
Group, a global research firm. “Fears that AI will replace developers mirror
historical anxieties seen during past technology shifts — and, as history shows,
these fears are often misplaced,” Deshpande said in a written response to The
New Stack. “AI will increasingly automate repetitive development tasks, but the
developer’s role will evolve rather than disappear — shifting toward activities
like AI orchestration, system-level thinking, and embedding governance and
security frameworks into AI-driven workflows.”

Applications generated by no-code platforms are first and foremost applications.
Therefore, their exploitability is first and foremost attributed to
vulnerabilities introduced by their developers. To make things worse for no-code
applications, they are also jeopardized by misconfigurations of the development
and deployment environments. ... Most platforms provide controls at various
levels that allow white-listing / blacklisting of connectors. This makes it
possible to put guardrails around the use of “standard integrations”. Keeping tabs on these lists in a dynamic environment with a large number of developers is a big challenge. Shadow APIs are even more difficult to track and manage, particularly when some of the endpoints are determined only at runtime. Most platforms do not provide granular control over the use of shadow APIs, but they do provide a kill switch to disable their use entirely. All of the platforms have mechanisms that, used correctly, allow for secure development of applications and automations. These mechanisms, which can help prevent injection vulnerabilities, traversal vulnerabilities, and other types of mistakes, vary in how complex they are for developers to use. Unvetted
data egress is also a big problem in these environments just as it is in general
enterprise environments.
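
A minimal sketch of the guardrails described above, assuming invented connector names: an explicit allowlist and denylist for standard integrations, plus the all-or-nothing kill switch that is typically the only lever available for shadow APIs.

```python
# Hypothetical policy tables; real platforms expose these as admin settings.
ALLOWED_CONNECTORS = {"sharepoint", "salesforce", "slack"}
DENIED_CONNECTORS = {"anonymous-ftp"}
SHADOW_API_KILL_SWITCH = True  # granular control is usually not available

def connector_permitted(name: str) -> bool:
    # Deny takes precedence; anything not explicitly allowed is blocked.
    return name not in DENIED_CONNECTORS and name in ALLOWED_CONNECTORS

def shadow_api_permitted(url: str) -> bool:
    # Endpoints determined only at runtime cannot be enumerated up front,
    # so the only reliable control is disabling them entirely.
    return not SHADOW_API_KILL_SWITCH

print(connector_permitted("slack"))                         # True
print(shadow_api_permitted("https://unknown.example/api"))  # False
```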

Cloud-based microservices provide many benefits to financial institutions across
operational efficiency, security, and technology modernization. Economically,
these architectures enable faster transaction processing by reducing latency and
optimizing resource allocation. They also lower infrastructure expenses by
replacing monolithic legacy systems with modular, scalable services that are
easier to maintain and operate. Furthermore, the shift to cloud technologies
increases demand for specialized roles in cloud operations and cybersecurity. In
security operations, microservices support zero-trust architectures and data
encryption to reduce the risk of fraud and unauthorized access. Cloud platforms
also enhance resilience by offering built-in redundancy and disaster recovery
capabilities, which help ensure continuous service and maintain data integrity
in the event of outages or cyber incidents. ... To build secure and scalable financial microservices, a few key technology stacks are needed: Docker and Kubernetes containerization for managing multiple microservices; cloud functions for serverless computing, used to run calculations on demand; API gateways to secure communication between services; and Kafka for real-time data monitoring and streaming.
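
As one hedged example of the streaming piece of that stack, the snippet below publishes a transaction event with the kafka-python client; the broker address, topic name, and event schema are assumptions for illustration.

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical broker and topic; production code would target the
# institution's managed Kafka cluster with TLS and authentication.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# A transaction event consumed downstream by fraud-monitoring services.
event = {"tx_id": "tx-1001", "amount": 250.00, "currency": "EUR"}
producer.send("transactions", value=event)
producer.flush()  # block until the broker acknowledges the write
```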

Among the key areas of innovation in the UEC 1.0 specification is a new
mechanism for network congestion control, which is critical for AI workloads.
Metz explained that the UEC’s approach to congestion control does not rely on a
lossless network as has traditionally been the case. It also introduces a new
mode of operation where the receiver is able to limit sender transmissions as
opposed to being passive. “This is critical for AI workloads as these primitives
enable the construction of larger networks with better efficiency,” he said.
“It’s a crucial element of reducing training and inference time.” ... Metz said
that four workgroups got started after the main 1.0 work began, each with their
own initiatives that solidify and simplify deploying UEC. These workgroups
include storage, management, compliance, and performance. He noted that all of these workgroups have projects in development to strengthen ease of use, deliver efficiency improvements in the next stages, and simplify provisioning. UEC is also working on educational materials to help inform networking administrators about UEC technology and concepts. The group is also working with industry ecosystem partners. “We have projects with OCP, NVM Express,
SNIA, and more – with many more on the way to work on each layer – from the
physical to the software,” Metz said.
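
The receiver-driven congestion-control mode Metz describes earlier can be pictured with a simple credit scheme: the receiver grants a budget of credits sized to its free buffer, and the sender transmits only while it holds credit. This is a generic credit-based flow-control sketch under that assumption, not the UEC wire protocol.

```python
from collections import deque

class Receiver:
    """Grants transmission credits sized to its free buffer space."""
    def __init__(self, buffer_slots: int):
        self.buffer = deque()
        self.buffer_slots = buffer_slots

    def grant_credits(self) -> int:
        return self.buffer_slots - len(self.buffer)

    def deliver(self, pkt) -> None:
        self.buffer.append(pkt)

class Sender:
    """Transmits only while it holds receiver-granted credits."""
    def __init__(self):
        self.credits = 0

    def transmit(self, rx: Receiver, packets: list) -> list:
        sent = []
        for pkt in packets:
            if self.credits == 0:
                break  # receiver is pacing us; stop rather than cause loss
            rx.deliver(pkt)
            self.credits -= 1
            sent.append(pkt)
        return sent

rx, tx = Receiver(buffer_slots=4), Sender()
tx.credits = rx.grant_credits()          # receiver limits the sender up front
print(tx.transmit(rx, list(range(10))))  # only 4 of 10 packets go out
```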

The key difference is that traditional models were built as generic tools
designed to perform a wide range of tasks. On the other hand, AI agents are
designed to meet businesses’ specific needs. A business can train a single agent, or a group of them, on its own data to handle tasks unique to its operations. This
translates to better outcomes, improved performance, and stronger business
impact. Another huge advantage of using AI agents is that they help unify
marketing efforts and create a cohesive marketing ecosystem. Another major shift
that comes with AI agent implementation is something called AIAO (AI Agent Optimisation). This is highly likely to become the next big alternative to
traditional SEO. Now, marketers optimise content around specific keywords like
“best project management software.” But with AIAO, that’s changing. AI agents
are built to understand and respond to much more complex, conversational
queries, like “What’s the best project management tool with timeline boards that
works for marketing teams?” It’s no longer about integrating the right phrases
into your content. It’s about ensuring your information is relevant, clear, and
easy for AI agents to understand and process. Semantic search is going to take
the lead.
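
As a toy stand-in for the semantic matching an AI agent performs, the sketch below scores the conversational query from the text against a few candidate pages using TF-IDF cosine similarity; real systems use learned embeddings, and the documents here are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented content snippets standing in for indexed pages.
docs = [
    "Project management software with timeline boards for marketing teams",
    "Best keyword research tools for SEO specialists",
    "Time tracking app for freelancers",
]
query = ["What's the best project management tool with timeline boards "
         "that works for marketing teams?"]

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(docs)  # index the content
scores = cosine_similarity(vec.transform(query), doc_matrix)[0]
best = max(range(len(docs)), key=lambda i: scores[i])
print(f"top match ({scores[best]:.2f}): {docs[best]}")
```
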
Let’s be clear, the network isn’t an accessory; it’s the key ingredient that
determines how well your cloud performs, how secure your data is, how quickly
you can recover from a disaster, and how easily you scale across borders or
platforms. Think of it as the highway system beneath your business. Sleek, fast
roads make for a smooth ride, while congested or patchy ones will leave you
stuck in traffic. ... It’s tempting to get caught up in the flashier parts of
cloud infrastructure, like server specs and cutting-edge tools, but none of it
works well without a strong network underneath. Here’s the truth. Your network
is doing the quiet, behind-the-scenes heavy lifting. It’s what keeps your games
lag-free, your financial systems always on, and your hybrid workloads running
smoothly across platforms even if it doesn’t get all the attention. You should
think of your network as the glue that holds it all together – from your cloud
services to your bare metal setup. It is what makes it possible for AI models to work seamlessly across regions, for backups to run smoothly in the background, and for your users to enjoy fast, always-on experiences without ever thinking about what’s happening behind the scenes. ... A reliable, secure, and performant network is nothing if it can’t be managed the right way. Having the right architecture, tools, and knowledge to support it is key to success.