Quote for the day:
"Next generation leaders are those who
would rather challenge what needs to change and pay the price than remain
silent and die on the inside." -- Andy Stanley

“The biggest challenge is fragmentation; most enterprises operate across
multiple cloud environments, each with its own security model, making unified
governance incredibly complex,” Dipankar Sengupta, CEO of Digital Engineering
Services at Sutherland Global, told InfoWorld. ... Shadow IT is also a
persistent threat and challenge. According to Sengupta, some enterprises
discover nearly 40% of their data exists outside governed environments.
Proactively discovering and onboarding those data sources has become
non-negotiable. ... A data fabric deepens organizations’ understanding and
control of their data and consumption patterns. “With this deeper
understanding, organizations can easily detect sensitive data and workloads in
potential violation of GDPR, CCPA, HIPAA and similar regulations,” Calvesbert
commented. “With deeper control, organizations can then apply the necessary
data governance and security measures in near real time to remain compliant.”
... Data security and governance inside a data fabric shouldn’t just be about
controlling access to data; they should also include some form of data
validation. The clichéd saying “garbage-in, garbage-out” is all too true when
it comes to data. After all, what’s the point of ensuring security and
governance on data that isn’t valid in the first place?
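
To make that validation point concrete, here is a minimal sketch of a pre-governance validation pass; the field names, allow-list and rules are purely illustrative assumptions, not anything Sengupta or Calvesbert describe.

```python
# Minimal sketch of pre-governance data validation (hypothetical schema and rules).
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_record(record: dict) -> list[str]:
    """Return a list of validation problems; an empty list means the record is usable."""
    problems = []
    if not record.get("customer_id"):
        problems.append("missing customer_id")
    email = record.get("email", "")
    if email and not EMAIL_RE.match(email):
        problems.append(f"malformed email: {email!r}")
    if record.get("country") not in {"US", "GB", "DE", "IN"}:  # illustrative allow-list
        problems.append("country outside expected set")
    return problems

def partition(records):
    """Split records into valid/invalid before governance rules (masking, retention) apply."""
    valid, invalid = [], []
    for r in records:
        (invalid if validate_record(r) else valid).append(r)
    return valid, invalid

if __name__ == "__main__":
    sample = [
        {"customer_id": "c-1", "email": "a@example.com", "country": "US"},
        {"customer_id": "", "email": "not-an-email", "country": "ZZ"},
    ]
    ok, bad = partition(sample)
    print(len(ok), "valid,", len(bad), "rejected")
```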

While AI can boost productivity by handling routine tasks, it can’t replace
the strategic roles filled by skilled professionals, Vianello said. To avoid
those kinds of issues, agencies — just like companies — need to invest in
adaptable, mission-ready teams with continuously updated skills in cloud,
cyber, and AI. The technology, he said, should augment, not replace, human
teams, automating repetitive tasks while enhancing strategic work. Success in
high-demand tech careers starts with in-demand certifications, real-world
experience, and soft skills. Ultimately, high-performing teams are built
through agile, continuous training that evolves with the tech, Vianello said.
“We train teams to use AI platforms like Copilot, Claude and ChatGPT to
accelerate productivity,” Vianello said. “But we don’t stop at tools; we build
‘human-in-the-loop’ systems where AI augments decision-making and humans
maintain oversight. That’s how you scale trust, performance, and ethics in
parallel.” High-performing teams aren’t born with AI expertise; they’re built
through continuous, role-specific, forward-looking education, he said, adding
that preparing a workforce for AI is not about “chasing” the next hottest
skill. “It’s about building a training engine that adapts as fast as
technology evolves,” he said.
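
As a rough illustration of the “human-in-the-loop” pattern Vianello describes, the sketch below routes an AI-generated draft through an explicit human approval step; the model call is stubbed out and every name here is hypothetical.

```python
# Sketch of a human-in-the-loop gate: the AI proposes, a person approves or rejects.
# propose_with_ai() is a stand-in for a call to Copilot/Claude/ChatGPT-style tooling.
from dataclasses import dataclass

@dataclass
class Proposal:
    task: str
    suggestion: str
    approved: bool = False
    reviewer: str = ""

def propose_with_ai(task: str) -> Proposal:
    # Placeholder for a real model call; returns a draft, never a final decision.
    return Proposal(task=task, suggestion=f"Draft response for: {task}")

def human_review(proposal: Proposal, reviewer: str, accept: bool) -> Proposal:
    # The human decision is recorded alongside the AI output for auditability.
    proposal.approved = accept
    proposal.reviewer = reviewer
    return proposal

if __name__ == "__main__":
    draft = propose_with_ai("Summarise this incident ticket")
    final = human_review(draft, reviewer="analyst-42", accept=True)
    print(final)
```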

Those built-in utilities might have been good enough for an earlier era, but
they aren't good enough for our complex, multi-platform world. For most people,
the correct option is to switch to a third-party password manager and shut down
all those built-in password features in the browsers and mobile devices you use.
Why? Third-party password managers are built to work everywhere, with a full set
of features that are the same (or nearly so) across every device. After you make
that switch, the passwords you saved previously are left behind in a cloud
service you no longer use. If you regularly switch between browsers (Chrome on
your Mac or Windows PC, Safari on your iPhone), you might even have multiple
sets of saved passwords scattered across multiple clouds. It's time to clean up
that mess. If you're no longer using a browser's built-in password manager, it's
prudent to track down those outdated saved passwords and delete them from the
cloud. I've studied
each of the four leading browsers: Google Chrome, Apple's Safari, Microsoft
Edge, and Mozilla Firefox. Here's how to find the password management settings
for each one, export any saved passwords to a safe place, and then turn off the
feature. As a final step, I explain how to purge saved passwords and stop
syncing.
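
Before purging anything, it can help to sanity-check what you exported. The sketch below assumes the name/url/username/password CSV layout that Chromium-based browsers typically produce; adjust the column names for your browser's export, and treat the file path as a placeholder.

```python
# Quick audit of a browser's exported password CSV before importing it elsewhere.
# Assumes the common name,url,username,password column layout; adjust for your browser.
import csv
from collections import Counter

def audit_export(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    dupes = Counter((r.get("url", ""), r.get("username", "")) for r in rows)
    print(f"{len(rows)} saved logins exported")
    for (url, user), count in dupes.items():
        if count > 1:
            print(f"duplicate entry {user!r} at {url} ({count} copies)")

# Example (hypothetical path): audit_export("chrome-passwords.csv")
```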

Given that GenAI technology hit the mainstream with GPT-4 two years ago, Reed
says: “It was like nothing ever before.” And while the word “transformational”
tends to be generously overused in technology, he describes generative AI as
“transformational with a capital T.” But transformations are not instant and
businesses need to understand how to apply GenAI most effectively, and figure
out where it does and does not work well. “Every time you hear anything with
generative AI, you hear the word journey and we're no different,” he says. “We
are trying to understand it. We're trying to understand its capabilities and
understand our place with generative AI,” Reed adds. Early adopters are keen to
understand how to use GenAI in day-to-day work, which, he says, can range from
being an AI-based work assistant or a tool that changes the way people search
for information to using AI as a gateway to the heavy lifting required in many
organisations. He points out that bet365 is no different. “We have a sliding
scale of ambition, but obviously like anything we do in an organisation of this
size, it must be measured, it must be understood and we do need to be very, very
clear what we're using generative AI for.” One of the very clear use cases for
GenAI is in software development.

Because of the inherent scalability of cloud resources, the cloud makes a lot of
sense when the compute, storage, and other resources your business needs
fluctuate constantly in volume. But if you find that your resource consumption
is virtually unchanged from month to month or year to year, you may not need the
cloud. You may be able to spend less and enjoy more control by deploying on-prem
infrastructure. ... Cloud costs will naturally fluctuate over time due to
changes in resource consumption levels. It's normal if cost increases correlate
with usage increases. What's concerning, however, is a spike in cloud costs that
you can't tie to consumption changes. It's likely in that case that you're
spending more either because your cloud service provider raised its prices or
your cloud environment is not optimized from a cost perspective. ... You can
reduce latency (meaning the delay between when a user requests data on the
network and when it arrives) on cloud platforms by choosing cloud regions that
are geographically proximate to your end users. But that only works if your
users are concentrated in certain areas, and if cloud data centers are available
close to them. If this is not the case, you are likely to run into latency
issues, which could degrade the user experience you deliver.
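
One way to make that “spike you can't tie to consumption” test concrete is to compare month-over-month growth in spend against growth in usage. The sketch below is illustrative only; the figures and the 15% threshold are made up, and real numbers would come from your provider's billing export.

```python
# Flag months where cloud spend grows much faster than usage (illustrative threshold).
def flag_unexplained_spikes(months, threshold=0.15):
    """months: list of (label, cost, usage). Flags cost growth exceeding usage growth by > threshold."""
    flagged = []
    for prev, cur in zip(months, months[1:]):
        _, prev_cost, prev_usage = prev
        label, cost, usage = cur
        cost_growth = (cost - prev_cost) / prev_cost
        usage_growth = (usage - prev_usage) / prev_usage
        if cost_growth - usage_growth > threshold:
            flagged.append((label, cost_growth, usage_growth))
    return flagged

if __name__ == "__main__":
    # Made-up monthly (label, cost in dollars, usage in compute-hours) history.
    history = [("Jan", 10_000, 500), ("Feb", 10_400, 520), ("Mar", 13_500, 530)]
    for month, cg, ug in flag_unexplained_spikes(history):
        print(f"{month}: cost grew {cg:.0%} vs usage {ug:.0%} -- investigate pricing or waste")
```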

The optical-to-electrical conversion that is performed by the optical
transceiver is still needed in a CPO system, but it moves from a pluggable
module located at the faceplate of the switching equipment to a small chip (or
chiplet) that is co-packaged alongside the target ICs inside the box. Data
center chipset heavyweights Broadcom and Nvidia have both announced CPO-based
data center networking products operating at 51.2 and 102.4 Tb/s. ... Early
generation CPO systems, such as those announced by Broadcom and Nvidia for
Ethernet switching, make use of high channel count fiber array units (FAUs) that
are designed to precisely align the fiber cores to their corresponding
waveguides inside the PICs. These FAUs are challenging to make as they require
high fiber counts, mixed single-mode (SM) and polarization maintaining (PM)
fibers, integration of micro-optic components depending on the fiber-to-chip
coupling mechanism, highly precise tolerance alignments, CPO-optimized fibers
and multiple connector assemblies. ... In addition to scale and cost
benefits, extreme densities can be achieved at the edge of the PIC by bringing
the waveguides very close together, down to a pitch of about 30µm, far tighter
than what can be achieved with even the thinnest fibers. Next generation
fiber-to-chip coupling will enable GPU optics, which will require unprecedented
levels of density and scale.

Unlocking AI’s full business potential requires building executive AI literacy.
Executives must be educated on AI opportunities, risks and costs to make effective,
future-ready decisions on AI investments that accelerate organisational
outcomes. Gartner recommends D&A leaders introduce experiential upskilling
programs for executives, such as developing domain-specific prototypes to make
AI tangible. This will lead to greater and more appropriate investment in AI
capabilities. ... Using synthetic data to train AI models is now a critical
strategy for enhancing privacy and generating diverse datasets. However,
complexities arise from the need to ensure synthetic data accurately represents
real-world scenarios, scales effectively to meet growing data demand and
integrates seamlessly with existing data pipelines and systems. “To manage these
risks, organisations need effective metadata management,” said Idoine. “Metadata
provides the context, lineage and governance needed to track, verify and manage
synthetic data responsibly, which is essential to maintaining AI accuracy and
meeting compliance standards.” ... Building GenAI models in-house offers
flexibility, control and long-term value that many packaged tools cannot match.
As internal capabilities grow, Gartner recommends organisations adopt a clear
framework for build versus buy decisions.
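
Idoine's metadata point is easy to picture: every synthetic dataset should carry its own lineage record from the moment it is generated. A minimal sketch, with entirely made-up field names and a toy generator, might look like this:

```python
# Attach lineage metadata to a synthetic dataset at generation time (fields are illustrative).
import json, random, datetime, hashlib

def generate_synthetic(n: int, source_name: str, seed: int = 7):
    random.seed(seed)
    rows = [{"age": random.randint(18, 90), "balance": round(random.uniform(0, 1e5), 2)}
            for _ in range(n)]
    payload = json.dumps(rows, sort_keys=True).encode()
    metadata = {
        "derived_from": source_name,            # which real dataset informed the generator
        "generator": "uniform-sampler-v0",      # hypothetical generator identifier
        "seed": seed,
        "created_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "row_count": n,
        "content_sha256": hashlib.sha256(payload).hexdigest(),
    }
    return rows, metadata

rows, meta = generate_synthetic(1000, source_name="customer_accounts_2024")
print(json.dumps(meta, indent=2))
```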

A microservice is independently deployable: I can make a change to it and roll
out new versions of it without having to change any other part of my system.
Things like avoiding shared databases are really about achieving that
independent deployability. It's a really simple idea that can be quite easy to
implement if you know about it from the beginning, and difficult to implement
if you're already in a tangled mess. That idea of independent deployability has
interesting benefits: being independently deployable is obviously useful
because releases are low impact, but loads of other benefits start to flow from
that. ... The vast majority of people who tell me they have scaling issues
often don't have them. They could solve their scaling issues with a monolith,
no problem at all, and it would be a more straightforward solution. They're
typically organizational scaling issues. And so, for me, what the world needs
from our IT is product-focused, outcome-oriented, and more autonomous teams.
That's what we need, and microservices are an enabler for that. The DevOps
topologies work was happening around the time of the first edition of my book;
its move into the team topologies space by Matthew and Manuel around the second
edition helps crystallize a lot of those concepts as well.
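
The "avoid shared databases" advice is easiest to see in miniature: each service owns its own store, and other services reach that data only through its API, so either side can be redeployed without coordinating schema changes. A toy sketch, standard library only, with a hypothetical customers endpoint:

```python
# Toy illustration of independent deployability: the orders service never touches the
# customers database directly; it only calls the customers service's HTTP API.
import json
import sqlite3
import urllib.request

CUSTOMERS_API = "http://localhost:8001"   # hypothetical endpoint owned by another team

class OrdersService:
    def __init__(self):
        # Private store: schema changes here never ripple into other services.
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id TEXT, total REAL)"
        )

    def create_order(self, customer_id: str, total: float) -> int:
        # Cross-service data comes through the published API, not a shared table.
        with urllib.request.urlopen(f"{CUSTOMERS_API}/customers/{customer_id}") as resp:
            customer = json.load(resp)
        if not customer.get("active"):
            raise ValueError("customer is not active")
        cur = self.db.execute(
            "INSERT INTO orders (customer_id, total) VALUES (?, ?)", (customer_id, total)
        )
        self.db.commit()
        return cur.lastrowid
```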

Adopting a connected GRC solution enables organizations to move beyond siloed
operations by bringing risk and compliance functions onto a single, integrated
platform. It also creates a unified view of risks and controls across
departments, streamlining workflows and encouraging collaboration. With
centralized data and shared visibility, managing complex, interconnected risks
becomes far more efficient and proactive. In fact, this shift toward integration
reflects a broader trend that is seen in the India Regulatory Technology
Business Report 2024–2029 findings, which highlight the growing adoption of
compliance automation, AI, and machine learning in the Indian market. The report
points to a future where GRC is driven by data, merging operations, technology,
and control into a single, intelligent framework. ... An AI-first, connected GRC
solution takes the heavy lifting out of compliance. Instead of juggling
disconnected systems and endless updates, it brings everything together, from
tracking regulations to automating actions to keeping teams aligned. For
compliance teams, that means less manual work and more time to focus on what
matters. ... A smart, integrated GRC solution brings everything into one place.
It helps organizations run more smoothly by reducing errors and simplifying
teamwork. It also means less time spent on admin and better use of people and
resources where they are really needed.

Information sharing among different sectors predominantly revolves around
threats related to phishing, vulnerabilities, ransomware, and data breaches.
Each sector tailors its approach to cybersecurity information sharing based on
regulatory and technological needs, carefully considering strategies that
address specific risks and identify resolution requirements. In the mobile
industry, however, cyberattacks on the networks themselves and misuse of
interconnection signalling are also the focus of significant sharing efforts.
Industries learn from each other by adopting
sector-specific frameworks and leveraging real-time data to enhance their
cybersecurity posture. This includes real-time sharing of indicators of
compromise (IoCs) and the tactics, techniques, and procedures (TTPs) associated
with phishing campaigns. An example of this is the recently launched Stop Scams
UK initiative, which brings together tech, telecoms and finance industry leaders
to share real-time data on fraud indicators to enhance consumer
protection and foster economic security. This is an important development, as
without cross-industry information sharing, determining whether a cybersecurity
attack campaign is sector-specific or indiscriminate becomes difficult.
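
In practice, real-time sharing of IoCs comes down to exchanging small, structured records. The sketch below builds a plain JSON record loosely modelled on the STIX indicator idea; the field names, TLP marking and values are illustrative assumptions, not the schema used by Stop Scams UK or any specific ISAC.

```python
# Build a simple, shareable phishing IoC record (loosely modelled on STIX-style indicators).
import json
import uuid
from datetime import datetime, timezone

def phishing_indicator(url: str, campaign: str, confidence: int) -> dict:
    return {
        "id": f"indicator--{uuid.uuid4()}",
        "type": "indicator",
        "created": datetime.now(timezone.utc).isoformat(),
        "pattern": f"[url:value = '{url}']",     # STIX-like pattern expression
        "labels": ["phishing"],
        "campaign": campaign,
        "confidence": confidence,                # 0-100, producer's assessment
        "sharing": "TLP:GREEN",                  # traffic-light protocol marking
    }

record = phishing_indicator("https://example.invalid/login",
                            campaign="invoice-lure", confidence=80)
print(json.dumps(record, indent=2))
# A sector ISAC or cross-industry feed would typically accept records like this over HTTPS.
```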