Advanced Microsegmentation Strategies for IT Leaders
Microsegmentation, and network segmentation in general, is a 50-year-old
cybersecurity strategy that “involves dividing a network into smaller zones to
enhance security by restricting the movement of a threat to an isolated segment
rather than to the whole network,” says Guy Pearce, a member of the ISACA
Emerging Trends Working Group. ... Moyle says that any segmentation (micro or
otherwise) can be “part of a security strategy based on use case, architecture
and other factors.” He notes that microsegmentation itself isn’t an end goal for
security, and that IT leaders should instead see it as “a mechanism that’s part
of a broader holistic strategy.” That said, many factors go into a successful
microsegmentation implementation, chief among them careful planning.
Microsegmentation goes hand in hand with setting up granular security policies.
It also relies on continuous monitoring, evaluation, and user education and
awareness, Pearce says.
Successful microsegmentation also requires automation, incident response
orchestration and cross-team collaboration. None of that is sustainable without
a solid, well-maintained network architecture map.
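The zone-to-zone, default-deny policy model behind granular microsegmentation can be sketched in a few lines. The zone names, services, and rule format below are illustrative assumptions, not any vendor's schema.

```python
# Minimal sketch of a default-deny microsegmentation policy table.
# Zone names, services, and the rule format are illustrative assumptions.

ALLOWED_FLOWS = {
    # (source zone, destination zone): set of permitted services
    ("web", "app"): {"https"},
    ("app", "db"): {"postgres"},
    ("ops", "app"): {"ssh"},
}

def is_flow_allowed(src_zone, dst_zone, service):
    """Default deny: a flow is permitted only if explicitly listed."""
    return service in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

# A compromised web host cannot reach the database directly, which is
# exactly the "restrict movement of a threat to an isolated segment" idea.
print(is_flow_allowed("web", "app", "https"))    # True: permitted
print(is_flow_allowed("web", "db", "postgres"))  # False: no direct path
```

The point of the sketch is the default: anything not explicitly permitted is denied, which is what confines lateral movement to a single segment.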
Could DC win the new data center War of the Currents?
Fundamentally, electronics use DC power. The chips and circuit boards are all
powered by direct current, and every computer or other piece of IT equipment
that is plugged into the AC mains has to have a “power supply unit” (PSU), also
known as a rectifier or switched mode power supply (SMPS) inside the box,
turning the power from AC to DC. ... Data centers have an Uninterruptible Power
Supply (UPS) designed to power the facility for long enough for generators to
fire up. The UPS has to have a large store of batteries, which charge and
discharge DC. So power enters the data center as AC, is converted to DC to charge the
batteries, and then back to AC for distribution to the racks. ... Data centers
are now looking at using microgrids for power. That means drawing on-site energy
directly from sources such as fuel cells and solar panels. As it turns out,
those sources often conveniently produce direct current. A data center could be
isolated from the AC grid, and live on its own microgrid. On that grid DC power
sources charge batteries, and power electronics which fundamentally run on DC.
In that situation, the idea of switching to AC for a short loop around the
facility begins to look, well, odd.
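The efficiency argument can be made concrete with rough arithmetic: each conversion stage loses a few percent, and losses in series multiply. The stage efficiencies below are illustrative assumptions, not measured figures.

```python
# Rough comparison of conversion losses on the two distribution paths.
# Stage efficiency figures are illustrative assumptions.

def chain_efficiency(*stages):
    """Overall efficiency of conversion stages in series (losses multiply)."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Conventional path: grid AC -> DC (UPS charger) -> AC (inverter) -> DC (rack PSU)
ac_path = chain_efficiency(0.96, 0.96, 0.94)

# Hypothetical DC microgrid: solar/fuel-cell DC -> one DC-DC stage at the rack
dc_path = chain_efficiency(0.97)

print(f"AC path delivers {ac_path:.1%} of input power")
print(f"DC microgrid path delivers {dc_path:.1%} of input power")
```

Even with generous assumptions, the triple-conversion path gives up several percent of every watt, which is why the "short loop through AC" looks odd once the sources are already DC.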
5 key metrics for IT success
Taken together, speed, quality, and value metrics are essential for any
organization undergoing transformation and looking to move away from traditional
project management approaches, says Sheldon Monteiro, chief product officer at
digital consulting firm Publicis Sapient. “This metric isn’t limited to a
specific role or level within an IT organization,” he explains. “It’s relevant
for everyone involved in the product development process.” Speed, quality, and
value metrics represent a shift from traditional project management metrics
focused on time, scope, and cost. “Speed ensures the ability to respond swiftly
to change, quality guarantees that changes are made without compromising the
integrity of systems, and value ensures that the changes contribute meaningfully
to both customers and the business,” Monteiro says. “This holistic approach
aligns IT practices with the demands of a continuously evolving landscape.”
Focusing on these three dimensions gives a more nuanced picture of an
organization’s adaptability and effectiveness. “Focusing on speed, quality,
and value provides insights into an organization’s ability to adapt to
continuous change,” Monteiro says.
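A team could operationalize the speed and quality dimensions from simple delivery records. The event format and metric definitions below are illustrative assumptions, loosely modeled on common delivery measures, not Monteiro's framework.

```python
from datetime import datetime

# Illustrative deployment records: (deployed_at, caused_incident)
deployments = [
    (datetime(2024, 1, 2), False),
    (datetime(2024, 1, 5), True),
    (datetime(2024, 1, 9), False),
    (datetime(2024, 1, 12), False),
]

# Speed: average days between successive deployments
gaps = [(b[0] - a[0]).days for a, b in zip(deployments, deployments[1:])]
avg_gap_days = sum(gaps) / len(gaps)

# Quality: change failure rate (share of deployments causing an incident)
failure_rate = sum(1 for _, failed in deployments if failed) / len(deployments)

print(f"avg days between deploys: {avg_gap_days:.1f}")   # 3.3
print(f"change failure rate: {failure_rate:.0%}")        # 25%
```

Value is the hardest of the three to automate; in practice it usually comes from product analytics or revenue attribution rather than delivery logs.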
The future of cybersecurity: Anticipating changes with data analytics and automation
In recent years, cybersecurity threats have undergone a notable evolution,
marked by the subtler tactics of mature threat actors who now leave fewer
artifacts for analysis. Detecting malicious activity, once likened to ‘looking
for a needle in a haystack,’ is now more akin to ‘looking for a needle in a
stack of needles.’ This shift necessitates the establishment
of additional context around suspicious events to effectively differentiate
legitimate from illegitimate activities. Automation emerges as a pivotal element
in providing this contextual enrichment, ensuring that analysts can discern
relevant circumstances amid the rapid and expansive landscape of modern
enterprises. The landscape of cyber threats continues to further evolve, and
recent high-profile data breaches underscore the gravity of the shift. In
response to these challenges, data analytics and automation play a crucial role
in detecting lateral movement, privilege escalation, and exfiltration,
particularly when threat actors exploit zero-day vulnerabilities to gain entry
into an environment.
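The "needle in a stack of needles" problem is commonly handled by scoring events against several weak context signals rather than any single indicator. The signal names, weights, and threshold below are illustrative assumptions, not a real detection ruleset.

```python
# Sketch of contextual enrichment: combine weak signals into one risk score.
# Signal names, weights, and the escalation threshold are illustrative.

CONTEXT_WEIGHTS = {
    "new_admin_privilege": 0.4,      # possible privilege escalation
    "unusual_host_pairing": 0.3,     # possible lateral movement
    "large_outbound_transfer": 0.3,  # possible exfiltration
}

def risk_score(event_signals):
    """Sum the weights of the known context signals present on an event."""
    return sum(CONTEXT_WEIGHTS[s] for s in event_signals if s in CONTEXT_WEIGHTS)

# Neither signal alone is conclusive; together they cross the threshold.
event = {"unusual_host_pairing", "large_outbound_transfer"}
score = risk_score(event)
print(f"score={score:.1f}, escalate={score >= 0.5}")
```

Automation earns its keep here by attaching these signals to every event at ingest time, so an analyst sees the combined context instead of hunting for it.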
Significance of protecting enterprise data
In a world where data fuels innovation and growth, protecting enterprise data is
not optional; it’s essential. The digital age has ushered in a complex threat
landscape, necessitating a multifaceted approach to data protection. From
next-gen SOCs and application security to IAM, data privacy, and collaboration
with SaaS providers, every aspect plays a vital role. As traditional security
tools and firewalls are no longer sufficient to detect and respond to modern
threats, next-generation security operations centers (SOCs) can play a proactive
role by leveraging technologies like AI, machine learning, and user behavior
analytics. They can analyze huge volumes of data in real time to detect even the
most well-hidden attacks. Early detection and quick response are crucial to
minimize damage from security incidents. Next-gen SOCs play a pivotal role in
safeguarding enterprises by enhancing visibility, shortening response times, and
reducing security risks. Protecting applications is equally important, as in the
digital age, applications are the conduit through which data flows. Many
successful breaches target exploitable vulnerabilities residing in the
application layer, indicating the need for enterprise IT departments to be extra
vigilant about application security.
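User behavior analytics of the kind next-gen SOCs lean on often starts with a simple baseline-and-deviation check. This z-score sketch is an illustration of the principle, not any vendor's method, and the login counts are made-up data.

```python
import statistics

# Daily login counts for one user over a baseline window (illustrative data).
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]

def is_anomalous(today, history, z_threshold=3.0):
    """Flag a value far above the user's historical baseline (z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (today - mean) / stdev
    return z > z_threshold

print(is_anomalous(14, baseline))  # False: within the user's normal range
print(is_anomalous(90, baseline))  # True: far outside the baseline
```

Real deployments layer many such baselines (per user, per host, per application) and feed the deviations into the kind of contextual scoring described earlier.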
A changing world requires CISOs to rethink cyber preparedness
A cybersecurity posture that is societally conscious equally requires adopting
certain underlying assumptions and taking preparatory actions. Foremost among
these is the recognition that neutrality and complacency are anathema to one
another in the context of digital threats stemming from geopolitical tension. As
I recently wrote, the inherent complexity and significance of norm politicking
in international affairs leads to risk that impacts cybersecurity stakeholders
in nonlinear fashion. Recent conflicts support the idea that civilian hacking
around major geopolitical fault lines, for instance, operates on divergent
logics depending on the phase of conflict underway. The right response to such
conditions is not a retreat into statements and actions that avoid geopolitical
relevance. Rather, cybersecurity stakeholders
should clearly and actively attempt to delineate the way geopolitical threats
and developments reflect the security objectives of the organization and its
constituent community. They should do so in a way that is visible to that
community.
AI-powered 6G wireless promises big changes
According to Will Townsend, an analyst at Moor Insights & Strategy, things
are accelerating more quickly with 6G than they did with 5G at the same point in
its evolution. And speaking of speed, it will also be one of the biggest and most
transformative improvements of 6G over 5G, due to the shift of 6G into the
terahertz spectrum range, Townsend says. “This will present challenges because
it’s such a high spectrum,” he says. “But you can do some pretty incredible
things with instantaneous connectivity. With terahertz, you’re going to get
near-instantaneous latency, no lag, no jitter. You’re going to be able to do
some sensory-type applications.” ... The new 6G spectrum also brings another
benefit – an ability to better sense the environment, says Spirent’s Douglas.
“The radio signal can be used as a sensing mechanism, like how sonar is used in
submarines,” he says. That can allow use cases that need three-dimensional
visibility and complete visualization of the surrounding environment. “You could
map out the environment – the shops, buildings, everything – and create a
holistic understanding of the surroundings and use that to build new types of
services for the market,” Douglas says.
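The sonar analogy maps onto a standard round-trip time-of-flight calculation: the signal travels out to a reflector and back, so distance is half the round-trip time multiplied by the speed of light. The echo timing below is an illustrative number.

```python
# Round-trip time-of-flight ranging, the principle behind radio sensing.
C = 299_792_458  # speed of light in a vacuum, m/s

def range_from_echo(round_trip_seconds):
    """Distance to a reflector: the signal covers the path twice."""
    return C * round_trip_seconds / 2

# An echo returning after 200 nanoseconds implies a reflector ~30 m away.
print(f"{range_from_echo(200e-9):.1f} m")
```

Resolving fine environmental detail this way depends on very precise timing, which is one reason the wide terahertz bands matter for sensing use cases.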
What distinguishes data governance from information governance?
Data governance is primarily concerned with the proper management of data as a
strategic asset within an organization. It emphasizes the accuracy,
accessibility, security, and consistency of data to ensure that it can be
effectively used for decision-making and operations. On the other hand,
information governance encompasses a broader spectrum, dealing with all forms of
information, not just data. It includes the management of data privacy,
security, and compliance, as well as the handling of business processes related
to both digital and physical information. ... Implementing data governance
ensures that an organization's data is accurate, accessible, and secure, which
is vital for operational decision-making and strategic planning. This governance
type establishes the necessary protocols and standards for data quality and
usage. Information governance, by managing all forms of information, helps
organizations comply with legal and regulatory requirements, reduce risks, and
enhance business efficiency and effectiveness. It also addresses the management
of redundant, outdated, and trivial information, which can lead to cost savings
and improved organizational performance.
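A small slice of the "protocols and standards for data quality" that data governance establishes can be codified as automated record validation. The fields, rules, and reference list below are assumptions for illustration.

```python
import re

# Illustrative data-quality rules a governance program might codify.
VALID_COUNTRIES = {"US", "GB", "IN"}  # hypothetical reference list

def validate_customer(record):
    """Return a list of rule violations for one record (empty list = passes)."""
    errors = []
    if not record.get("customer_id"):
        errors.append("completeness: customer_id missing")
    email = record.get("email", "")
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        errors.append("accuracy: email malformed")
    if record.get("country") not in VALID_COUNTRIES:
        errors.append("consistency: country not in reference list")
    return errors

print(validate_customer({"customer_id": "C1", "email": "a@b.com", "country": "US"}))
print(validate_customer({"email": "not-an-email", "country": "FR"}))
```

Information governance would then decide questions the code cannot: how long such records may be retained, who may see them, and when they count as redundant or trivial.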
The Future Is AI, but AI Has a Software Delivery Problem
As more developers become comfortable building AI-powered software, Act Three
will trigger a new race: the ability to build, deploy and manage AI-powered
software at scale, which requires continuous monitoring and validation at
unprecedented levels. This is why crucial DevOps practices for delivering
software at scale, like continuous integration and continuous delivery (CI/CD),
will play a central role in providing a robust framework for engineering leaders
to navigate the complexities of delivering AI-powered software — therefore
turning these technological challenges into opportunities for innovation and
competitive advantage. Just as software teams have honed practices for getting
reliable, observable, available applications safely and quickly into customers’
hands at scale, AI-powered software is yet again evolving these methods. We’re
experiencing a paradigm shift from the deterministic outcomes we’ve built
software development practices around to a world with probabilistic outcomes.
This complexity throws a wrench in the conventional yes-or-no logic that has
been foundational to how we’ve tested software, requiring developers to navigate
a variety of subjective outcomes.
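In practice, the shift from yes-or-no logic shows up in tests as threshold checks over many samples instead of exact-match assertions. The scoring function below is a simulated stand-in for a real evaluator (for example, a model-output grader); the threshold and sample count are placeholders.

```python
import random

def score_output(seed):
    """Stand-in for a per-sample quality evaluator of a probabilistic system."""
    random.seed(seed)
    return random.uniform(0.7, 1.0)  # simulated quality score in [0.7, 1.0]

def passes_eval(n_samples=100, threshold=0.8):
    """Deterministic-style pass/fail derived from a statistical aggregate:
    the mean score across many samples must clear a threshold."""
    mean = sum(score_output(i) for i in range(n_samples)) / n_samples
    return mean >= threshold

# Instead of asserting one exact output, assert on the aggregate.
print(passes_eval())
```

The design choice is the important part: the test pins down a distributional property (mean quality over a sample set), which restores a binary pass/fail signal that CI/CD pipelines can act on.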
Generative AI – Examining the Risks and Mitigations
In working with AI, we should help executives at the companies we work with
understand these risks as well as the potential applications and innovations
that can come from Generative AI. That is why it is essential that we take a
moment now to develop a strategy for dealing with Generative AI. By
developing a strategy, you will be well positioned to reap the benefits from the
capabilities, and will be giving your organization a head-start in managing the
risks. Confronting those risks, companies can feel overwhelmed, or decide that
GenAI represents more trouble than it is worth and ban it outright. Banning
GenAI is not the answer; it will only lead to bypassed controls and more shadow
IT. In the end, employees will use the technology anyway, but won’t tell you.
... AI risks can be broadly categorized into
three types: Technical, Ethical, and Social. Technical risks refer to the
potential failures or errors of AI systems, such as bugs, hacking, or
adversarial attacks. Ethical risks refer to the moral dilemmas or conflicts that
arise from the use or misuse of AI, such as bias, discrimination, or privacy
violations. Social risks refer to the impacts of AI on human society and
culture, such as unemployment, inequality, or social unrest.
Quote for the day:
"In the end, it is important to remember that we cannot become what we need to
be by remaining what we are." -- Max De Pree