What to Look for in a Network Detection and Response (NDR) Product
NDR's practical limitation lies in its focus on the network layer, Orr says.
Enterprises that have invested in NDR also need to address detection and
response for multiple security layers, ranging from cloud workloads to endpoints
and from servers to networks. "This integrated approach to cybersecurity is
commonly referred to as Extended Detection and Response (XDR), or Managed
Detection and Response (MDR) when provided by a managed service provider," he
explains. Features such as Intrusion Prevention Systems (IPS), which are
typically included with firewalls, are not as critical because they are already
delivered via other vendors, Tadmor says. "Similarly, Endpoint Detection and
Response (EDR) is being merged into the broader XDR (Extended Detection and
Response) market, which includes EDR, NDR, and Identity Threat Detection and
Response (ITDR), reducing the standalone importance of EDR in NDR solutions."
... Look for vendors that are focused on fast, accurate detection and response,
advises Reade Taylor, a former IBM Internet Security Systems engineer and now
the technology leader of managed services provider Cyber Command.
AI In Business: Elevating CX & Energising Employees
Using AI in CX certainly eases business operations, but it’s ultimately a win
for the customer too. As AI collects, analyses, and learns from large volumes of
data, it delivers new worlds of actionable insights that empower businesses to
get personal with their customer journeys. In recent years, businesses have
tried their best to personalise the customer experience – but working with a
handful of generic personas only gets you so far. Today’s AI, however, has the
power to unlock next-level insights that help businesses discover customers’
expectations, wants, and needs so they can create individualised experiences on
a one-to-one level. ... In human resources, AI presents further opportunities to help
employees. For example, AI can elevate standard on-the-job training by creating
personalised learning and development programmes for employees. Meanwhile, AI
can also help job hunters find opportunities they may have overlooked. For
example, far too many jobseekers have valuable and transferable skills but lack
the experience in the right business vertical to land a job. According to NIESR,
63% of UK graduates are mismatched in this way.
The benefits and pitfalls of platform engineering
The first step of platform engineering is to reduce tool sprawl by making
clear which tools should make up the internal developer platform. The next step
is to reduce context-switching between these tools, which can result in
significant time loss. By using a portal as a hub, users can find all of the
information they need in one place without switching tabs constantly. This
improves the developer experience and enhances productivity. ... In terms
of scale, platform engineering can help an organization better understand and
manage its services, workloads, traffic, and APIs. This can come through
auto-scaling rules, load balancing of traffic, TTLs on self-service actions,
and an API catalog. ... Often, as more platform tools are added and as more
microservices are introduced, things become difficult to track, and this leads
to an increase in deploy failures, longer feature development and discovery
times, and general fatigue and developer dissatisfaction because of the
unpredictability of bouncing around different platform tools to
perform their work. There needs to be a way to track what’s happening
throughout the SDLC. ... Adoption is another challenge: how, and whether it is
even possible, to get developers to change the way they work.
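As a rough illustration of the TTLs-on-self-service-actions idea above, here is a minimal sketch, in Python, of a portal-style self-service action that provisions an ephemeral environment and schedules its own teardown when the time-to-live expires. The `provision_environment` and `teardown_environment` helpers and the names used are hypothetical stand-ins for whatever infrastructure tooling a platform team actually wires in.

```python
import time
import threading
from dataclasses import dataclass

@dataclass
class EphemeralEnvironment:
    name: str
    created_at: float
    ttl_seconds: int

def provision_environment(name: str) -> None:
    # Placeholder: in practice this would call Terraform, Kubernetes, etc.
    print(f"provisioning {name}")

def teardown_environment(name: str) -> None:
    # Placeholder: in practice this would destroy the environment's resources.
    print(f"tearing down {name}")

def self_service_create(name: str, ttl_seconds: int = 4 * 3600) -> EphemeralEnvironment:
    """Create an ephemeral environment and schedule automatic teardown after its TTL."""
    provision_environment(name)
    env = EphemeralEnvironment(name=name, created_at=time.time(), ttl_seconds=ttl_seconds)
    # The TTL means forgotten environments clean themselves up instead of accumulating cost.
    timer = threading.Timer(ttl_seconds, teardown_environment, args=(name,))
    timer.daemon = True
    timer.start()
    return env

if __name__ == "__main__":
    # Example: a developer requests a review environment with a short demo TTL.
    self_service_create("pr-1234-review", ttl_seconds=5)
    time.sleep(6)  # keep the process alive long enough to observe the teardown
```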
The irreversible footprint: Biometric data and the urgent need for right to be forgotten
The absence of clear definitions and categorisations of biometric data within
current legislation highlights the need for comprehensive frameworks that
specifically define rules governing its collection, storage, processing and
deletion. Established legislation like the Information Technology Act, which
was supplemented by subsequent ‘Rules’ for various digital governance
aspects, can be used as a precedent. For instance, the 2021 Information
Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules were
introduced to establish a robust complaint mechanism for social media and OTT
platforms, addressing inadequacies in the Parent Act. To close the current
regulatory loopholes, a separate set of rules governing biometric data under
the Digital Personal Data Protection Act, 2023 should be considered. ... The
‘right to be forgotten’ must be a basic element of any such rules, recognising people's
sovereignty over their biometric data. Such focused regulations would not just
bolster the safeguarding of biometric information, but also ensure compliance
and accountability among entities handling sensitive data. Ultimately, this
approach aims to cultivate a more resilient and privacy-conscious ecosystem
within our dynamic digital landscape.
6 IT risk assessment frameworks compared
ISACA says implementation of COBIT is flexible, enabling organizations to
customize their governance strategy via the framework. “COBIT, through its
insatiable focus on governance and management of enterprise IT, aligns the IT
infrastructure to business goals and maintains strategic advantage,” says
Lucas Botzen, CEO at Rivermate, a provider of remote workforce and payroll
services. “For governance and management of corporate IT, COBIT is a must,”
says ... FAIR’s quantitative cyber risk assessment is applicable across
sectors, and now emphasizes supply chain risk management and securing
technologies such as internet of things (IoT) and artificial intelligence
(AI), Shaw University’s Lewis says. Because it uses a quantitative risk
management method, FAIR helps organizations determine how risks will affect
their finances, Fuel Logic’s Vancil says. “This method lets you choose where
to put your security money and how to balance risk and return best.” ...
Conformity with ISO/IEC 27001 means an organization has put in place a system
to manage risks related to the security of data owned or handled by the
organization. The standard “gives you a structured way to handle private
company data and keep it safe,” Vancil says.
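To make FAIR's quantitative framing above concrete, here is a minimal sketch of the arithmetic at its core: annualized loss exposure as loss event frequency multiplied by loss magnitude. The scenario names and figures below are illustrative assumptions, not calibrated FAIR data.

```python
# Minimal sketch of FAIR-style quantitative risk arithmetic (illustrative numbers only).
# FAIR decomposes risk into Loss Event Frequency (events/year) and Loss Magnitude ($/event);
# their product is the annualized loss exposure used to compare where security money goes.

scenarios = {
    # name: (loss_event_frequency_per_year, loss_magnitude_usd_per_event)
    "phishing-led credential theft": (4.0, 50_000),
    "ransomware on file servers":    (0.3, 900_000),
    "IoT device compromise":         (1.5, 20_000),
}

for name, (frequency, magnitude) in scenarios.items():
    annualized_loss = frequency * magnitude
    print(f"{name}: ~${annualized_loss:,.0f} expected loss per year")
```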
Why is server cooling so important in the data center industry?
AI and other HPC sectors are continuing to drive up the power density of
rack-mount server systems. This increased compute means increased power draw,
which leads to increased heat generation. Removing that heat from the server
systems in turn requires more power for high CFM (cubic feet per minute) fans.
Liquid cooling technologies, including rack-level cooling and immersion, can
improve the efficiency of the heat removal from server systems, requiring less
powerful fans. In turn, this can reduce the overall power budget of a rack of
servers. When extrapolating this out across large sections of a data center
footprint, the savings can add up significantly. When you consider that some of
the latest Nvidia rack offerings require 40 kW or more, you can start to see how
the power requirements are shifting to the extreme. For reference, it’s not
uncommon for electronic trading co-locations to offer only 6-12 kW
racks, which are sometimes operated half-empty due to the servers requiring
more power draw than the rack can provide. These trends are going to force
data centers to adopt any technology that can reduce the power burden on not
only their own infrastructure but also the local infrastructure that supplies
them.
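As a back-of-the-envelope sketch of why rack power and fan airflow are so tightly coupled, the following estimates the airflow needed to carry away a rack's heat from the basic air heat-capacity relation. The rack powers and the assumed inlet-to-outlet temperature rise are illustrative, not vendor figures.

```python
# Back-of-the-envelope airflow estimate for air-cooled racks (illustrative assumptions).
# Nearly all electrical power drawn by a server becomes heat, which the airflow must carry away:
#   P = rho * cp * V_dot * delta_T   =>   V_dot = P / (rho * cp * delta_T)

AIR_DENSITY = 1.2          # kg/m^3, near sea level at room temperature
AIR_SPECIFIC_HEAT = 1005.0 # J/(kg*K)
M3S_TO_CFM = 2118.88       # 1 m^3/s expressed in cubic feet per minute

def required_airflow_cfm(rack_power_watts: float, delta_t_kelvin: float = 15.0) -> float:
    """Airflow needed to remove rack_power_watts of heat at a given inlet-to-outlet temperature rise."""
    volumetric_flow_m3s = rack_power_watts / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_kelvin)
    return volumetric_flow_m3s * M3S_TO_CFM

for rack_kw in (6, 12, 40):
    print(f"{rack_kw:>2} kW rack: ~{required_airflow_cfm(rack_kw * 1000):,.0f} CFM")
```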
Cutting the High Cost of Testing Microservices
Given the high costs associated with environment duplication, it is worth
considering alternative strategies. One approach is to use dynamic environment
provisioning, where environments are created on demand and torn down when no
longer needed. This method can help optimize resource utilization and reduce
costs by avoiding the need for permanently duplicated setups. This can keep
costs down but still comes with the trade-off of sending some testing to
staging anyway. That’s because there are shortcuts we must take to spin up
these dynamic environments, such as using mocks for third-party services. This
may put us back at square one in terms of testing reliability, that is, how
well our tests reflect what will happen in production. At this point, it’s
reasonable to consider alternative methods that use technical fixes to make
staging and other near-to-production environments easier to test on. ...
While duplicating environments might seem like a practical solution for
ensuring consistency in microservices, the infrastructure costs involved can
be significant. By exploring alternative strategies such as dynamic
provisioning and request isolation, organizations can better manage their
resources and mitigate the financial impact of maintaining multiple
environments.
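Request isolation, mentioned above, is usually implemented by tagging test traffic and routing only tagged requests to the version under test, while untagged traffic keeps hitting the shared stable baseline. Here is a minimal sketch of that routing decision; the header name, service names, and routing table are assumptions for illustration, not any particular tool's API.

```python
# Minimal sketch of request isolation in a shared staging environment (illustrative).
# Requests carrying a sandbox tag are routed to the version under test; all other
# traffic keeps hitting the stable baseline, so one shared environment serves many tests.

from typing import Mapping

STABLE_BASELINE = "payments-v12"

# Hypothetical routing table: sandbox tag -> service version under test.
SANDBOX_ROUTES = {
    "feature-retry-logic": "payments-v13-candidate",
    "bugfix-rounding":     "payments-v12-hotfix",
}

def resolve_backend(headers: Mapping[str, str]) -> str:
    """Pick the backend for a request based on its sandbox tag (header name is an assumption)."""
    sandbox = headers.get("X-Sandbox-Id")
    return SANDBOX_ROUTES.get(sandbox, STABLE_BASELINE)

if __name__ == "__main__":
    print(resolve_backend({"X-Sandbox-Id": "feature-retry-logic"}))  # -> payments-v13-candidate
    print(resolve_backend({}))                                       # -> payments-v12
```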
The Cybersecurity Workforce Has an Immigration Problem
Creating a skilled immigration pathway for cybersecurity will require new
policies. Chief among them is a mechanism to verify that applicants have
relevant cybersecurity skills. One approach is to let applicants prove
themselves by bringing forward previously unidentified bugs. This strategy is a
natural way to prove aptitude and has the additional benefit of requiring no
formal expertise or expensive testing. However, it would also require safe
harbor provisions to protect individuals from prosecution under the Computer
Fraud and Abuse Act. ... The West’s adversaries may also play a
counterintuitive role in a cybersecurity workforce solution. Recent work from
Eugenio Benincasa at ETH Zurich highlights the strength of China’s
cybersecurity workforce. How many Chinese hackers might be tempted to
immigrate to the West, if invited, for better pay and greater political
freedom? While politically sensitive, a policy that allows foreign-trained
cybersecurity experts to immigrate to the US could enhance the West’s
workforce while depriving its adversaries of offensive talent. At the same
time, such immigration programs must be measured and targeted to avoid adding
tension to a world in which geopolitical conflict is already rising.
Cross-Cloud: The Next Evolution in Cloud Computing?
The key difference between cross-cloud and multicloud is that cross-cloud
spreads the same workload across clouds. In contrast, multicloud simply means
using more than one public cloud at the same time — with one cloud hosting
some workloads and other clouds hosting other workloads. ... That said, in
other respects, cross-cloud and multicloud offer similar benefits — although
cross-cloud allows organizations to double down on some of those benefits. For
instance, a multicloud strategy can help reduce cloud costs by allowing you to
pick and choose from among multiple clouds for different types of workloads,
depending on which cloud offers the best pricing for different types of
services. One cloud might offer more cost-effective virtual servers, for
example, while another has cheaper object storage. As a result, you use one
cloud to host VM-based workloads and another to store data. You can do
something similar with cross-cloud, but in a more granular way. Instead of
having to devote an entire workload to one cloud or another depending on which
cloud offers the best overall pricing for that type of workload, you can run
some parts of the workload on one cloud and others on a different
cloud.
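As a toy illustration of that more granular placement, the sketch below picks the cheapest provider per workload component rather than per workload; the provider names and prices are invented for illustration.

```python
# Toy sketch of cross-cloud placement: choose a provider per component, not per workload.
# Provider names and prices are invented for illustration only.

price_per_hour = {
    "cloud-a": {"vm": 0.090, "object_storage": 0.0040, "managed_db": 0.210},
    "cloud-b": {"vm": 0.075, "object_storage": 0.0055, "managed_db": 0.180},
    "cloud-c": {"vm": 0.082, "object_storage": 0.0032, "managed_db": 0.230},
}

workload_components = ["vm", "object_storage", "managed_db"]

for component in workload_components:
    # A multicloud strategy would pick one cloud for the whole workload;
    # cross-cloud lets each component land wherever it is cheapest to run.
    cheapest = min(price_per_hour, key=lambda cloud: price_per_hour[cloud][component])
    print(f"{component}: run on {cheapest} at ${price_per_hour[cheapest][component]}/hour")
```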
Will We Survive The Transitive Vulnerability Locusts?
The issue today is that modern software development resembles constructing
with Legos, where applications are built using numerous open-source
dependencies — no one writes frameworks from scratch anymore. With each
dependency comes the very real possibility of inherited vulnerabilities. When
unique applications are then built on top of those frameworks, it turns into a
patchwork of potential vulnerability dependencies that are stitched together
with our own proprietary code, without any mitigation of the existing
vulnerabilities. ... With a proposed solution, it would be easy to conclude
that we have fixed the problem. Given this vulnerability, we could just patch
it and be secure, right? But after we updated the manifest file, and
theoretically removed the transitive vulnerability, it still showed up in the
SCA scan. After two tries at remediating the problem, we recognized that two
versions of the dependency were present. Using the SCA scan, we determined that
the root cause of the vulnerability had been imported and used. This is a fine manual
fix, but reproducing this process manually at scale is near-impossible. We
therefore decided to test whether we could group CVE behavior by their common
weakness enumeration (CWE) classification.
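The concluding step, grouping CVE behavior by CWE classification, can be sketched as a simple aggregation over SCA findings. The finding records below use a hypothetical shape, since the article does not specify a scanner output format.

```python
# Minimal sketch: group SCA findings by CWE so recurring weakness classes stand out.
# The findings list is a hypothetical shape; real SCA tools emit their own schemas.

from collections import defaultdict

findings = [
    {"cve": "CVE-2023-0001", "cwe": "CWE-79",  "package": "lib-a", "transitive": True},
    {"cve": "CVE-2023-0002", "cwe": "CWE-502", "package": "lib-b", "transitive": True},
    {"cve": "CVE-2023-0003", "cwe": "CWE-79",  "package": "lib-c", "transitive": False},
]

by_cwe = defaultdict(list)
for finding in findings:
    by_cwe[finding["cwe"]].append(finding["cve"])

# Report the most common weakness classes first.
for cwe, cves in sorted(by_cwe.items(), key=lambda item: len(item[1]), reverse=True):
    print(f"{cwe}: {len(cves)} finding(s) -> {', '.join(cves)}")
```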
Quote for the day:
"You are the only one who can use your
ability. It is an awesome responsibility." -- Zig Ziglar