SD-WAN and Cybersecurity: Two Sides of the Same Coin
SD-WAN is a natural extension of NGFWs that can leverage these devices’
content/context awareness and deep packet inspection. The same classification
engines used by NGFWs to drive security decisions can also determine the best
links to send traffic over. These engines can also guide queueing priorities,
which in turn enables fine-grained quality-of-service (QoS) controls. ...
Centralized cloud management is key to enabling incremental updates of these new
features. Further, flexible policy-driven routing enables service chaining of
new security features in the cloud rather than building these features into the
SD-WAN customer premises equipment (CPE). For example, cloud-based services for
advanced malware detection, secure web gateways, cloud-access security brokers,
and other security features can be enabled via the SD-WAN platform, seamlessly extending these and other next-gen security functions across the enterprise. The
coordination between the cloud-based SD-WAN service and the on-premises SD-WAN
CPE allows new security applications to benefit from both the convenience and
proximity of an on-site device and the near-infinitely scalable computing power
of the cloud.
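To make the idea concrete, here is a minimal Python sketch of how a classification verdict from an NGFW engine might drive both path selection and cloud service chaining. The application classes, link metrics, queue names, and service-chain names below are hypothetical illustrations, not any vendor's actual API.

```python
# Illustrative sketch only: application classes, link metrics, and
# service-chain names are invented for the example.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float

# Policy table: app class (as classified by the NGFW engine) -> routing/QoS/chaining intent
POLICY = {
    "voip":    {"max_latency_ms": 150, "queue": "priority",    "chain": []},
    "saas":    {"max_latency_ms": 300, "queue": "default",     "chain": ["secure-web-gateway"]},
    "unknown": {"max_latency_ms": 500, "queue": "best-effort",
                "chain": ["secure-web-gateway", "advanced-malware-detection"]},
}

def select_path(app_class: str, links: list[Link]) -> dict:
    """Pick the best link that satisfies the app's policy, and return the
    queue and cloud service chain to apply to the flow."""
    policy = POLICY.get(app_class, POLICY["unknown"])
    eligible = [l for l in links if l.latency_ms <= policy["max_latency_ms"]]
    # Prefer the lowest-loss link among those that meet the latency budget.
    best = min(eligible or links, key=lambda l: (l.loss_pct, l.latency_ms))
    return {"link": best.name, "queue": policy["queue"], "chain": policy["chain"]}

links = [Link("mpls", 40, 0.1), Link("broadband", 25, 0.8), Link("lte", 90, 1.5)]
print(select_path("voip", links))     # e.g. {'link': 'mpls', 'queue': 'priority', 'chain': []}
print(select_path("unknown", links))  # routed locally, then chained through cloud security services
```

The point of the sketch is that one classification result feeds three decisions at once: which link carries the flow, which queue it lands in, and which cloud security services it is chained through.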
Introducing AlloyDB for PostgreSQL: Free yourself from expensive, legacy databases
As organizations modernize their database estates in the cloud, many struggle to
eliminate their dependency on legacy database engines. In particular, enterprise
customers are looking to standardize on open systems such as PostgreSQL to
eliminate expensive, unfriendly licensing and the vendor lock-in that comes with
legacy products. However, running and replatforming business-critical workloads
onto an open source database can be daunting: teams often struggle with
performance tuning, disruptions caused by vacuuming, and managing application
availability. AlloyDB combines the best of Google’s scale-out compute and storage, industry-leading availability, security, and AI/ML-powered management with full PostgreSQL compatibility, delivering the performance, scalability, manageability, and reliability that enterprises expect for their mission-critical applications. As noted by Carl Olofson, Research Vice
President, Data Management Software, IDC, “databases are increasingly shifting
into the cloud and we expect this trend to continue as more companies digitally
transform their businesses. ...”
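Because AlloyDB is positioned as fully PostgreSQL-compatible, existing drivers and tools should work unchanged. Here is a minimal sketch using the standard psycopg2 driver; the host, database name, and credentials are placeholders, not a real instance.

```python
# Minimal sketch: connecting to an AlloyDB instance with an ordinary
# PostgreSQL driver. Host, database, and credentials are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="10.0.0.5",        # placeholder AlloyDB private IP
    dbname="appdb",
    user="app_user",
    password="change-me",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")  # returns a PostgreSQL-compatible version string
    print(cur.fetchone()[0])
conn.close()
```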
Visualizing the 5 Pillars of Cloud Architecture
Understanding your cloud infrastructure lets you more confidently ensure that customers can rely on your organization. If you can consistently meet workload demands and recover quickly from failures, customers can count on their service needs being met with little interruption to their experience. A great way to increase reliability
in your cloud infrastructure is to set key performance indicators (KPIs) that
allow you to both monitor your cloud and alert the proper team members when
something within the architecture fails. Using a cloud visualization platform
to filter your cloud diagrams and create different visuals of current, optimal
and potential cloud infrastructure allows you to compare what is currently
happening in the cloud to what should be happening. ... Many factors can
impact cloud performance, such as the location of cloud components, latency,
load, instance size, and monitoring. If any of these factors becomes a problem, it’s essential to have procedures in place that keep the impact on performance to a minimum.
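As a concrete illustration of the KPI-and-alert idea, here is a minimal Python sketch. The KPI names, thresholds, and the notification hook are all invented for the example; a real setup would pull measurements from your monitoring stack and page the on-call team.

```python
# Illustrative only: KPI names, thresholds, and the alert hook are hypothetical.
KPI_THRESHOLDS = {
    "error_rate_pct":   1.0,   # alert if more than 1% of requests fail
    "p95_latency_ms": 500.0,   # alert if 95th-percentile latency exceeds 500 ms
    "cpu_utilization": 85.0,   # alert if sustained CPU goes above 85%
}

def evaluate_kpis(current: dict[str, float]) -> list[str]:
    """Compare current measurements against thresholds and return alert messages."""
    alerts = []
    for kpi, limit in KPI_THRESHOLDS.items():
        value = current.get(kpi)
        if value is not None and value > limit:
            alerts.append(f"{kpi}={value} exceeds threshold {limit}")
    return alerts

def notify_on_call(alerts: list[str]) -> None:
    # Stand-in for a paging/chat/incident-management integration.
    for alert in alerts:
        print("ALERT:", alert)

notify_on_call(evaluate_kpis({"error_rate_pct": 2.3, "p95_latency_ms": 180.0}))
```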
Zero Trust Does Not Imply Zero Perimeter
Don’t get me wrong, the concept of trusting the perimeter is fairly
old-school/outdated and does come into conflict with more modern “cloud
native” approaches. Remote users will also have issues with latency, especially if you require them to VPN to your on-premises network before connecting to the cloud. The modern approach, at least in theory, is not to trust that perimeter. This doesn’t mean you have to get rid of it, but rather that it is no longer the default, since the perimeter is becoming increasingly porous and ill-defined. Contrast this with a “zero-trust” model, where both the user’s identity and the device must be proven before any data, applications, assets and/or services (DAAS) are permitted to communicate with any service. Going further down
memory lane, back in the day the perimeter used to mean that everything was
located within your “castle” and perimeter-based system access was “all or
nothing” by default. Once users were in, they were in, and the same applied to any other type of actor, including malicious ones. Once the perimeter was
breached, the malicious actor effectively had unlimited access to everything
within the perimeter.
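Here is a minimal sketch of the decision a zero-trust model makes on every request: both the user identity and the device must be proven before the DAAS resource is reachable, regardless of network location. The fields and checks are illustrative stand-ins for an identity provider and a device-posture service.

```python
# Illustrative sketch: identity and device checks are stubbed out; a real
# deployment would query an identity provider and a device-posture service.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_passed: bool
    device_managed: bool
    device_patched: bool
    resource: str            # the data, application, asset, or service (DAAS)

def is_access_allowed(req: AccessRequest) -> bool:
    """Zero trust: nothing is implied by network location; every request must
    prove both user identity and device health before the resource is reachable."""
    identity_ok = req.mfa_passed                        # e.g. verified by the IdP
    device_ok = req.device_managed and req.device_patched
    return identity_ok and device_ok

# Even a request originating "inside the perimeter" is denied without proof.
print(is_access_allowed(AccessRequest("alice", True, True, True, "payroll-app")))      # True
print(is_access_allowed(AccessRequest("mallory", False, True, False, "payroll-app")))  # False
```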
As Inflation Skyrockets, Is Now the Time to Pull Back on New IT Initiatives?
There are two big risks associated with pulling back, says Ken Englund,
technology sector leader at business advisory firm EY Americas. Pulling back
on projects may increase the risk of IT talent turnover, he warns. “Pausing or
changing priorities for tactical, short-term reasons may encourage talent to
depart for opportunities on other companies' transformational programs.” Also,
given current inflationary pressure, “the cost to restart a project may be
materially more expensive in the future than it is to complete today.” There's
no doubt that pulling back on IT spend saves money over the short term, but
short-sighted savings could come at the cost of long-term success. “If an
organization must look to cut budgets, start with a strategic review of all
projects, identifying which have the greatest possible impact and least amount
of risk,” Lewis-Pinnell advises. Examine each project's total cost of ownership and rank the projects by cost and impact. Strategic selection of IT
initiatives can help IT leaders manage through inflationary challenges. “Don’t
be afraid to cut projects that aren’t bringing you enough benefit,” she adds.
Cyber-Espionage Attack Drops Post-Exploit Malware Framework on Microsoft Exchange Servers
CrowdStrike's analysis shows the modules are designed to run only in-memory to
reduce the malware's footprint on an infected system — a tactic that
adversaries often employ in long-running campaigns. The framework also has
several other detection-evasion techniques that suggest the adversary has deep
knowledge of Internet Information Services (IIS) Web applications. For
instance, CrowdStrike observed one of the modules leveraging undocumented
fields in IIS software that are not intended to be used by third-party
developers. Over the course of their investigation of the threat, CrowdStrike
researchers saw evidence of the adversaries repeatedly returning to
compromised systems and using IceApple to execute post-exploitation
activities. Param Singh, vice president of CrowdStrike's Falcon OverWatch
threat-hunting services, says IceApple differs from other post-exploitation toolkits in that it remains under active development even as it is being deployed and used.
Zero Trust, Cloud Adoption Drive Demand for Authorization
Hutchinson advises enterprises to leverage a model that combines traditional
coarse-grained role-based access control (RBAC) rules with a collection of finer-grained attribute-based access control (ABAC) rules that can describe not
only the consumer of a service but also the data, system, environment and
function. "While traditional RBAC models are easier for developers and
auditors to understand, they usually result in role explosion as the system
struggles to provide finer-grained authorization. ABAC addresses that
fine-grained need but sacrifices both management and understanding as the vast
array of elements necessary for such a system makes organizing the data
extremely complex," says Hutchinson. He adds: "A complex policy rule might
say: 'A customer's transactional data can only be viewed via a secure device
at a bank branch by an accredited teller who is from the same country of
origin as the customer.' Instead of creating a plethora of new roles to cover
all of the different possible combinations, I can use the teller role while
also checking attributes that will provide device profile, location,
accreditation status and country of origin.”
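To show how the teller example combines the two models, here is a minimal Python sketch: a coarse RBAC check gates entry, and fine-grained ABAC attributes decide the specific request. The field names and values are invented for illustration.

```python
# Illustrative sketch of the combined RBAC + ABAC check described above.
# Attribute names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class RequestContext:
    roles: set
    device_secure: bool
    location: str            # where the request originates
    accredited: bool
    user_country: str
    customer_country: str

def can_view_customer_transactions(ctx: RequestContext) -> bool:
    # RBAC: coarse-grained gate -- the caller must hold the teller role.
    if "teller" not in ctx.roles:
        return False
    # ABAC: fine-grained attributes refine the decision without creating new roles.
    return (
        ctx.device_secure
        and ctx.location == "bank-branch"
        and ctx.accredited
        and ctx.user_country == ctx.customer_country
    )

ctx = RequestContext({"teller"}, True, "bank-branch", True, "CA", "CA")
print(can_view_customer_transactions(ctx))   # True: role plus all attributes satisfied
```

The design point is exactly the one Hutchinson makes: one role, reused, with attributes doing the fine-grained work instead of a plethora of role combinations.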
The Cloud Native Community Needs to Talk about Testing
After gathering feedback from the community, including DevOps and QA engineers, the general consensus was that cloud native is clearly a developing field that is still establishing its best practices. We can look at several areas that are still maturing. Not that long
ago, we started to hear about DevOps, which brought the concept of shorter and
more efficient release cycles, something that feels like the norm today. More
recently, we saw GitOps following the same tracks, and we are seeing that more
teams are using Git to manage their infrastructure. It’s my belief that cloud
native testing will soon follow suit, where teams will not see testing as a
burden or an extra amount of work that is only “nice to have” but something
that is part of the process that will save them a lot of development time. I’m
sure all of you reading this are tech enthusiasts like me and probably have
been building and shipping products for quite some time, and I’m also sure
many of you have noticed that there are major challenges with integration testing
on Kubernetes, especially when it comes to configuring tests in your
continuous integration/continuous delivery (CI/CD) pipelines to follow a
GitOps approach.
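As a small illustration of what such a pipeline test can look like, here is a minimal sketch using the official Kubernetes Python client. The deployment name and namespace are placeholders; in a GitOps-style pipeline this would run after the manifests in Git have been applied to a test cluster.

```python
# Illustrative integration test: "checkout" and "staging" are hypothetical
# names. The test asserts that a deployment applied by the pipeline has
# reached its desired replica count.
from kubernetes import client, config

def test_deployment_is_ready():
    config.load_kube_config()                  # or config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name="checkout", namespace="staging")
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    assert ready == desired, f"only {ready}/{desired} replicas ready"
```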
Hybrid work: Best practices in times of uncertainty
Humans are social creatures who require some contact with others, but
determining the right balance between proximity and contact in the virtual
workplace is difficult – too much contact can be exhausting, and too little
can lead to isolation. Work to find a balance that can help support your staff
as they navigate the nuanced world of remote work. It’s also important to
adopt a blended approach to technology and physical space. A combination of
co-working spaces and telepresence tools can be just what you need to
facilitate contact and collaboration among employees. This allows for an open
environment where people can both collaborate and decompress in their own way
while also bringing a sense of connection that may be impossible to achieve in
a virtual environment. ... It’s not easy to develop policies that address both
business and human needs in remote and hybrid work environments, but one thing
remains certain: flexibility paired with autonomy is essential for success.
CIOs play a critical role in creating an environment of flexibility and
autonomy for staff members – one that can help support their professional
development while also fostering increased satisfaction and success.
10 best practices to reduce the probability of a material breach
Cybersecurity is as much about humans as it is about technology. Organizations
see fewer breaches and faster times to respond when they build a “human layer”
of security, create a culture sensitive to cybersecurity risks, build more
effective training programs, and develop clear processes for recruiting and
retaining cyber staff. ... Organizations with no breaches invest in a mix of
solutions, from the fundamentals such as email security and identity
management, to more specialized tools such as security information and event
management systems (SIEMs). These organizations are also more likely to take a
multi-layered, multi-vendor security approach to monitor and manage risks
better through a strong infrastructure. ... With digital and physical worlds
converging, the attack surfaces for respondents are widening. Organizations
that prioritize protection of interconnected IT and OT assets experience fewer
material breaches and faster times to detect and respond.
Quote for the day:
"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis