Start-ups, established enterprises, and tech: what is the cost of change?
There is no tech stack that will give you a leg-up because it’s new and
different from what everybody else is using. The only thing that will give you a
leg-up is something that everybody already knows how to use. But what about
“this is the best tool for the job”? That way of thinking can be a myopic view
of both the words ‘best’ and ‘job.’ The job is keeping the organisation in
business. The best tool will occupy the ‘least worst’ position for as many
problems as possible. Pragmatism wins the day. Build things that are simple,
build things that are boring, and build things that are battle-tested. Isolate
things that are specific to one area of your business and keep all of that code
together, so that when you have to make decisions about encapsulating or
abstracting it, it is all in one place. Then you can define boundaries. Make sure
you define those boundaries within a simple code base. Think about this in terms
of cheap vs expensive: it’s cheap to stick to those boundaries. Understand your
boundaries, be clean with them and adjust them as you’re evolving. And don’t
ever stop! The cost of reshaping a function name, or its position in a code
base, is extremely low relative to the cost of moving things between
services.
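As a minimal sketch of what "defining boundaries within a simple code base" can look like, consider a hypothetical billing area kept behind one narrow entry point. The module, function names, and the tax rate are invented for illustration; the point is only that internal renames and restructuring stay cheap because nothing outside the boundary depends on them.

```python
# Hypothetical "billing" boundary inside a single code base.
# Everything billing-specific lives together; the rest of the application
# only calls the narrow public function below.

from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    amount_cents: int


def create_invoice(customer_id: str, amount_cents: int) -> Invoice:
    """Public entry point for the billing boundary."""
    return _apply_tax(Invoice(customer_id, amount_cents))


def _apply_tax(invoice: Invoice) -> Invoice:
    # Internal helper: renaming or reshaping this is cheap because no code
    # outside the billing module depends on it. The 20% rate is arbitrary.
    return Invoice(invoice.customer_id, int(invoice.amount_cents * 1.2))


print(create_invoice("c-42", 1000))
```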
How to avoid 4 common zero trust traps (including one that could cost you your job)
The trap most practitioners fall into is the need to understand and define every
identity in their organizations. Initially, this seems simple, but then you
realize there are service accounts and machine and application identities. It’s
even more difficult because that identity project has to include permissions and
each application has its own schema for what permissions are granted. There’s no
standardization. Instead, focus on the user accounts. When we start with the
application ecosystems, our intent is to focus on the user and application
boundary. Now if we look at identities, start with interactive logins, i.e.,
users who need to access an account to perform an action. Ensure non-repudiation
by getting rid of generic logins, using certificates and rotating credentials.
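A rough sketch of that first step might look like the following: scan an account inventory, flag generic (shared) logins that break non-repudiation, and flag credentials overdue for rotation. The account names, fields, and the 90-day rotation window are all assumptions made for the example.

```python
# Illustrative sketch: flag generic (shared) logins and stale credentials
# in an account inventory, as a first step toward non-repudiation.

from datetime import datetime, timedelta

GENERIC_NAMES = {"admin", "root", "service", "operator", "shared"}
MAX_CREDENTIAL_AGE = timedelta(days=90)  # assumed rotation policy

accounts = [
    {"name": "admin", "last_rotated": datetime(2021, 1, 10), "interactive": True},
    {"name": "j.doe", "last_rotated": datetime(2021, 11, 2), "interactive": True},
    {"name": "backup-svc", "last_rotated": datetime(2020, 6, 1), "interactive": False},
]

now = datetime(2021, 12, 1)  # fixed "now" so the example is reproducible

for acct in accounts:
    issues = []
    if acct["interactive"] and acct["name"] in GENERIC_NAMES:
        issues.append("generic login: actions cannot be attributed to a person")
    if now - acct["last_rotated"] > MAX_CREDENTIAL_AGE:
        issues.append("credential overdue for rotation")
    if issues:
        print(f"{acct['name']}: " + "; ".join(issues))
```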
... Most boardrooms see zero trust as a way of using any device to be able to
conduct business. That should be the end result of a robust zero trust program.
If it is where you start, you will be overwhelmed with breaches. The purpose of
zero trust is to technically express the fact that you don’t trust any device or
network. You don’t accomplish that by closing your eyes to it.
In Secure Silicon We Trust
With RoT technology, "It's possible to gain a high degree of assurance that
what's expected to be running is actually running," MacDonald explains. The
technology achieves this level of protection using an encrypted instruction set
that is etched into the chip at the time it is manufactured. When the system
boots, the chip checks this immutable signature to validate the BIOS. If
everything checks out, the computer loads the software stack. If there's a
problem, it simply won't boot. Secure silicon doesn't directly protect against
all types of threats, but it does ensure that a system is secure at the
foundational level. This is critical because attackers who gain access to the
BIOS or firmware can potentially bypass the operating system and tamper with
encryption and antivirus software, notes Rick Martinez, senior distinguished
engineer in the Client Solutions Group Office of the CTO at Dell Technologies.
"It provides a reliable trust anchor for supply chain security for the platform
or device," Martinez notes. Intel has introduced SGX (Software Guard
Extensions), which bypasses a system's OS and virtual machine (VM) layers while
altering the way the system accesses memory. SGX also supports verification of
the application and of the hardware it is running on.
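The boot-time check described above can be illustrated with a simplified sketch: measure the firmware image, compare it against an immutable reference value, and refuse to continue if the two do not match. Real roots of trust rely on signature verification anchored in hardware; the plain digest comparison and the values below are only stand-ins.

```python
# Simplified illustration of a root-of-trust boot check. The "expected"
# value stands in for data fused into the chip at manufacturing time.

import hashlib

EXPECTED_BIOS_DIGEST = hashlib.sha256(b"known-good BIOS image").hexdigest()


def verify_and_boot(bios_image: bytes) -> None:
    measured = hashlib.sha256(bios_image).hexdigest()
    if measured != EXPECTED_BIOS_DIGEST:
        # Mirrors the behaviour described in the article: if there's a
        # problem, the system simply won't boot.
        raise SystemExit("BIOS measurement mismatch: refusing to boot")
    print("BIOS verified, loading software stack...")


verify_and_boot(b"known-good BIOS image")   # boots
# verify_and_boot(b"tampered BIOS image")   # would refuse to boot
```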
Finding remote work a struggle? Here's how to get your team back on track
"If you want to support people who are remote working, you cannot be an
old-fashioned leader. That sounds critical, but you can't be the kind of
leader that is saying, 'I don't really like people who are remote working and
I want to know that they're doing stuff', and then always checking that the
green light's on", she says. Evidence from the Harvard Business Review
suggests Dawson is onto something. HBR says business leaders must understand
that being nice to each other and goofing around together is part of the work
we do. The informal interactions at risk in hybrid and remote work are not
distractions; instead, they foster the employee connections that feed
productivity and innovation. Dawson says successful business leaders in the
future will have to be more empathetic. They will have to be unafraid of
asking people how they're getting on. That question will need to be posed in
the right way: rather than checking up on staff to see if they're at their
desks, leaders should have conversations with staff about their feelings and
objectives.
NaaS: Network-as-a-service is the future, but it’s got challenges
Full adoption of NaaS is still in its early days because most enterprise
network functions require physical hardware to transport data to and from
endpoints and the data center or internet. That is a challenge to deliver as a
service. The Layer 4-7 functions are already available in a cloud-delivery
model. Over the next five-plus years, IT teams will increasingly adopt NaaS as
suppliers deliver hybrid offerings that include software, cloud intelligence,
and the option for management of on-premises hardware. These services will be
subscription-based and pay as you go, making networking more of an operational
cost than a capital cost. They will provide centralized management with the
ability to easily add and remove network and security functionality. The
services will enable outsourcing of enterprise network operations to providers
that may include vendors and their partners, who provide service-level
agreements (SLAs) defining uptime and problem-resolution guarantees. Right
now, NaaS is best suited to organizations with a lean-IT philosophy and a need
to provide networking support for at-home and branch locations.
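To make the "centralized management with the ability to easily add and remove network and security functionality" idea concrete, here is a purely hypothetical sketch of a declarative NaaS subscription; the service names, sites, SLA fields, and the helper function are invented and do not correspond to any vendor's API.

```python
# Hypothetical NaaS subscription: subscription-based, pay-as-you-go,
# with per-site network and security functions that can be toggled.

naas_subscription = {
    "billing": "pay-as-you-go",
    "sla": {"uptime": "99.9%", "max_resolution_hours": 4},
    "sites": {
        "branch-london": ["sd-wan", "firewall"],
        "home-workers": ["sd-wan", "secure-web-gateway"],
    },
}


def enable_function(subscription: dict, site: str, function: str) -> None:
    """Turn on an additional network/security function for one site."""
    subscription["sites"].setdefault(site, [])
    if function not in subscription["sites"][site]:
        subscription["sites"][site].append(function)


enable_function(naas_subscription, "branch-london", "secure-web-gateway")
print(naas_subscription["sites"]["branch-london"])
# ['sd-wan', 'firewall', 'secure-web-gateway']
```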
Industrial AI prepares for the mainstream – How asset-intensive businesses can get themselves ready
A future-proof industrial AI infrastructure requires laying the groundwork for
industrial AI readiness, which in turn requires collaboration across
industrial environments. The software, hardware, architecture, and
personnel elements will form the building blocks of the industrial AI
infrastructure. And that infrastructure is what empowers organisations to take
their industrial AI proof-of-concepts and mature them into tangible solutions
that drive ROI. An industrial AI infrastructure needs to accelerate time to
market, build operational flexibility and scalability into AI investments and
harmonise the AI model lifecycle across all applications. Roles, skills, and
training are critical. Executing industrial AI relies on having the right
people in charge. That means making a deliberate effort to cultivate the
skills and approaches needed to create and deploy AI-powered initiatives
organisation-wide. Finally, ethical and responsible AI use is predicated on
transparency, and transparency means keeping everyone in the loop:
creating clear channels of communication, reliable process documentation and
alignment across all stakeholders.
Operating in an increasingly digitalized world
Consumers have become less cost-conscious and more focused on sustainability,
he said. Those are "top of mind issues. [Consumers] will pick slower shipping
if they see it's good for the environment. They want to support their local
communities so they're shopping more locally." Buyers are also looking for
unique products and "no longer the same old, same old." Merchants have started
creating 3D models of their products, Jaffer said. Digital transformation will
help with environmental sustainability and climate change, Lapiello said.
Organizations will have to fully embrace privacy, cybersecurity and artificial
intelligence, he said. "By 2030, quantum computing will be available in some
shape or form and will be an incredibly disruptive technology," Lapiello said.
"I truly believe the current machine learning generating predictions based on
correlations will become obsolete and will be replaced by causal AI, which is
quite ripe and will allow for better decisions." One of the biggest changes
will be that people will have moved away from using mobile phones to glasses,
Hackl said. "It's not a question of will it happen, but when ... We're 3D
beings in a 3D world and the content you'll consume through these glasses will
have dimensions" that change what we see in our surroundings.
SD-WAN surges over past two years as MPLS plummets
“SD-WAN has dramatically increased in adoption in the past couple of years.
The pandemic slowed roll-outs for a time, but increased interest in adoption.
SD-WAN frees WAN managers to select a broad mix of underlay technologies, and
can also boost performance.” The report aimed to offer a clear picture of how
mid-size to large enterprises are adjusting to emerging WAN technologies,
helping suppliers make more informed decisions. It provided an in-depth
analysis based on the experiences of WAN managers from 125 companies, with
those represented in the survey having a median revenue of $10bn and a range
of IT managers covering the design, sourcing and management of US national,
regional and global corporate wide-area computer networks. The standout
finding of the study was that 43% of enterprises surveyed had installed
SD-WAN in 2020, compared with just 18% in 2018. Driving this growth – and key
motivators for WAN managers pursuing SD-WAN, according to the survey – were
increasing site capacity and using alternative access solutions. Two-fifths of
respondents preferred a co-managed SD-WAN setup and, on top of this,
enterprises were running MPLS at an average of 71% of sites during the
three-year period of 2018-2020.
Applying CIAM Principles to Employee Authentication
To enhance employee authentication for system access, some organizations,
including Navy Federal Credit Union and the travel portal Priceline, are
adopting customer identity and access management, or CIAM, procedures for
their workforces. Those include dynamic authorization, continuous
authentication and the use of various forms of biometrics. "With the death of
user ID and password, I am trying to create digital layers of authentication
on the workforce side," Malta says. "We are looking to be able to let the
hybrid workforce ‘inside our network’ in a very frictionless way." Joe
Dropkin, principal server engineer at Priceline, says he's been applying the
concept of CIAM to employee authentication because of the shift toward
applications and data storage in the cloud. “We did not want our employees to
go through multiple layers of authentication to SaaS applications. The users
now have a single 'pane of glass' to look at,” he says. Priceline employees no
longer have to log in multiple times to access different applications. Once
they're authenticated, using multiple layers, they gain access to all
appropriate systems, Dropkin says.
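The "authenticate once, then access everything" pattern can be sketched as follows. This is not Priceline's actual setup: the shared secret, token format, and application names are assumptions, and a real deployment would use an identity provider issuing standard tokens rather than this hand-rolled HMAC example.

```python
# Illustrative single sign-on sketch: authenticate once, then present the
# same signed session token to every application instead of logging in
# separately to each one.

import hashlib
import hmac

SSO_SECRET = b"demo-shared-secret"  # in practice, held by the identity provider


def issue_token(user: str) -> str:
    signature = hmac.new(SSO_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{signature}"


def app_accepts(token: str) -> bool:
    user, _, signature = token.partition(":")
    expected = hmac.new(SSO_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)


token = issue_token("j.doe")  # single authentication step
for app in ("expenses", "hr-portal", "wiki"):
    print(app, "->", "access granted" if app_accepts(token) else "denied")
```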
Beyond MITRE ATT&CK: The Case for a New Cyber Kill Chain
MITRE ATT&CK, by contrast, is a more modern approach focused on TTPs. It
seeks to classify attackers' goals, tasks, and steps; as such, it is a much
more comprehensive approach to modeling an attack. That said, MITRE ATT&CK
also has its shortcomings, notably when a security team is using an XDR
platform. In an automated detection scenario, defenders might see symptoms,
such as suspicious user behavior, without knowing the exact root cause, and
such scenarios are harder to fit into MITRE ATT&CK. Stellar Cyber, a
developer of XDR technology, argues for the creation of a new framework. It
envisions an XDR framework/kill chain leveraging MITRE ATT&CK on the known
root causes and attackers' goals but going further regarding other data
sources, such as anomalous user behavior. There is precedent for an individual
vendor feeling a need to extend or amend frameworks. FireEye came up with its
own version of the kill chain, which put more emphasis on attackers' ability
to maintain persistence, while endpoint detection and response (EDR) heavyweight
CrowdStrike uses MITRE ATT&CK extensively but provides a set of
nonstandard categories to cover a broader range of scenarios.
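The gap the article describes can be pictured with a small sketch: detections with known root causes map cleanly to MITRE ATT&CK technique IDs, while anomaly-based detections from an XDR platform need a separate bucket. The detection names and the mapping below are illustrative, not any vendor's taxonomy; the technique IDs are real ATT&CK entries.

```python
# Sketch: map known root causes to ATT&CK technique IDs, and fall back to
# an "unmapped anomaly" bucket for symptom-only detections.

ATTACK_MAPPING = {
    "credential_brute_force": "T1110",    # Brute Force
    "phishing_link_clicked": "T1566",     # Phishing
    "script_interpreter_abuse": "T1059",  # Command and Scripting Interpreter
}


def classify(detection: str) -> str:
    technique = ATTACK_MAPPING.get(detection)
    if technique:
        return f"{detection}: ATT&CK {technique}"
    # Symptom without a known root cause, e.g. anomalous user behaviour.
    return f"{detection}: unmapped anomaly (needs analyst triage)"


for d in ("credential_brute_force", "unusual_login_hours"):
    print(classify(d))
```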
Quote for the day:
"Don't be buffaloed by experts and
elites. Experts often possess more data than judgement." --
Colin Powell