How to make the consultant’s edge your own
What actually works, should the organization be led by a braver sort of
leadership team, is a change in the culture of management at all levels. The
change is that when something bad happens, everyone in the organization, from
the board of directors on down, assumes the root cause is systemic, not a person
who has screwed up. In the case of my client’s balance sheet fiasco, the root
cause turned out to be everyone doing exactly what the situation they faced
Right Now required. What had happened was that a badly delayed system
implementation, coupled with the strategic decision to freeze the legacy system
being replaced, led to a cascade of PTFs (Permanent Temporary Fixes, for the uninitiated) to get through month-end closes. The PTFs, being temporary, weren’t tested as thoroughly as production code. But being permanent, they accumulated and sometimes conflicted with one another, requiring more PTFs each month to get everything to process. The result: month-ends did close, nobody had to tell the
new system implementation’s executive sponsor about the PTFs and the risks they
entailed, and nobody had to acknowledge that freezing the legacy system had
turned out to be a bad call.
SBOM Everywhere: The OpenSSF Plan for SBOMs
The SBOM Everywhere working group will focus on ensuring that existing SBOM
formats match documented use cases and developing high-quality open source tools
to create SBOM documents. Although some of this tooling exists today, more
tooling will need to be built. The working group has also been tasked with
developing awareness and education campaigns to drive SBOM adoption across open
source, government and commercial industry ecosystems. Notably, the U.S. federal
government has taken a proactive stance on requiring the use of SBOMs for all
software consumed and produced by government agencies. The Executive Order on
Improving the Nation’s Cybersecurity cites the increased frequency and
sophistication of cyberattacks as a catalyst for the public and private sectors
to join forces to better secure software supply chains. Among the mandates is
the requirement to use SBOMs to enhance software supply chain security. For
government agencies and the commercial software vendors who partner and sell to
them, the SBOM-fueled future is already here.
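To make the idea concrete, here is a minimal, hypothetical sketch of what an SBOM document can look like, written as a short Python script that emits a CycloneDX-style JSON file. The single listed component and its version are invented for illustration; real SBOM tooling derives the component inventory from the build itself.

import json
import uuid
from datetime import datetime, timezone

# Assemble a minimal CycloneDX-style SBOM. The component below is a
# hypothetical dependency used only to illustrate the document's shape.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "version": 1,
    "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
    "components": [
        {
            "type": "library",
            "name": "requests",
            "version": "2.31.0",
            "purl": "pkg:pypi/requests@2.31.0",
        }
    ],
}

with open("bom.json", "w") as f:
    json.dump(sbom, f, indent=2)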
Cybersecurity pros spend hours on issues that should have been prevented
“Security is everyone’s job now, and so disconnects between security and
development often cause unnecessary delays and manual work,” said Invicti chief
product officer Sonali Shah. “Organizations can ease stressful overwork and
related problems for security and DevOps teams by ensuring that security is
built into the software development lifecycle, or SDLC, and is not an
afterthought,” Shah added. “Application security scanning should be automated
both while the software is being developed and once it is in production. By
using tools that offer short scan times, accurate findings prioritized by
contextualized risk and integrations into development workflows, organizations
can shift security left and right while efficiently delivering secure code.”
When it comes to software development, innovation and security don’t need to
compete, according to Shah. Rather, they’re inherently linked. “When you have a
proper security strategy in place, DevOps teams are empowered to build security
into the very architecture of application design,” Shah said.
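As a rough illustration of building security into the SDLC, the sketch below is a generic CI gate, not Invicti’s product or API: it assumes an earlier pipeline step has written scanner findings to a hypothetical scan-results.json file, and it fails the build when anything at or above a chosen severity shows up.

import json
import sys

# Rank severities so findings can be compared against a threshold.
SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_AT = "high"  # block the merge on high or critical findings

def should_block(findings, threshold=FAIL_AT):
    limit = SEVERITY_ORDER[threshold]
    return any(
        SEVERITY_ORDER.get(f.get("severity", "info"), 0) >= limit
        for f in findings
    )

if __name__ == "__main__":
    # scan-results.json is a hypothetical artifact produced by the scanner
    # in an earlier CI step; assumed to be a JSON list of finding objects.
    with open("scan-results.json") as fh:
        findings = json.load(fh)
    if should_block(findings):
        print("Blocking: findings at or above the severity threshold.")
        sys.exit(1)
    print("No blocking findings; pipeline continues.")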
SmartNICs power the cloud, are enterprise datacenters next?
For all the potential SmartNICs have to offer, there remain substantial barriers to overcome. The high price of SmartNICs relative to standard NICs is one of many. Networking vendors have been chasing this kind of I/O offload functionality for years, with things like TCP offload engines, Kerravala said. "That never really caught on, and cost was the primary factor there." Another
challenge for SmartNIC vendors is the operational complexity associated with
managing a fleet of SmartNICs distributed across a datacenter or the edge.
"There is a risk here of complexity getting to the point where none of this
stuff is really usable," he said, comparing the SmartNIC market to the early
days of virtualization. "People were starting to deploy virtual machines like
crazy, but then they had so many virtual machines they couldn't manage them," he
said. "It wasn't until VMware built vCenter, that companies had one unified
control plane for all their virtual machines. We don't really have that on the
SmartNIC side." That lack of centralized management could make widespread
deployment in environments that don't have the resources commanded by the major
hyperscalers a tough sell.
Fantastic Open Source Cybersecurity Tools and Where to Find Them
Organizations benefit greatly when threat intelligence is crowdsourced and
shared across the community, said Sanjay Raja, VP of product at Gurucul. "This can provide immediate protection or detection capabilities," he said, "while reducing the dependency on vendors, who often do not provide updates to systems for weeks or even months." For example, CISA has an Automated Indicator Sharing platform. Meanwhile, in Canada, there's the Canadian Cyber
Threat Exchange. "These platforms allow for the real-time exchange and
consumption of automated, machine-readable feeds," explained Isabelle
Hertanto, principal research director in the security and privacy practice at
Info-Tech Research Group. This steady stream of indicators of compromise can
help security teams respond to network security threats, she told Data Center
Knowledge. In fact, the problem isn't the lack of open source threat
intelligence data, but an overabundance, she said. To help data center
security teams cope, commercial vendors are developing AI-powered solutions to
aggregate and process all this information. "We see this capability built into
next generation commercial firewalls and new SIEM and SOAR platforms,"
Hertanto said.
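A rough sketch of what consuming such a machine-readable feed can look like in practice; the feed URL and field names below are hypothetical stand-ins (real platforms such as CISA's AIS exchange indicators over STIX/TAXII), so treat this only as the shape of the workflow.

import requests

FEED_URL = "https://example.org/ioc-feed.json"  # hypothetical feed endpoint

def fetch_indicators(url=FEED_URL):
    # Pull the feed and assume it is a JSON list of objects with
    # "type" (e.g., "ipv4", "domain", "sha256") and "value" fields.
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return [(item["type"], item["value"]) for item in resp.json()]

if __name__ == "__main__":
    for ioc_type, value in fetch_indicators():
        # In a real pipeline these would be pushed to a firewall blocklist,
        # SIEM watchlist, or SOAR playbook rather than printed.
        print(f"{ioc_type}: {value}")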
Living better with algorithms
Together with Shah and other collaborators, Cen has worked on a wide range of
projects during her time at LIDS, many of which tie directly to her interest
in the interactions between humans and computational systems. In one such
project, Cen studies options for regulating social media. Her recent work
provides a method for translating human-readable regulations into
implementable audits. To get a sense of what this means, suppose that
regulators require that any public health content — for example, on vaccines —
not be vastly different for politically left- and right-leaning users. How
should auditors check that a social media platform complies with this
regulation? Can a platform be made to comply with the regulation without
damaging its bottom line? And how does compliance affect the actual content
that users do see? Designing an auditing procedure is difficult in large part
because there are so many stakeholders when it comes to social media. Auditors
have to inspect the algorithm without accessing sensitive user data. They also
have to work around trade secrets, which are legally protected and can prevent them from getting a close look at the very algorithm they are auditing.
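To give a flavor of what an implementable audit of such a regulation might check (a greatly simplified sketch, not Cen's actual method): compare how often each piece of vaccine-related content is shown to left- and right-leaning users, and flag the platform when the two exposure distributions diverge too much. The impression logs and threshold below are invented for illustration.

from collections import Counter

def exposure_distribution(impressions):
    # Fraction of impressions that each content item receives for one user group.
    counts = Counter(impressions)
    total = sum(counts.values())
    return {item: n / total for item, n in counts.items()}

def total_variation(p, q):
    # Total variation distance between two distributions (0 = identical, 1 = disjoint).
    items = set(p) | set(q)
    return 0.5 * sum(abs(p.get(i, 0.0) - q.get(i, 0.0)) for i in items)

# Hypothetical impression logs: which vaccine-related posts each group was shown.
left_feed = ["post_a", "post_a", "post_b", "post_c"]
right_feed = ["post_a", "post_c", "post_c", "post_d"]

gap = total_variation(exposure_distribution(left_feed), exposure_distribution(right_feed))
print(f"exposure gap = {gap:.2f}")  # flag for review if the gap exceeds an agreed threshold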
CFO perspectives on leading agile change
In an agile organization, leadership-level priorities cascade down to inform
every part of the business. For this reason, CFOs talked extensively about the
importance of setting up a prioritization framework that is as objective as
possible. Many participants mentioned that it can be challenging to work out
priorities through the quarterly business review (QBR) process, because different teams lack an
institutional mechanism through which to weigh different work segments against
one another and prioritize between them. Most CFOs agreed that some degree of
direction from the top is required in this area. One CFO said he thinks of his
organization as a “prioritization jar”: leadership puts big stones in the jar
first and then fills in the spaces with sand. These prioritization “stones”
might be six key projects identified by management, or they might be 20 key
initiatives chosen through a mixture of leadership direction and feedback from
tribes. A second challenge emerged regarding shifting resources among teams or
clusters responsible for individual initiatives. When asked what they would do
if they had a magic wand, several CFOs said they need better ways to
reallocate resources at short notice.
Friend Or Foe: Delving Into Edge Computing & Cloud Computing
One of the most significant features of edge computing is decentralization.
Edge computing distributes compute and communication resources rather than concentrating them in a single computing infrastructure and transmission channel. It addresses computational needs by using the cloud at its edge: when data is gathered or a user takes a particular action, it can be processed in real time wherever that is needed. The two most significant advantages of edge computing are increased performance and lower operational expenses. ... The first thing to realize is that cloud
computing and edge computing are not rival technologies. They aren’t different
solutions to the same problem; rather, they’re two distinct ways of addressing
particular problems. Cloud computing is ideal for scalable applications that
must be ramped up or down depending on demand. Web servers, for example, can request extra resources during periods of heavy usage to ensure smooth service without incurring any long-term hardware expenses.
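A toy sketch of the division of labor described above, with latency-sensitive checks handled on the edge device and bulkier, scalable work deferred to the cloud; the threshold, batch size, and upload callback are all hypothetical.

# Latency-sensitive readings are acted on locally; raw data is batched
# and shipped to the cloud for scalable storage and analytics.
EDGE_ALERT_THRESHOLD = 90.0  # hypothetical sensor limit
BATCH_SIZE = 100             # hypothetical upload batch size

_buffer = []

def handle_reading(value, send_to_cloud):
    if value > EDGE_ALERT_THRESHOLD:
        # Real-time path: respond at the edge, no round trip to the cloud.
        print(f"edge alert: reading {value} exceeds threshold")
    _buffer.append(value)
    if len(_buffer) >= BATCH_SIZE:
        # Deferred path: let the elastic cloud side absorb the bulk work.
        send_to_cloud(list(_buffer))
        _buffer.clear()

if __name__ == "__main__":
    handle_reading(95.2, send_to_cloud=lambda batch: print(f"uploaded {len(batch)} readings"))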
Why AI and autonomous response are crucial for cybersecurity
Remote work has become the norm, and outside the office walls, employees are
letting down their personal security defenses. Cyber risks introduced by the
supply chain via third parties are still a major vulnerability, so
organizations need to think about not only their defenses but those of their
suppliers to protect their priority assets and information from infiltration
and exploitation. And that’s not all. The ongoing Russia-Ukraine conflict has
provided more opportunities for attackers, and social engineering attacks have
ramped up tenfold and become increasingly sophisticated and targeted. Both
play into the fears and uncertainties of the general population. Many security
industry experts have warned about future threat actors leveraging AI to
launch cyber-attacks, using intelligence to optimize routes and hasten their
attacks throughout an organization’s digital infrastructure. “In the modern
security climate, organizations must accept that it is highly likely that
attackers could breach their perimeter defenses,” says Steve Lorimer, group
privacy and information security officer at Hexagon.
Service Meshes Are on the Rise – But Greater Understanding and Experience Are Required
We explored the factors influencing people’s choices by asking which features
and capabilities drive their organization’s adoption of service mesh. Security
is a top concern, with 79% putting their faith in techniques such as mTLS
authentication of servers and clients during transactions to help reduce the
risk of a successful attack. Observability came a close second behind
security, at 78%. As cloud infrastructure has grown in importance and
complexity, we’ve seen a growing interest in observability to understand the
health of systems. Observability entails collecting logs, metrics, and traces
for analysis. Traffic management came third (62%). This is a key consideration given the complexity of cloud native environments that a service mesh is expected to help mitigate. ... Potential issues here include latency, lack of bandwidth,
security incidents, the heterogeneous composition of the cloud environment,
and changes in architecture or topology. Respondents want a service mesh to
overcome these networking and in-service communications challenges.
Quote for the day:
"To command is to serve : nothing more
and nothing less." -- Andre Marlaux