The Challenge of Continuous Delivery in Distributed Environments
Most teams have insufficient insight into the current environment at each
endpoint; therefore, failures take time to investigate, and often unique tweaks
and fixes are needed to handle each change in the state of the distributed
system. That’s why DevOps engineers are doing so much hand-coding. Engineers are
finding they must stop the normal CI/CD flow, investigate what part of an
endpoint infrastructure is not running, and then make manual tweaks to the
software and deployment code to compensate for the change. Here’s the thing:
there will always be changes to the system. Infrastructure environments never
stay static, and therefore a lot of “continuous deployment” systems aren’t
really continuous at all. Because DevOps engineers don’t always know the state
of each endpoint environment in a distributed system, the CI/CD pipeline can’t
possibly be adaptive enough. In the end, the process of ensuring continuous
deployment in distributed environments can be extremely burdensome and
complicated, slowing the pace of business innovation.
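To make the gap concrete, here is a minimal sketch, with hypothetical hostnames and a deliberately simple reachability check, of the kind of pre-deployment gate that probes endpoint state and halts the pipeline instead of deploying blindly into an unknown environment:

```python
# Hypothetical pre-deployment gate: probe each endpoint's state before the
# pipeline continues, rather than discovering drift after a failed deploy.
import socket

ENDPOINTS = ["edge-01.example.internal", "edge-02.example.internal"]  # hypothetical hosts
REQUIRED_PORT = 22  # e.g. SSH must be reachable for the deploy step to work


def endpoint_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def preflight(endpoints: list[str]) -> list[str]:
    """Return the endpoints that are not ready, so the pipeline can stop early."""
    return [host for host in endpoints if not endpoint_reachable(host, REQUIRED_PORT)]


if __name__ == "__main__":
    unhealthy = preflight(ENDPOINTS)
    if unhealthy:
        raise SystemExit(f"Deployment halted; endpoints not ready: {unhealthy}")
    print("All endpoints reachable; proceeding with deployment.")
```

A real pipeline would check far more than one TCP port, but the principle is the same: the pipeline can only adapt to state it actually observes.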
Confessions of a CTO
To truly ensure the organization’s stability, CTOs need to pay as much attention
to the seemingly smaller tasks as they do to the big transformational changes. This
starts with a rigorous, diligent process of understanding where the business is
today and looking in depth for any weak spots. To do this, CTOs need to look
towards the specialist solutions provided by the right vendor. Adopting a
configuration management tool gives CTOs oversight of the whole IT estate,
identifying and tracking changes against a defined set of policies and flagging
any deviations for rectification. Basing those policies on the Center for
Internet Security (CIS) guidelines gives CTOs an established standard of
security measures to work with, along with the visibility and control to make
required changes and pursue a continuous improvement strategy towards a
best-practice configuration. For critical legacy applications that need to move
successfully to a newer operating system version, application compatibility
packaging can allow them to be transplanted to an on-prem, hybrid or cloud
system without any code modifications.
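As a rough illustration of the policy-checking idea above, here is a minimal sketch assuming a small hypothetical policy loosely in the spirit of CIS-style hardening guidance (the settings and values are illustrative, not actual CIS benchmark controls): an endpoint's reported configuration is compared against the policy and deviations are flagged for rectification.

```python
# Minimal sketch: compare an endpoint's reported configuration against a
# defined policy and flag deviations for rectification. The settings and
# values below are illustrative, not actual CIS benchmark controls.
POLICY = {
    "password_min_length": 14,
    "ssh_root_login": "disabled",
    "audit_logging": "enabled",
}


def find_deviations(current_config: dict) -> dict:
    """Return each policy item whose current value does not match the policy."""
    deviations = {}
    for setting, expected in POLICY.items():
        actual = current_config.get(setting)
        if actual != expected:
            deviations[setting] = {"expected": expected, "actual": actual}
    return deviations


if __name__ == "__main__":
    reported = {"password_min_length": 8, "ssh_root_login": "disabled"}
    for setting, detail in find_deviations(reported).items():
        print(f"DEVIATION {setting}: expected {detail['expected']!r}, got {detail['actual']!r}")
```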
Better and faster: Organizational agility for the public sector
Despite the promise agile methodologies hold for the public sector, certain
characteristics can make government entities a difficult fit for the agile
model. Government budgets tend to follow longer time horizons—often
annual—than agile cadences; internal competition between agencies for a fixed
pool of funding can discourage collaboration across government;
and because the returns on investments in change are often dispersed within
the government and to the public, it can be difficult to motivate employees to
work for an upside they cannot necessarily see or experience. The public
sector’s hierarchical structure—and its accompanying culture and ways of
working—can also make implementing agile methodologies, such as flat
organizations and fast iterations, difficult. ... Agile operating models
configure teams based on facilitating outcomes instead of on function and
expertise. This orientation can boost productivity and engagement by limiting
handoffs between functional silos and focusing a wider array of skills on a
shared objective.
Software Architecture: It Might Not Be What You Think It Is
Architecting modern software applications is a fundamentally explorative
activity. Teams building today’s applications encounter new challenges every
day: unprecedented technical challenges, as well as the need to provide
customers with new ways of solving new and different problems. This continuous exploration
means that the architecture can’t be determined up-front, based on past
experiences; teams have to find new ways of satisfying quality requirements.
... Some decisions will, inevitably and unavoidably, create technical debt;
for example, the decision to meet reliability goals by using a SQL database
has some side effects on technical debt (see Figure 1). The now long-past “Y2K
problem” stemmed from a conscious decision developers made at the time to
reduce data storage, memory use, and processing time by not storing century
data as part of standard date representations. The problem was that they didn’t
expect the applications to remain in use long after those constraints had
become irrelevant.
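A small worked example of the trade-off described above: with only two digits stored, "25" is ambiguous between 1925 and 2025, and any later fix has to guess the century with an arbitrary pivot window (the pivot below is an assumption for illustration).

```python
# Illustration of the two-digit-year trade-off behind the Y2K problem:
# storing "YY" saves space, but the century is lost and must be guessed back.
PIVOT = 70  # arbitrary cutoff for this example: 00-69 -> 2000s, 70-99 -> 1900s


def expand_two_digit_year(yy: int) -> int:
    """Guess the full year from a two-digit year using a pivot window."""
    return 1900 + yy if yy >= PIVOT else 2000 + yy


print(expand_two_digit_year(99))  # 1999
print(expand_two_digit_year(25))  # 2025 -- but a 1925 birth date would be mis-expanded
```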
Decentralizing the grid: Operators test blockchain solutions
Digital identity enables greater cybersecurity and data ownership. While
this use case speaks volumes about how the future of the energy market may
take shape, the application of DIDs ultimately enables better cybersecurity
for grid operators. For instance, Morris explained that most grid operators
still take a traditional Web1 or Web2 approach, using a centralized database
into which information about sensors or hardware located on utilities within
their network is entered manually. Yet such an approach could allow grid
operators to collect user data and even gain control of those sensors. “This
level of centralization is a cybersecurity risk, which is why our solution
with Stedin also proves to be a cybersecurity application,” Morris remarked.
Jongepier added that Stedin was indeed looking to raise the bar on its
cybersecurity. “Blockchain is effective for this because it provides the
ground rules for utilizing decentralized identifiers for Stedin’s IoT assets,
serving as a solution for raising the bar on security.”
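For readers unfamiliar with DIDs, the sketch below shows what a W3C-style DID document for a hypothetical grid sensor might look like, expressed as a plain Python dict; it is illustrative only and not the actual scheme described above.

```python
# Illustrative only (not the actual scheme described above): a W3C-style DID
# document for a hypothetical grid sensor, expressed as a plain Python dict.
# The asset carries its own identifier and public key rather than existing as
# a manually entered row in a central database.
import json

sensor_did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:sensor-7f3a",  # hypothetical DID for one IoT asset
    "verificationMethod": [{
        "id": "did:example:sensor-7f3a#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:sensor-7f3a",
        "publicKeyMultibase": "z6Mk...",  # placeholder public key
    }],
    "authentication": ["did:example:sensor-7f3a#key-1"],
}

print(json.dumps(sensor_did_document, indent=2))
```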
Neglecting The IAM Process Is Fighting A Losing Battle To Achieve Operational Excellence
The IAM process is a critical base for secure, cost-effective and efficient
business operations. The foundation of IAM is comprised of the process first,
followed by people, then technology. Zero trust has gained sizeable traction,
but most do not realize that the identity process plays a vital role in getting
that model off the ground. There is no zero-trust model
without a rock-solid identity process. Complex access permissions, loose
processes within access management and insider threats are the most common
reasons for a breach. A study sponsored by the Identity Defined Security
Alliance found that 99% of security and identity professionals believed that
identity-related breaches were preventable. And yes, they are. Can you imagine
not setting up a process to revoke the access of a disgruntled, or even simply
a gullible, employee immediately after their employment is
discontinued? The longer it takes to revoke access because there is no set
protocol or process, the higher the chances of the organization being
exposed.
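A minimal sketch of the kind of set protocol the author is calling for, with hypothetical system names and a placeholder revocation call: when an employee is marked as terminated, every linked entitlement is revoked immediately rather than waiting on ad hoc tickets.

```python
# Minimal sketch of a set revocation protocol: when an employee is marked as
# terminated, every linked entitlement is revoked immediately. The system
# names and revoke_access() call are hypothetical placeholders.
from datetime import datetime, timezone

ACCESS_GRANTS = {
    "jdoe": ["vpn", "crm", "billing-db"],  # hypothetical entitlements
}


def revoke_access(user: str, system: str) -> None:
    """Stand-in for the real deprovisioning call to each system's API."""
    print(f"{datetime.now(timezone.utc).isoformat()} revoked {user} on {system}")


def offboard(user: str) -> None:
    """Revoke every entitlement for a terminated user, then clear the record."""
    for system in ACCESS_GRANTS.pop(user, []):
        revoke_access(user, system)


if __name__ == "__main__":
    offboard("jdoe")  # triggered by the HR termination event, not by a ticket
```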
Edge computing moves toward full autonomy
Tung uses the term "phygital" to describe the result when digital practices
are applied to physical experiences, such as in the case of autonomous
management of edge data centers. "We see creating highly personalized and
adaptive phygital experiences as the ultimate goal," she notes. "In a phygital
world, anyone can imagine an experience, build it and scale it." In an edge
computing environment that integrates digital processes and physical devices,
hands-on network management is significantly reduced or eliminated to the
point where network failures and downtime are automatically detected and
resolved, and configurations are applied consistently across the
infrastructure, making scaling simpler and faster. Automatic data quality
control is another potential benefit. "This involves a combination of sensor
data, edge analytics, or natural language processing (NLP) to control the
system and to deliver data on-site," Gallina says. Yet another way an
autonomous edge environment can benefit enterprises is with “zero touch”
remote hardware provisioning at scale, with the OS and system software
downloaded automatically from the cloud.
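As a rough sketch of what "zero touch" provisioning can look like under the hood (the URL, manifest fields, and serial number are hypothetical), a minimal agent might fetch a provisioning manifest keyed to the device's serial number and apply the OS image and configuration it names:

```python
# Hedged sketch of "zero touch" provisioning: a minimal agent fetches a
# provisioning manifest keyed to the device's serial number and applies the
# OS image and config it names. URL, fields, and serial are hypothetical.
import json
import urllib.request

PROVISIONING_URL = "https://provisioning.example.com/manifest"  # hypothetical endpoint


def fetch_manifest(serial_number: str) -> dict:
    """Download the manifest describing which OS image and config to apply."""
    with urllib.request.urlopen(f"{PROVISIONING_URL}/{serial_number}") as response:
        return json.load(response)


def apply_manifest(manifest: dict) -> None:
    """Stand-in for installing the OS image and system software it names."""
    print(f"Installing image {manifest['os_image']} with config {manifest['config_id']}")


if __name__ == "__main__":
    apply_manifest(fetch_manifest("SN-0001"))  # hypothetical device serial
```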
Who is responsible for Cloud Native Security?
In the past, application developers and infrastructure staff worked in
separate arenas. Sparring between the two was all too common. Today that
boundary is blurred, with the work being shared between the various
stakeholders. With respect to security, this is referred to as “shifting
left”; that is, moving security testing efforts earlier – from operations to
the development realm. This emergent approach puts increasing security
responsibility on developers. It evolved when companies realized that code
could no longer wait to run in a production environment before being tested
for weaknesses. Rather, it’s far more efficient to test it earlier during
development. The multiplicity of security roles is another aspect of this
process. AppSec, DevSecOps, and product security all share responsibility for
alerts, control, and resolution of various threats that target enterprise
applications. Such significant changes don’t make application development
easier for organizations. In today's more agile development models, where
speed and automation rule, developers are under pressure to build and ship
applications faster than ever.
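One concrete, if simplified, example of shifting security left is a check that runs at commit or merge time rather than in production, such as the hypothetical secret scanner sketched below; the patterns are illustrative and far from exhaustive.

```python
# Simplified "shift left" check: scan files for hard-coded secrets before the
# code ever reaches production. The patterns are illustrative, not exhaustive.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(password|secret)\s*=\s*['\"][^'\"]+['\"]"),
]


def scan_file(path: str) -> list[str]:
    """Return a finding for each line that matches any secret-like pattern."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for lineno, line in enumerate(handle, start=1):
            if any(pattern.search(line) for pattern in SECRET_PATTERNS):
                findings.append(f"{path}:{lineno}: possible hard-coded secret")
    return findings


if __name__ == "__main__":
    all_findings = [hit for path in sys.argv[1:] for hit in scan_file(path)]
    print("\n".join(all_findings))
    sys.exit(1 if all_findings else 0)  # non-zero exit fails the pipeline step
```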
Exploring the evolving security challenges within the metaverse
Much like the multiple national currencies that already exist in the real
world, the metaverse will use its own currency or cryptocurrency. While crypto
as a digital currency is set to develop over time, it could also lead to
a significant increase in “money laundering” attempts within the metaverse’s
virtual economy. As these digital currencies are set to evolve, uncertainties
surrounding their transferability from one metaverse to another and a lack of
provision for secure exchanges between buyers and sellers could lead to the
exploitation of the newly developed financial system by threat actors. ... At
present, the metaverse poses significant security challenges as most of its
users value interconnectivity and their user experience over intrusive online
safety measures. This could exacerbate the security concerns or privacy issues
that already exist within social media. Considering how difficult it already is
to govern or control web domains that extend beyond traditional national
borders, the metaverse could also present itself as an unregulated environment
for cyber criminals.
Meet the four forces shaping your workforce strategy
Four forces have shaped workforce strategies at key moments throughout human
history—and they’re at it again. By understanding how the forces have operated
in the past, you can better prepare your contemporary workforce to weather
tomorrow’s challenges. ... Scarcity also emerges from technological shifts.
For example, automation is creating redundancies in some fields, while a
growing need for workers in advanced and emerging technologies is generating
shortages in others. Demographic trends also help determine how scarce or
plentiful workers are—and have huge economic and social implications. But
scarcity isn’t just about head count or even dealing with the unprecedented
challenges of the “great resignation”— it’s also about the abundance of skills
your people have. For example, your company may have the right experts and
specialists in place, and plenty of workers to fill vital roles. But you may
still face a scarcity problem if your workforce lacks the broad-based skills
it will need to succeed. The company may have a deficit in leadership or
management skills, for example, or decision-making skills, project management
skills, or even interpersonal skills.
Quote for the day:
"Leaders respond & change; the
rest quit and blame." -- Orrin Woodward