The Design Patterns for Distributed Systems Handbook
Some people mistake distributed systems for microservices. And it's true –
microservices are a distributed system. But distributed systems do not always
follow the microservice architecture. So with that in mind, let's come up with a
proper definition for distributed systems: A distributed system is a computing
environment in which various components are spread across multiple computers (or
other computing devices) on a network. ... If you decide that you do need a distributed system, then there are some common challenges you will face:

Heterogeneity – Distributed systems allow us to use a wide range of different technologies. The problem lies in keeping communication consistent between all the different services, so it is important to agree on and adopt common standards to streamline the process (a short sketch after this list illustrates the idea).

Scalability – Scaling is no easy task. There are many factors to keep in mind, such as size, geography, and administration, and many possible approaches, each with its own pros and cons.

Openness – Distributed systems are considered open if they can be extended and redeveloped.
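As a minimal illustration of the "common standards" point, here is a hedged Python sketch (the schema, field names, and services are all invented for the example) in which services written in different stacks interoperate only because both validate against one shared wire contract:

```python
# Minimal sketch of the "common standards" idea: services written in
# different stacks can interoperate if they agree on one wire contract.
# The schema, field names, and services here are hypothetical.
import json

ORDER_SCHEMA = {"order_id": str, "amount_cents": int, "currency": str}

def validate(payload: dict) -> dict:
    """Reject messages that do not match the agreed contract."""
    for field, ftype in ORDER_SCHEMA.items():
        if not isinstance(payload.get(field), ftype):
            raise ValueError(f"contract violation on field {field!r}")
    return payload

# A Java billing service and a Python inventory service could both emit
# this same JSON; each side validates against the shared schema.
message = json.dumps({"order_id": "A-42", "amount_cents": 1999, "currency": "EUR"})
print(validate(json.loads(message)))
```

In practice the shared contract would live in a schema registry or an IDL such as Protobuf or OpenAPI, but the principle is the same: the agreement, not the technology, keeps heterogeneous services talking.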
Shadow IT is increasing and so are the associated security risks
Gartner found that business technologists – business unit employees who create and bring in new technologies – are 1.8 times more likely than other employees to behave insecurely, across every behavior measured. “Cloud has made it very
easy for everyone to get the tools they want but the really bad thing is there
is no security review, so it’s creating an extraordinary risk to most
businesses, and many don’t even know it’s happening,” says Candy Alexander, CISO
at NeuEon and president of Information Systems Security Association (ISSA)
International. To minimize the risks of shadow IT, CISOs need to first
understand the scope of the situation within their enterprise. “You have to be
aware of how much it has spread in your company,” says Pierre-Martin Tardif, a
cybersecurity professor at Université de Sherbrooke and a member of the Emerging
Trends Working Group with the professional IT governance association ISACA.
Technologies such as SaaS management tools, data loss prevention solutions, and
scanning capabilities all help identify unsanctioned applications and devices
within the enterprise.
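To make the discovery step concrete, here is a toy Python sketch of the kind of check a SaaS-management or scanning tool automates: comparing domains seen in egress logs against an allowlist of sanctioned vendors. The log format, domains, and allowlist are all hypothetical.

```python
# Toy illustration of shadow-IT discovery: compare domains seen in
# egress logs against an allowlist of sanctioned SaaS vendors.
# The log lines and domain lists are made up for the example.
from collections import Counter

SANCTIONED = {"salesforce.com", "office365.com", "slack.com"}

def unsanctioned_domains(log_lines):
    hits = Counter()
    for line in log_lines:
        domain = line.split()[-1].lower()            # assume domain is the last field
        root = ".".join(domain.rsplit(".", 2)[-2:])  # crude eTLD+1 approximation
        if root not in SANCTIONED:
            hits[root] += 1
    return hits

logs = [
    "2024-05-01T09:12:03 10.0.0.7 GET api.slack.com",
    "2024-05-01T09:12:09 10.0.0.9 GET files.dropbox.com",
    "2024-05-01T09:13:44 10.0.0.9 GET app.notion.so",
]
print(unsanctioned_domains(logs).most_common())
```

A real deployment would feed proxy or DNS telemetry into a purpose-built tool rather than a script, but the output is the same: a ranked list of unsanctioned services to investigate.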
Worker v bot: Humans are winning for now
Ethical and legislative concerns aside, what the average worker wants to know is
if they’ll still have a job in a few years’ time. It’s not a new concern: in
fact, jobs are lost to technological advancements all the time. A century ago,
most of the world’s population was employed in farming, for example.
Professional services company Accenture asserts that 40% of all working hours
could be impacted by generative AI tools — primarily because language tasks already account for just under two-thirds of employees’ total working time. In
The World Economic Forum’s (WEF) Future of Jobs Report 2023, jobs such as
clerical or secretarial roles, including bank tellers and data entry clerks, are
reported as likely to decline. Some legal roles, like paralegals and legal
assistants, may also be affected, according to a recent Goldman Sachs report.
... Customer service roles are also increasingly being replaced by chatbots.
While chatbots can be helpful in automating customer service scenarios, not
everyone is convinced. Sales-as-a-Service company Feel offers, among other
services, actual live sales reps to chat with online shoppers.
The Future of Continuous Testing in CI/CD
Continuous testing is rapidly evolving to meet the needs of modern software
development practices, with new trends emerging to address the challenges
development teams face. Three key trends currently gaining traction in
continuous testing are cloud-based testing, shift-left testing and security
testing. These trends are driven by the need to increase efficiency and speed
in software development while ensuring the highest quality and security
levels. Let’s take a closer look at these trends.

Cloud-Based Testing: Continuous testing is deployed through cloud-based computing, which provides multiple benefits like ease of deployment, mobile accessibility and quick setup time. Businesses are adopting cloud-based services for their availability, flexibility and cost-effectiveness. Many cloud-based testing platforms also require little coding or setup, which makes them a popular choice. ...

Shift-Left Testing: Shift-left testing moves testing earlier in the development cycle rather than waiting until later stages, such as system or acceptance testing.
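As a small illustration of shift-left in practice, the sketch below (the function and its tests are invented for the example) shows the kind of fast unit test that runs at commit time, long before slower system or acceptance suites:

```python
# Sketch of the shift-left idea: cheap checks run first, on every change,
# so defects surface minutes after they are written instead of at the
# system-test stage. The function and tests are illustrative only.
import unittest

def apply_discount(price: float, pct: float) -> float:
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

class DiscountTests(unittest.TestCase):
    # Unit tests like these run in seconds at commit time ("left"),
    # long before slower integration or acceptance suites.
    def test_basic(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_bad_pct(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Wiring a suite like this into a pre-commit hook or the first pipeline stage is what moves the feedback loop left.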
IT is driving new enterprise sustainability efforts
There’s an additional sustainability benefit to modernizing applications, says
Patel at Capgemini. “Certain applications are written in a way that consumes
more energy.” Digital assessments can help measure the carbon footprint of
internally developed apps, she says. Modern application design is key to using
the cloud efficiently. At Choice Hotels, many components now run as services
that can be configured to automatically shut down during off hours. “Some run
as micro processes when called. We’re using serverless technologies and spot
instances in the AWS world, which are more efficient, and we’re building
systems that can handle it when those disappear,” Kirkland says. “Every
digital interaction has a carbon price, so figure out how to streamline that,”
advises Patel. This includes business process reengineering, as well as
addressing data storage and retention policies. For example, Capgemini engages
employees in sustainable IT by holding regular “digital cleaning days” that
include deleting or archiving email messages and cleaning up collaborative
workspaces.
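As an illustration of the off-hours shutdown pattern (not Choice Hotels’ actual implementation), here is a hedged sketch using boto3. The Schedule=office-hours tag is an assumed convention, and the script presumes AWS credentials and permissions are already configured:

```python
# A minimal sketch (not any one company's actual setup) of the off-hours
# shutdown pattern: stop running instances carrying a hypothetical
# Schedule=office-hours tag. Requires boto3 and AWS credentials.
import boto3

def stop_office_hours_fleet(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = [i["InstanceId"]
           for r in resp["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)  # invoked from a nightly scheduler
    return ids

if __name__ == "__main__":
    print("stopped:", stop_office_hours_fleet())
```

A real deployment would pair this with a morning start job and exclude anything tagged as production.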
SRE vs. DevOps? Successful Platform Engineering Needs Both
The complexity of managing today’s cloud native applications drains DevOps
teams. Building and operating modern applications requires significant amounts
of infrastructure and an entire portfolio of diverse tools. When individual
developers or teams choose to use different tools and processes to work on an
application, this tooling inconsistency and incompatibility causes delays and
errors. To overcome this, platform engineering teams provide a standardized
set of tools and infrastructure that all project developers can use to build
and deploy the app more easily. Additionally, scaling applications is
difficult and time-consuming, especially when traffic and usage patterns
change over time. Platform engineering teams address this with their golden
paths — or environments designed to scale quickly and easily — and logical
application configuration. Platform engineering also helps with reliability.
Development teams make more reliable software when they build on shared tools and infrastructure that have been tested for interoperability and designed for reliability and availability.
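To sketch what a golden path can look like at its simplest, the example below (all names and defaults are hypothetical) models a blessed service template in which the platform team supplies the build, deploy, and scaling defaults and developers supply only a name:

```python
# Illustrative sketch of a "golden path": a platform team ships one
# blessed service template so every team gets the same build, deploy,
# and scaling defaults. All names and defaults here are hypothetical.
from dataclasses import dataclass

@dataclass
class GoldenPathService:
    name: str
    language: str = "python"                 # blessed runtime
    ci_steps: tuple = ("lint", "test", "scan", "build", "deploy")
    min_replicas: int = 2                    # reliability baseline
    max_replicas: int = 20                   # scale headroom by default
    dashboards: tuple = ("latency", "errors", "saturation")

def scaffold(name: str) -> GoldenPathService:
    """Developers supply only the name; the platform supplies the rest."""
    return GoldenPathService(name=name)

print(scaffold("checkout-api"))
```

Real platform teams express this as project templates, Backstage scaffolds, or Terraform modules; the point is that the defaults are chosen once, tested once, and inherited everywhere.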
Zero Trust Model: The Best Way to Build a Robust Data Backup Strategy
A zero trust model changes your primary security principle from the age-old
axiom “trust but verify” to “never trust; always verify.” Zero trust is a
security concept that assumes any user, device, or application seeking access
to a network is not to be automatically trusted, even if it is within the
network perimeter. Instead, zero trust requires verification of every request
for access, using a variety of security technologies and techniques such as
multifactor authentication (MFA), least-privilege access, and continuous
monitoring. A zero trust environment provides many benefits, though it is not
without its flaws. Trust brokers are the central component of zero trust
architecture. They authenticate users’ credentials and provide access to all
other applications and services, which means they have the potential to become
a single point of failure. Additionally, some multifactor authentication
processes might make users wait a few minutes before allowing them to log in, which can hinder employee productivity. The location of trust brokers
can also create latency issues for users.
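A minimal sketch of the “never trust; always verify” flow follows, with invented policy and field names: every request is checked for MFA, device posture, and least-privilege scope, and network location plays no part in the decision.

```python
# Minimal sketch of per-request verification (all names and the policy
# table are hypothetical): every call is checked for MFA, device
# posture, and least-privilege scope; network location grants nothing.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_compliant: bool
    scopes: frozenset
    resource: str

POLICY = {"backup:restore": "backup-admins"}   # resource -> required scope

def authorize(req: Request) -> bool:
    if not (req.mfa_passed and req.device_compliant):
        return False                           # verify user AND device, every time
    required = POLICY.get(req.resource)
    return required is not None and required in req.scopes   # least privilege

req = Request("dana", True, True, frozenset({"backup-admins"}), "backup:restore")
print(authorize(req))   # True only because every single check passed
```

In a production zero trust architecture this logic lives in the trust broker, which is exactly why its availability and placement matter so much.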
How to Manage Data as a Product
The way most organizations go about managing data is out of step with the way
people want to use data, says Wim Stoop, senior director of product marketing
at Cloudera. “If you want to get your teeth fixed or your appendix out you go
to an expert rather than a generalist,” he says. “The same should apply to the
data that people in organizations need.” However, most enterprises treat data
as a centralized and protected asset. It’s locked up in production
applications, data warehouses, and data lakes that are administered by a small
cadre of technical specialists. Access is tightly controlled, and few people are aware of the data the organization possesses outside of their immediate purview. The drive towards organizational agility has helped fuel interest in
the data mesh. “Individual teams that are responsible for data can iterate
faster in a well-defined construct,” Stoop says. “The shift to treating data
as a product breaks down silos and gives data longevity because it’s clearly
defined, supported and maintained by the employees that know it
intimately.”
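To make the “data as a product” contract concrete, here is an illustrative Python sketch (field names and SLA values are invented) in which the owning team publishes ownership, schema, and freshness guarantees alongside the data itself:

```python
# Sketch of a data-product contract in the data-mesh sense: the owning
# team publishes ownership, schema, and freshness guarantees alongside
# the data. Field names and SLA values are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProduct:
    name: str
    owner_team: str              # the domain experts who "know it intimately"
    schema: dict                 # column -> type: the published contract
    freshness_sla_hours: int     # consumers can hold the owner to this
    contact: str

bookings = DataProduct(
    name="nightly_bookings",
    owner_team="reservations-domain",
    schema={"booking_id": "string", "checkin_date": "date", "amount": "decimal"},
    freshness_sla_hours=24,
    contact="#reservations-data",
)
print(f"{bookings.name} is owned by {bookings.owner_team}")
```

The contract is the product: consumers go to the owning domain team, not a central gatekeeper, which is the shift Stoop describes.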
Preparing for the Worst: Essential IT Crisis Preparation Steps
Crisis preparation begins with planning -- outlining the steps that must be
taken in the event of a crisis, as well as procedures for data backup and
recovery, network security, communication with stakeholders, and employee
safety, says O’Brien, who founded the Yale Law School Privacy Lab.
“Every organization should conduct regular drills and simulations to test the
effectiveness of their plan,” he adds. Every enterprise should appoint an
overall crisis management coordinator, an individual responsible for ensuring
that there’s a coordinated, updated, and rehearsed crisis management plan,
Glair advises. He also recommends creating a crisis management chain of
authority that’s ready to jump into action as soon as a crisis event occurs.
The crisis management coordinator may report directly to any of several
enterprise departments, including risk management, legal, operations, or even
the CIO or CFO. “The reporting location is not as important as the authority
the coordinator is granted to prepare and manage the crisis management
strategy,” he says.
How to make developers love security
Developers hate being slowed down or interrupted. Unfortunately, legacy
security testing systems often have long feedback loops that negatively impact
developer velocity. Whether it’s complex automated scans or asking the
security team to complete manual reviews, these activities are a source of
friction. They increase the delay between making a change and verifying its
effect. Security suites with many different tools can result in context
switching and multi-step mitigations. Additionally, tools aren’t always equipped to find problems in older code. Only scanning the new changes
in your pipeline maximizes performance, but this can allow oversights to occur
as more vulnerabilities become known. Similarly, developers have to
refamiliarize themselves with old work whenever a vulnerability impacts it.
This is a cognitive burden that further increases the fix’s overall time and
effort. All too often, these problems add up to an inefficient security model
that prevents timely patches and consumes developers’ productive
hours.
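The sketch below illustrates the trade-off just described, under stated assumptions: it scans only the files changed in a push for fast feedback, while a scheduled full scan rechecks older code as new vulnerabilities become known. Here run_scanner is a stand-in for whatever SAST tool a team actually uses.

```python
# Sketch of the speed-vs-coverage trade-off: scan only files changed in
# this push (fast feedback), but fall back to a full scan on a schedule
# so older code is rechecked as new CVEs land. `run_scanner` is a
# placeholder for a real scanning tool; assumes a git checkout.
import subprocess

def changed_files(base: str = "origin/main") -> list[str]:
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def run_scanner(paths: list[str]) -> None:
    print(f"scanning {len(paths)} path(s)...")   # stand-in for a real SAST tool

def ci_scan(full: bool = False) -> None:
    run_scanner(["."] if full else changed_files())

if __name__ == "__main__":
    ci_scan(full=False)   # per-commit; a nightly job would pass full=True
```

Splitting the work this way keeps the per-commit loop short without letting old code silently rot.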
Quote for the day:
"Incompetence annoys me.
Overconfidence terrifies me." -- Malcolm Gladwell