Scrumfall: When Agile becomes Waterfall by another name
Agile is supposed to be centered on people, not processes — on people
collaborating closely to solve problems together in a culture of autonomy and
mutual respect, a sustainable culture that values the health, growth, and
satisfaction of every individual. There is a faith embedded in the manifesto
that this approach to software engineering is both necessary and superior to
older models, such as Waterfall. Necessary because of the inherent complexity
and indeterminacy of software engineering. Superior because it leverages the
full collaborative might of everyone’s intelligence. But this is secondary to
Agile’s most fundamental idea: We value people. It’s a rare employer today who
doesn’t pay lip service to that idea. “We value our people.” But many businesses
instead prioritize controlling their commodity human resources. This now being
unacceptable to say out loud — in software engineering circles as in much of
modern America — many companies have dressed it up in Scrum’s clothing, claiming
Agile ideology while reasserting Waterfall’s hierarchical micromanagement.
Nerd Cells, ‘Super-Calculating’ Network in the Human Brain Discovered
After five years of research into the theory of the continuous attractor
network, or CAN, Charlotte Boccara and her group of scientists at the Institute
of Basic Medical Sciences at the University of Oslo, now at the Center for
Molecular Medicine Norway (NCMM), have made a breakthrough. “We are the first to
clearly establish that the human brain actually contains such ‘nerd cells’ or
‘super-calculators’ put forward by the CAN theory. We found nerve cells that
code for speed, position and direction all at once,” says Boccara. ... The CAN
theory hypothesizes that a hidden layer of nerve cells perform complex math and
compile vast amounts of information about speed, position and direction, just as
NASA’s scientists do when they are adjusting a rocket trajectory. “Previously,
the existence of the hidden layer was only a theory for which no clear proof
existed. Now we have succeeded in finding robust evidence for the actual
existence of such a ‘nerd center’ in the brain, and as such we fill in a piece
of the puzzle that was missing,” says the researcher.
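To make concrete what it means for cells to "code for speed, position and direction all at once," below is a minimal Python sketch of path integration (dead reckoning): an accumulator turns a stream of speed and heading samples into an estimate of position, which is the kind of computation the CAN theory attributes to its hidden layer. This is an illustrative toy, not the model used in the study.

```python
import math

def integrate_path(samples, dt=0.1):
    """Toy path integration: accumulate position from speed and heading.

    samples: iterable of (speed, heading_radians) pairs sampled every dt seconds.
    Returns the estimated (x, y) position relative to the start.
    """
    x, y = 0.0, 0.0
    for speed, heading in samples:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# Walk forward at 1 m/s for 1 s, then turn 90 degrees and walk for another 1 s.
trajectory = [(1.0, 0.0)] * 10 + [(1.0, math.pi / 2)] * 10
print(integrate_path(trajectory))  # roughly (1.0, 1.0)
```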
Data Center Sustainability Using Digital Twins And Seagate Data Center Sustainability
Rozmanith said that Dassault’s digital twin simulation of data center
construction reduced time to market by 15%. He also said that the modular
approach reduces
design time by 20%. Their overall goal is to shorten data center stand-up time
by 50% and reduce the waste commonly generated in data center construction. Even
after construction, digital twins for the operation of a data center will be
useful for evaluating and planning future upgrades and data center changes. Some
data center companies, such as Apple, have designed their data centers to be 100%
sustainable for several years. Seagate recently announced that it would power
its global footprint with 100% renewable energy by 2030 and achieve carbon
neutrality by 2040. These goals were announced in conjunction with the release
of the company’s 16th Global Citizenship Annual Report. That report included a
look at the company’s annual progress towards meeting emission reduction
targets, product stewardship, talent enablement, diversity goals, labor
standards, fair trade, supply chain, and more.
Industry 4.0 – why smart manufacturing is moving closer to the edge
With Industry 4.0, new technologies are being built into the factory to drive
increased automation. All of this can lead to smart factories that, for
instance, benefit from predictive maintenance, as well as improved quality
assurance and worker safety. At the same time, existing data challenges can be
overcome. Companies operating across multiple locations often struggle to remove
data silos and bring IT and OT (operational technology) together. An edge based
on an open hybrid infrastructure can help them do this, as well as solving other
problems. These include reducing latency by supporting a horizontal data
framework across the organization's entire IT infrastructure, instead of relying
on data being funneled through a centralized network that can cause bottlenecks.
Edge computing aligned with open hybrid cloud services can also reduce the
mismatched and inefficient hardware that has gradually built up, often in
tight remote spaces.
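As a concrete illustration of the kind of workload that benefits from moving to the edge, here is a minimal Python sketch of predictive maintenance: vibration readings are checked against a rolling statistical baseline on the factory floor itself, so only alerts, rather than the raw sensor stream, need to leave the site. The sensor values and thresholds are invented for illustration.

```python
from collections import deque
from statistics import mean, stdev

class VibrationMonitor:
    """Flag readings that deviate sharply from the recent baseline."""

    def __init__(self, window=50, z_threshold=3.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value):
        """Return True if the reading looks anomalous, then record it."""
        anomalous = False
        if len(self.readings) >= 10:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.readings.append(value)
        return anomalous

# Simulated readings from a machine on the factory floor (made-up values).
monitor = VibrationMonitor()
for v in [1.0, 1.1, 0.9, 1.05, 1.0, 0.95, 1.1, 1.0, 0.98, 1.02, 5.0]:
    if monitor.observe(v):
        print(f"Possible fault: vibration reading {v}")
```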
Digital twins: The art of the possible in product development and beyond
Digital twins are increasingly being used to improve future product
generations. An electric-vehicle (EV) manufacturer, for example, uses live
data from more than 80 sensors to track energy consumption under different
driving regimes and in varying weather conditions. Analysis of that data
allows it to upgrade its vehicle control software, with some updates
introduced into new vehicles and others delivered over the air to existing
customers. Developers of autonomous-driving systems, meanwhile, are
increasingly developing their technology in virtual environments. The training
and validation of algorithms in a simulated environment is safer and cheaper
than real-world tests. Moreover, the ability to run numerous simulations in
parallel has accelerated the testing process by more than 10,000 times. ...
The adoption of digital twins is currently gaining momentum across industries,
as companies aim to reap the benefits of various types of digital twins. Given
the many different shapes and forms of digital twins, and the different
starting points of each organization, a clear strategy is needed to help
prioritize where to focus digital-twin development and what steps to take to
capture the most value.
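To illustrate the "numerous simulations in parallel" point, below is a minimal Python sketch that evaluates a toy energy-consumption model of an EV across many simulated driving scenarios concurrently. The model and its parameters are invented for illustration; a real digital twin would be calibrated against the live sensor data described above.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def simulate_energy_use(scenario):
    """Toy model: energy (kWh) for a 100 km trip under one scenario."""
    speed_kmh, ambient_c = scenario
    base = 15.0                                  # baseline kWh per 100 km (invented)
    drag = 0.002 * speed_kmh ** 2                # aerodynamic penalty grows with speed
    heating = max(0.0, 20.0 - ambient_c) * 0.2   # cabin heating in cold weather
    return base + drag + heating

if __name__ == "__main__":
    random.seed(0)
    # Random combinations of average speed (km/h) and ambient temperature (°C).
    scenarios = [(random.uniform(30, 130), random.uniform(-10, 35))
                 for _ in range(10_000)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate_energy_use, scenarios, chunksize=500))
    print(f"worst-case scenario: {max(results):.1f} kWh / 100 km")
```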
What Is Cloud-Native?
Cloud-native, according to most definitions, is an approach to software
design, implementation, and deployment that aims to take full advantage of
cloud-based services and delivery models. Cloud-native applications also
typically operate using a distributed architecture. That means that
application functionality is broken into multiple services, which are then
spread across a hosting environment instead of being consolidated on a single
server. Somewhat confusingly, cloud-native applications don't necessarily run
in the cloud. It's possible to build an application according to cloud-native
principles and deploy it on-premises using a platform such as Kubernetes,
which mimics the distributed, service-based delivery model of cloud
environments. Nonetheless, most cloud-native applications run in the cloud.
And any application designed according to cloud-native principles is certainly
capable of running in the cloud. ... Cloud-native is a high-level concept
rather than a specific type of application architecture, design, or delivery
process. Thus, there are multiple ways to create cloud-native software and a
variety of tools that can help do it.
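As a minimal sketch of "functionality broken into multiple services," the Python below (standard library only) stands up two tiny HTTP services, one of which calls the other over the network instead of importing its code. The service names, ports, and payloads are arbitrary; in a cloud-native deployment each service would typically run in its own container and be scheduled by a platform such as Kubernetes.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Service A: owns price data and exposes it over HTTP.
class PriceService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "abc-123", "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Service B: builds a quote by calling service A over the network,
# rather than bundling both functions into one process.
class QuoteService(BaseHTTPRequestHandler):
    def do_GET(self):
        with urlopen("http://localhost:8001/") as resp:
            price = json.load(resp)["price"]
        body = json.dumps({"quote": round(price * 1.2, 2)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    price_srv = HTTPServer(("localhost", 8001), PriceService)
    threading.Thread(target=price_srv.serve_forever, daemon=True).start()
    quote_srv = HTTPServer(("localhost", 8002), QuoteService)
    print("quote service on http://localhost:8002/")
    quote_srv.serve_forever()
```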
Predictive Analytics Could Very Well Be The Future Of Cybersecurity
Predictive analytics is gaining momentum in every industry, enabling
organizations to streamline the way they do business. This branch of advanced
analytics is concerned with the use of data, statistical algorithms, and
machine learning to determine future performance. When it comes to data
breaches, predictive analytics is making waves. Enterprises with a limited
security staff can stay safe from intricate attacks. Predictive analytics
tells them where threat actors tried to attack in the past, helping them
anticipate where attackers will strike next. Good security starts with knowing
which attacks to expect. The conventional approach to fighting cybercrime is to
collect data about malware, data breaches, phishing campaigns, and so on, and to
extract relevant information from that data in the form of signatures. A
signature here means a one-of-a-kind arrangement of information that can be used
to identify a cybercriminal’s attempt to exploit a vulnerability in an operating
system or an app. The signatures can be compared against files, network traffic,
and emails that flow in and out of the network to detect abnormalities.
Everyone has distinct usage habits that technology can learn.
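A minimal Python sketch of the signature idea described above: hashes of known-malicious payloads are compared against content flowing through the network, and anything that matches is flagged. The "signature database" here is a single made-up entry, and real engines match far richer patterns than whole-file hashes, so treat this purely as a conceptual example.

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Example signature database: SHA-256 hashes of known-malicious payloads.
# The single entry below is computed from the demo payload, purely for illustration.
DEMO_MALICIOUS_PAYLOAD = b"pretend-this-is-malware"
KNOWN_BAD_HASHES = {sha256_bytes(DEMO_MALICIOUS_PAYLOAD)}

def scan(named_blobs):
    """Yield the names of blobs whose contents match a known-bad signature."""
    for name, data in named_blobs:
        if sha256_bytes(data) in KNOWN_BAD_HASHES:
            yield name

# Hypothetical attachments flowing through a mail gateway.
attachments = [
    ("report.pdf", b"quarterly numbers"),
    ("invoice.exe", DEMO_MALICIOUS_PAYLOAD),
]
for hit in scan(attachments):
    print(f"signature match: {hit}")
```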
A Shift in Computer Vision is Coming
Neuromorphic technologies are those inspired by biological systems, including
the ultimate computer: the brain and its compute elements, the neurons. The
problem is that no one fully understands exactly how neurons work. While we know
that neurons act on incoming electrical signals called spikes, until relatively
recently, researchers characterized neurons as rather sloppy, thinking only the
number of spikes mattered. This hypothesis persisted for decades. More recent
work has proven that the timing of these spikes is absolutely critical, and that
the architecture of the brain creates delays in these spikes to encode
information. Today’s spiking neural networks, which emulate the spike signals
seen in the brain, are simplified versions of the real thing — often binary
representations of spikes. “I receive a 1, I wake up, I compute, I sleep,”
Benosman explained. The reality is much more complex. When a spike arrives, the
neuron starts integrating the value of the spike over time; there is also
leakage from the neuron meaning the result is dynamic. There are also around 50
different types of neurons with 50 different integration profiles.
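The behavior Benosman describes, integrating the value of incoming spikes over time while the potential leaks away, corresponds to the classic leaky integrate-and-fire model. Below is a minimal Python sketch of one such neuron; the time constant, threshold, and spike weights are arbitrary illustrative values, not parameters of any particular neuromorphic chip. The example at the end shows why timing matters: two spike trains with the same number of spikes produce different outcomes.

```python
class LeakyIntegrateAndFireNeuron:
    """Membrane potential integrates weighted input spikes and leaks over time."""

    def __init__(self, tau=20.0, threshold=1.0, dt=1.0):
        self.tau = tau              # leak time constant (ms)
        self.threshold = threshold  # firing threshold
        self.dt = dt                # simulation step (ms)
        self.potential = 0.0

    def step(self, input_spike_weight=0.0):
        """Advance one time step; return True if the neuron fires."""
        # Leak toward the resting potential (0), then integrate the input spike.
        self.potential *= (1.0 - self.dt / self.tau)
        self.potential += input_spike_weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# Two spike trains with identical spike counts but different timing:
# closely spaced spikes push the potential over threshold; spread-out ones leak away.
clustered = [0.4, 0.4, 0.4] + [0.0] * 57
spread = ([0.4] + [0.0] * 19) * 3

neuron = LeakyIntegrateAndFireNeuron()
print(any(neuron.step(w) for w in clustered))  # True: the neuron fires
neuron = LeakyIntegrateAndFireNeuron()
print(any(neuron.step(w) for w in spread))     # False: same spike count, no firing
```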
Implementing a Secure Service Mesh
One of our main goals with using a service mesh was to get Mutual Transport
Layer Security (mTLS) between internal pod services for security. However, using
a service mesh provides many other benefits: it allows workloads to communicate
across multiple Kubernetes clusters or lets 100% bare-metal apps connect to
Kubernetes. It offers tracing and logging around connections between pods, and it
can output connection endpoint health metrics to Prometheus. This diagram shows
what a workload might look like before implementing a service mesh. In the
example on the left, teams are spending time building pipes instead of building
products or services, common functionality is duplicated across services, there
are inconsistent security and observability practices, and there are black-box
implementations with no visibility. On the right, after implementing a service
mesh, the same team can focus on building products and services. They’re able to
build efficient distributed architectures that are ready to scale, observability
is consistent across multiple platforms, and it’s easier to enforce security and
compliance best practices.
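For readers unfamiliar with what mTLS adds, here is a minimal Python sketch of a TLS server that requires and verifies a client certificate, which is essentially the handshake a service mesh's sidecar proxies perform transparently for every pod-to-pod connection. The certificate file names and port are hypothetical placeholders; with a mesh in place you would not write this code yourself.

```python
import socket
import ssl

# Hypothetical certificate material issued by the mesh's certificate authority.
SERVER_CERT, SERVER_KEY = "server.crt", "server.key"
CLIENT_CA = "client-ca.crt"   # CA that signed the client certificates

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=SERVER_CERT, keyfile=SERVER_KEY)
context.load_verify_locations(cafile=CLIENT_CA)
context.verify_mode = ssl.CERT_REQUIRED   # requiring a client cert is what makes TLS mutual

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()       # handshake verifies both sides
        peer_cert = conn.getpeercert()           # client identity, checked against the CA
        print(f"authenticated connection from {addr}: {peer_cert.get('subject')}")
        conn.close()
```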
5 Must-Have Features of Backup as a Service For Hybrid Environments
New backup as a service offerings have redefined backup and recovery with the
simplicity and flexibility of the cloud experience. Cloud-native services can
eliminate the complexity of protecting your data and free you from the
day-to-day hassles of managing backup infrastructure. This approach to backup
lets you meet SLAs in hybrid cloud environments and simplifies your
infrastructure, driving significant value for your organization. Resilient data
protection is key to always-on availability for data and applications in today’s
changing hybrid cloud environments. While every organization has its own set of
requirements, I would advise you to focus on cost efficiency, simplicity,
performance, scalability, and future-readiness when architecting your strategy
and evaluating new technologies. The simplest choice: A backup as a service
solution that integrates all of these features in a pay-as-you-go consumption
model. Modern solutions are architected to support today’s challenging IT
environments.
Quote for the day:
"Leadership is like beauty; it's hard
to define, but you know it when you see it." -- Warren Bennis