Exploring Edge Computing as a Complement to the Cloud
The acceleration of how organizations use edge computing may lead to new
possibilities in cloud computing. “[The edge] is a complement to public cloud
and your private data center,” said Wen Temitim, CTO for StackPath. “It’s not
replacing public cloud.” He said the edge can be where network-sensitive
applications run, for example. That will require sizing up how those
applications might run differently based on traffic flows and how much of the
population needs to be served, Temitim said. “The biggest challenge is
rethinking that application architecture.” The first step will be to identify
components of the application that need to evolve to run at the edge, he said.
The definition of the edge can be relative, Temitim said. For example,
hyperscalers and organizations may have a data center-focused edge. Others may
see the edge as the collection of Tier 1 carrier hotels where different
companies interconnect. Price said Cisco sees different slices of what the
edge is; his primary portfolio item is a control center for cellular
enablement and management for more than 150 million devices. “The edge is
really the devices connecting to the cellular network and managing traffic
flows from those customers,” Price said.
Why a Crisis Calls for Bulletproofing Your Applications and Infrastructure
It’s not a time to be taking chances, because any time you have rapid changes
in demand there is a set of ripple effects on other applications: you may
get the noisy neighbor effect. In the era of virtualization, you may have had
100 percent of your virtual CPU allocated to your application. If, however,
the underlying physical CPU resources were also shared with an unmonitored
compute resource hog, your applications would be negatively impacted. The same
concepts hold true today, but now it’s writ large in the cloud. On-demand
cloud capacity is only as performant as the underlying guaranteed, actual
resources. Over-size and over-reserve those resources and you will waste
money; under-size them and you will hurt performance. Hence the importance
of monitoring everything, all the time. You’ve got extra demands, and it’s
impacting your shared infrastructures. The potential for noisy neighbors goes
up as you increase your number of apps. It becomes even more imperative to
monitor.
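As a concrete illustration of what that monitoring can look for, here is a minimal sketch (not from the article) that samples CPU steal time on a Linux guest; steal time is one of the clearer signals that a noisy neighbor on the shared physical host is eating into the vCPU your workload believes it owns. The sampling interval and the interpretation are illustrative assumptions.

```python
# Minimal sketch: estimate "noisy neighbor" pressure on a Linux VM by sampling
# CPU steal time from /proc/stat. Steal time is CPU the hypervisor gave to
# other tenants while this guest wanted to run.
import time

def read_cpu_times():
    """Return (total jiffies, steal jiffies) from the aggregate cpu line."""
    with open("/proc/stat") as f:
        fields = f.readline().split()
    # fields[0] is the literal "cpu"; the rest are jiffies counters:
    # user, nice, system, idle, iowait, irq, softirq, steal, ...
    values = [int(v) for v in fields[1:]]
    total = sum(values)
    steal = values[7] if len(values) > 7 else 0
    return total, steal

def steal_percent(interval=5.0):
    """Percentage of CPU time stolen by the hypervisor over the interval."""
    total1, steal1 = read_cpu_times()
    time.sleep(interval)
    total2, steal2 = read_cpu_times()
    delta_total = (total2 - total1) or 1  # avoid division by zero
    return 100.0 * (steal2 - steal1) / delta_total

if __name__ == "__main__":
    pct = steal_percent()
    # A sustained few percent of steal hints that a co-tenant is hogging the
    # physical CPU that your "100% allocated" vCPU actually depends on.
    print(f"CPU steal over the sample window: {pct:.2f}%")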
Data Monetization: New Value Streams You Need Right Now
Data and insights don’t have to be sold or exchanged directly. Sometimes
baking data or analytics into one of your existing products or services
instead can bolster its competitiveness and benefits and command a price premium. For
example, a forecasting tool that has access to external datasets such as open
data, syndicated data, social media streams and web content, and can
automatically generate leading indicators of business performance, will set
itself apart from stand-alone “dumb” forecasting tools that consider only a
company’s own transaction history. Another good example is IoT-enabled
automobile components that continually integrate data collected from other
automobiles and drivers, and which can tune their own performance and/or
prolong their own lifespan. Rather than merely infusing existing products or
services with data, go a step further and digitalize them altogether. For
example, Kaiser Permanente implemented secure messaging, image sharing, video
consultations and mobile apps, and now has more virtual patient visits than
in-person doctor visits in some geographies. In addition, it can connect
patients with specialists more quickly than ever, and 90 percent of physicians say
this digitalization has allowed them to provide higher-quality care for their
patients. Digitalizing solutions often requires the wholesale redesign of
products, services, processes and customer journeys to integrate and take
advantage of data.
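To make the forecasting example above a bit more concrete, here is a minimal, hypothetical sketch (not any vendor's product) of how an external dataset might be screened as a leading indicator for a company's own sales; the data, the lag range, and the function name are assumptions, and in practice most of the work is sourcing and cleaning the external feeds.

```python
# Illustrative sketch: score a candidate external series as a leading indicator
# of internal sales by finding the lag at which it correlates most strongly.
import pandas as pd

def leading_indicator_score(sales: pd.Series, external: pd.Series, max_lag: int = 8):
    """Return (best_lag, correlation): external shifted forward by best_lag
    periods correlates most strongly with sales."""
    best = (0, 0.0)
    for lag in range(1, max_lag + 1):
        corr = sales.corr(external.shift(lag))  # external leads sales by `lag`
        if abs(corr) > abs(best[1]):
            best = (lag, corr)
    return best

if __name__ == "__main__":
    # Hypothetical weekly data: internal transactions plus an external signal
    # (e.g. a search-trend or social-media index pulled from an open dataset).
    idx = pd.date_range("2020-01-05", periods=20, freq="W")
    sales = pd.Series(range(100, 120), index=idx, dtype=float)
    external = pd.Series(range(50, 70), index=idx, dtype=float).shift(-2).ffill()
    lag, corr = leading_indicator_score(sales, external)
    print(f"external signal leads sales by ~{lag} weeks (corr={corr:.2f})")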
Refactor vs. rewrite: Deciding what to do with problem software
When a programmer refactors software, the goal is to improve the internal
structure of the code without altering its external behavior. For example,
developers remove redundant code or break a particularly task-heavy
application component into several objects, each with a single responsibility.
The Extreme Programming development approach stresses the need to continuously
refactor code, a concept known as merciless refactoring. Theoretically,
programmers who refactor continuously make sections of code more attractive
with every change. Refactored code should be easily understood by other
people, so developers can turn code that scares people into code that people
can trust and feel comfortable updating on their own. ... Rather than read and
analyze complex, ugly code for refactoring, programmers can opt to just write
new code altogether. Unlike refactoring, code rewrites sound relatively
straightforward, since the programmers just start over and replace the
functionality. However, it isn't nearly that simple. To successfully rewrite
software, developers should form two teams: one that maintains the old
application and another that creates the new one.
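A tiny Python sketch of the single-responsibility split described at the start of this excerpt might look like the following; the ReportJob example and its collaborators are hypothetical, not taken from the article, but the point is that external behavior stays the same while each piece becomes small enough to read, test, and trust.

```python
# Hypothetical refactor sketch: a task-heavy ReportJob that fetched, formatted,
# and delivered a report is split into three objects with one job each.
from dataclasses import dataclass

@dataclass
class ReportFetcher:
    source: dict  # stands in for a database or API client

    def fetch(self, key: str) -> list:
        return self.source.get(key, [])

class ReportFormatter:
    def format(self, rows: list) -> str:
        return "\n".join(str(r) for r in rows)

class ReportSender:
    def send(self, body: str) -> None:
        # placeholder for SMTP or message-queue delivery
        print(f"sending report:\n{body}")

class ReportJob:
    """Thin coordinator; each collaborator can now be tested and changed alone."""
    def __init__(self, fetcher, formatter, sender):
        self.fetcher, self.formatter, self.sender = fetcher, formatter, sender

    def run(self, key: str) -> None:
        self.sender.send(self.formatter.format(self.fetcher.fetch(key)))

if __name__ == "__main__":
    job = ReportJob(ReportFetcher({"q2": [1, 2, 3]}), ReportFormatter(), ReportSender())
    job.run("q2")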
How modern IT operational services enable self-managing, self-healing, and self-optimizing
The classic issue is when there’s a problem, the finger-pointing or the
blame-game starts. Even triaging and isolating problems in these types of
environments can be a challenge, let alone having the expertise to fix the issue. Whether
it’s in the hardware, software layer, or on somebody else’s platform, it’s
difficult. Most vendors, of course, have different service level agreements
(SLAs), different role names, different processes, and different contractual
and pricing structures. So, the whole engagement model, even the vocabulary
they use, can be quite different; ourselves included, by the way. So, the more
vendors you have to work with, the more dimensions you have to manage. And
then, of course, COVID-19 hits and our customers working with multiple vendors
have to rely on how all those vendors are reacting to the current climate. And
they’re not all reacting in a consistent fashion.
You don’t need SRE like Google. You need your own SRE.
You are not replacing your current Ops team, your sysadmins, with software
engineers. You need your Ops team. They know how your custom-built
infrastructure and systems work. They know its idiosyncrasies. They know that when
Chicago opens a ticket to say they are offline again, it’s the network. Yes,
it’s always the network, but the sysadmins know who to ping at Equinix to get
it restored pronto. They know how the options trade desk system slows to a
crawl on Expiration Friday and that you just ignore those tickets from traders that
day. And even if you wanted to get rid of all the sysadmins, can you afford
to hire that many software engineers to replace them all? You can barely fill
all your open slots on the dev teams. What you need to do is complement your
Ops teams with software engineers who can understand what the teams do day in,
day out and which tasks are repetitive and typical, and then they can develop
tools for automated remediation. These software engineers should be embedded
in the ops team, not a separate team on the outside. Think Squads.
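As a rough illustration of the kind of automated-remediation tooling those embedded engineers might build, here is a minimal sketch; it assumes a systemd host, and the service name, the single-retry policy, and the open_ticket() helper are hypothetical placeholders rather than anything from the post.

```python
# Minimal sketch of an automated-remediation tool: probe a service the ops team
# restarts by hand today, restart it once if it is down, and open a ticket only
# when that fails.
import subprocess

SERVICE = "market-data-feed"  # illustrative systemd unit name

def is_active(service: str) -> bool:
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0

def restart(service: str) -> bool:
    result = subprocess.run(["systemctl", "restart", service])
    return result.returncode == 0

def open_ticket(summary: str) -> None:
    # Placeholder: in practice this would call the team's ticketing API.
    print(f"TICKET: {summary}")

def remediate(service: str) -> None:
    if is_active(service):
        return
    if restart(service) and is_active(service):
        print(f"{service} was down; restarted automatically.")
    else:
        open_ticket(f"{service} is down and automatic restart failed")

if __name__ == "__main__":
    remediate(SERVICE)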
Overcome Privacy Shaming During and After Pandemic
Crises elevate the demand for data while increasing the risk of data misuse.
Data and analytics leaders can overcome the inherent reluctance around data
sharing by developing trusted internal and external data sharing programs. One
of the ways to do this is to combat privacy shaming. The hype around third
parties encroaching on individual data protection is often played out
through the emotional tactic of privacy shaming to deter and even stop data
sharing. Often this leads to a one-size-fits-all mentality that data sharing
is bad. Data and analytics leaders must overcome this reluctance by aligning
privacy practices with business value resiliency, while maximizing societal
benefit. Champion a new culture around data sharing that illustrates how
applying privacy awareness to decisions involving personal data sharing
creates value. Change the emotional response of privacy shaming to one that is
grounded in a proper understanding of organizational data protection
requirements and policies. Armed with such knowledge, enterprise leaders will
be able to better communicate what privacy is and is not. More importantly,
they’ll be better able to convey the need to balance personal data rights with
the freedom to conduct business and be innovative to solve complex challenges
like coronavirus.
REST API Security Vulnerabilities
Authentication attacks are attempts by a hacker to exploit the
authentication process and gain unauthorized access. Bypass attack, brute-force
attack (for passwords), verify impersonation, and reflection attack are a few
types of authentication attacks. Basic authentication, authorization with
default keys, and authorization with credentials are a few protection measures
to safeguard our APIs. Cross-site scripting, also known as an XSS attack, is the
process of injecting malicious code as part of the input to web services,
usually through the browser, targeting a different end user. The malicious script, once
injected, can access any cookies, session tokens, or sensitive information
retained by the browser, or it can even rewrite the whole content of the
rendered pages. XSS is categorized into server-side XSS and client-side XSS.
Traditionally, XSS consists of three types: Reflected XSS, Stored XSS,
and DOM-based XSS. Cross-site request forgery, also known as CSRF, sea-surf, or
XSRF, is a vulnerability in which a web application exposes the possibility of an end
user being forced (by forged links, emails, or HTML pages) to execute unwanted actions on
a currently authenticated session.
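To ground a couple of these defenses, here is a minimal standard-library sketch (an illustration, not a complete protection scheme): HTML-escaping untrusted input before it is reflected back, which blunts reflected XSS, and constant-time validation of a per-session token, which rejects CSRF attempts that cannot know the token. The helper names and the session dict are assumptions for the example.

```python
# Minimal sketch: output escaping against reflected XSS, plus per-session CSRF
# token issuance and constant-time validation. Standard library only.
import html
import hmac
import secrets

def render_comment(untrusted: str) -> str:
    """Escape user input so an injected <script> tag renders as inert text."""
    return f"<p>{html.escape(untrusted)}</p>"

def issue_csrf_token(session: dict) -> str:
    """Store a random token in the server-side session and hand it to the client."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def is_valid_csrf(session: dict, submitted: str) -> bool:
    """Reject forged cross-site requests that cannot know the session's token."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

if __name__ == "__main__":
    print(render_comment('<script>alert("xss")</script>'))
    session = {}
    token = issue_csrf_token(session)
    print(is_valid_csrf(session, token))     # True: legitimate request
    print(is_valid_csrf(session, "forged"))  # False: CSRF attempt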
The World’s Best Banks: The Future Of Banking Is Digital After Coronavirus
“Banking has changed irrevocably as a result of the pandemic. The pivot to
digital has been supercharged,” says Jane Fraser, president of Citigroup and
CEO of its gigantic consumer bank. “We believe we have the model of the future
– a light branch footprint, seamless digital capabilities and a network of
partners that expand our reach to hundreds of millions of customers.” ... If
there’s any doubt that digital-first banks are the way forward, Velez offers a
surprising statistic: Since the pandemic began, Nubank has seen a surge in
customers aged sixty and over, the types of clients many bankers once believed
would never leave traditional branch networks. Over the past 30 days, for
instance, some 300 clients above the age of 90 have become Nubank customers.
Digital banks also rated well in the United States. Online-only Discover
and Capital One ranked #23 and #30, while neobank Chime ranked #36. All three
beat out mega-lenders JPMorgan Chase (#36) and Citigroup (#71). The other two
big-four lenders, Bank of America and Wells Fargo, didn’t make the top 75.
Facilitating Threat Modelling Remotely
To aid in prioritisation of mitigations, Gumbley suggested dot-voting on the
threats which have been identified. He suggested that this would "yield good
risk decisions for low investment, reflecting the diverse perspectives in the
group." Handova wrote that there is "no one-size-fits-all DevSecOps process"
across enterprises and development teams. For every risk, he wrote that teams
need to "provide the appropriate level of security assurance and security
coverage." Handova wrote that this decision may determine whether any resulting
investment in security testing will determine the response. That is, whether it
is "automated within the DevOps workflow, performed out of band or some
combination of the two." Handova cautioned the need to minimise "friction for
developers," in order to avoid bypassing security in favour of "expediting
coding activities." He wrote of the value of catching security issues "earlier
and more easily while developers are still thinking about the code." Handova’s
focus was on using test automation to mitigate "the risk of developers not
remembering the code context at a later date."
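As a purely illustrative sketch of the dot-voting step Gumbley described, run remotely, the tally below ranks threats by how many dots participants spend on them; the per-person budget and the threat names are assumptions, not from the article.

```python
# Illustrative sketch: tally remote dot-votes on identified threats so the group
# gets a cheap, shared prioritisation of which mitigations to fund first.
from collections import Counter

DOTS_PER_PERSON = 3  # assumed budget per participant

def tally(votes_by_person: dict) -> list:
    """Count each person's dots (capped at the budget) and rank the threats."""
    counts = Counter()
    for person, dots in votes_by_person.items():
        counts.update(dots[:DOTS_PER_PERSON])  # ignore dots spent over budget
    return counts.most_common()

if __name__ == "__main__":
    votes = {
        "alice": ["token leakage", "token leakage", "SSRF via webhook"],
        "bob":   ["SSRF via webhook", "weak admin MFA", "token leakage"],
        "cara":  ["weak admin MFA", "token leakage", "SSRF via webhook"],
    }
    for threat, dots in tally(votes):
        print(f"{dots} dots  {threat}")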