Verification Scans or Automated Security Requirements: Which Comes First?
Testing for weaknesses after code is written is reactive. A better approach is
to anticipate weaknesses before code is written and assign mitigation controls
as part of the development process. This is accomplished through security
requirements. Just as functional requirements provide teams with information on
the features and performance needed in a project, security requirements provide
teams with required controls to mitigate risk from potential weaknesses before
coding begins. Most of these weaknesses are predictable based on the regulatory
requirements in scope for the application along with the language, framework,
and deployment environment. By translating these into mitigation controls (actionable tasks implemented by product development, security, and operations teams during the normal development process), teams can build more secure software and avoid many of the “find and fix” delays they currently endure. With complete security requirements and appropriate mitigation controls folded into the overall project requirements, security is built in as the application is developed.
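As a minimal illustration of the idea, the sketch below maps a few predictable weaknesses to mitigation-control tasks that could sit alongside functional requirements. The weakness names, control descriptions, and the SecurityRequirement structure are hypothetical, not taken from the article.

```python
# Hypothetical sketch: security requirements expressed as mitigation-control
# tasks derived from the application's stack and regulatory scope.
from dataclasses import dataclass


@dataclass
class SecurityRequirement:
    weakness: str   # predicted weakness (e.g., a CWE category)
    control: str    # actionable mitigation task for the dev/sec/ops backlog
    driver: str     # what makes it predictable: regulation, language, framework, ...


REQUIREMENTS = [
    SecurityRequirement(
        weakness="SQL injection",
        control="Use parameterized queries via the ORM; add a lint rule against raw SQL",
        driver="web framework + relational database",
    ),
    SecurityRequirement(
        weakness="Sensitive data exposure",
        control="Encrypt PII at rest and in transit; log access to PII fields",
        driver="regulatory scope (e.g., GDPR)",
    ),
]

for req in REQUIREMENTS:
    print(f"[{req.driver}] {req.weakness}: {req.control}")
```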
Software Engineers vs. Full-Stack Developers: 4 Key Differences
Both full-stack developers and software engineers must have a detailed knowledge
of coding languages. But full-stack developers tend to require a broader knowledge of more advanced languages than a software engineer does. This is because of the range of areas they work across, from front-end development and the core application to back-end development. A full-stack developer’s responsibilities
include designing user interfaces or managing how an app functions, among other
e-commerce development essentials. But they’ll also work on back-end support for
the app, as well as manage databases and security. With such a varied list of
responsibilities, full-stack development often means overseeing a portfolio of
technology, reacting to needs with agility, and switching from one area to
another as and when required. A software engineer has a narrower, although no less skilled, remit. As well as their core software development work, they test for and resolve programming errors, diving back into the code to debug and often using QA automation to speed up testing.
Low-code speeds up development time, but what about testing time?
Test debt is exactly what it sounds like. Just as an unpaid credit card bill compounds, so do the problems that go unfound when you cannot test your applications. Eliminating test debt requires first establishing a sound test automation approach. Using this, an organization can create a core regression test suite for functional regression and an end-to-end test automation suite for end-to-end business process regression testing. Because these tests are automated, they can be run as often as code is modified, and they can be run concurrently, reducing the time it takes to execute them. According to Rao, using core functional regression tests and end-to-end regression tests is basic table stakes in an organization’s journey to higher quality. Rao explained that when getting started with test automation, it can
seem like a daunting task, and a massive mountain that needs climbing. “You
cannot climb it in one shot, you have to get to the base camp. And the first
base camp should be like a core regression test suite, that can be achieved in a
couple of weeks, because that gives them a significant relief,” he said.
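As a rough sketch of what a couple of core regression tests might look like, the snippet below assumes pytest, with the pytest-xdist plugin for concurrent runs (`pytest -n auto`). The `checkout_service` module and its functions are hypothetical stand-ins for application code, not anything from the article.

```python
# Sketch of two core regression tests against a hypothetical checkout_service.
# With pytest-xdist installed, `pytest -n auto` runs the suite concurrently.
import pytest

from checkout_service import apply_discount, calculate_total  # hypothetical app code


@pytest.mark.regression
def test_total_includes_tax():
    # Core business rule: totals must include tax.
    assert calculate_total(subtotal=100.00, tax_rate=0.10) == pytest.approx(110.00)


@pytest.mark.regression
def test_discount_never_produces_negative_price():
    # Core business rule: a discount can never push the price below zero.
    assert apply_discount(price=5.00, discount=10.00) == pytest.approx(0.00)
```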
Scaling and Automating Microservice Testing at Lyft
Lyft built its service mesh using Envoy, ensuring that all traffic flows through
Envoy sidecars. When a service is deployed, it is registered in the service
mesh, becomes discoverable, and starts serving requests from the other services
in the mesh. An offloaded deployment contains metadata that stops the control
plane from making it discoverable. Engineers create offloaded deployments
directly from their pull requests by invoking a specialised GitHub bot. Using
Lyft's proxy application, they can add protobuf-encoded metadata to requests as
OpenTracing baggage. This metadata is propagated across all services throughout
the request's lifetime regardless of the service implementation language,
request protocol, or queues in between. Envoy's HTTP filter was modified to support staging overrides and route the request to the offloaded instance based on the request's override metadata. Engineers also used Onebox environments to run integration tests via CI. As the number of microservices increased, so did the number of tests and their running time, and the approach's efficacy diminished for the same reasons that led to Onebox's abandonment.
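A minimal sketch of the propagation idea, using the OpenTracing Python API's baggage mechanism: the `staging-overrides` baggage key and the payload format below are hypothetical, and Lyft's actual implementation encodes the metadata as protobuf rather than a plain string.

```python
# Sketch: attach a staging-override hint as OpenTracing baggage so it is
# propagated across services for the lifetime of the request.
# The baggage key and payload format here are illustrative, not Lyft's schema.
import base64

import opentracing


def add_staging_override(span: opentracing.Span, service: str, offloaded_addr: str) -> None:
    # Lyft encodes this metadata as protobuf; a simple string stands in here.
    payload = f"{service}={offloaded_addr}".encode()
    span.set_baggage_item("staging-overrides", base64.b64encode(payload).decode())


# Usage with any OpenTracing-compatible tracer:
# add_staging_override(tracer.active_span, "rides-service", "10.0.0.42:8080")
```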
How decentralised finance is 'DeFi-ying' the norm
The DeFi sector has, to date, been based on the distributed ledger principle
of “trustlessness”, whereby users replace trust in an economic relationship
with an algorithm. DeFi is oversaturated with trustless applications, says
Sidney Powell, CEO and co-founder of Maple Finance. This includes
over-collateralised lending, whereby borrowers put up assets worth two or
three times the loan value, as well as decentralised exchanges and yield
aggregators, which put your money into a smart contract that searches for the
best yield from other smart contracts. “I think the opportunities are in areas
where there is a bit of human communication in transacting or using the
protocol,” Powell says. Maple’s model, which takes no collateral when it matches lenders with institutional borrowers, instead requires applications to be vetted and underwritten by experienced humans rather than code. From that
point on, however, it is based on transparency – lenders monitor who is
borrowing, the current lending strategy and pool performance in real
time.
Google tests its Privacy Sandbox and unveils new user controls
The Google Privacy Sandbox initiative is advancing in tandem with the growth
of the global data privacy software market, which researchers valued at $1.68
billion in 2021, and anticipate will reach $25.85 billion by 2029 as more
organizations attempt to get to grips with international data protection laws.
Google isn’t the only big tech provider developing new solutions to manage the complexity of data protection regulations. Meta’s engineers
recently shared some of the techniques the organization uses to minimize the
amount of data it collects on customers, including its Anonymous Credentials
Service (ACS), which enables the organization to authenticate users in a
de-identified manner without processing any personally identifiable
information. Likewise, just a year ago, Apple released the App Tracking Transparency (ATT) framework as part of iOS 14, which forces Apple developers to ask users to opt in to cross-app tracking. The Google Privacy Sandbox initiative’s approach stands out because it gives users more transparency into the type of information collected about them, while giving them more granular
controls to remove interest-based data at will.
Upcoming Data Storage Technologies to Keep an Eye On
Technology, deployment model, and cross-industry issues are all contributing
to the evolution of data storage, according to Tong Zhang, a professor at the
Rensselaer Polytechnic Institute, as well as co-founder and chief scientist
for ScaleFlux. An uptick in new technologies and further acceleration in data
generation growth are also moving storage technologies forward. Deployment
models for compute and storage must evolve as edge, near-edge, and IoT devices change the IT infrastructure landscape, he says. “Cross-industry
issues, such as data security and environmental impact / sustainability, are
also major factors driving data storage changes.” Four distinct factors are
currently driving the evolution in storage technology: cost, capacity,
interface speeds, and density, observes Allan Buxton, director of forensics at
data recovery firm Secure Data Recovery Services. Hard disk manufacturers are
competing with solid-state drive (SSD) makers by decreasing access and seek
times while offering higher storage capacities at a lower cost, he
explains.
JavaScript security: The importance of prioritizing the client side
In terms of the dangers, if an organization becomes the victim of a client-side attack, it may not know immediately, particularly if it isn’t using an automated monitoring and inspection security solution. Sometimes it is an end-user victim (like a customer) who finds out first, when their
credit card or PII has been compromised. The impact of these types of
client-side attacks can be severe. If the organization has compliance or
regulatory concerns, then investigations and significant fines could result.
Other impacts include costs associated with attack remediation, operational
delays, system infiltration, and the theft of sensitive credentials or
customer data. There are long-term consequences, as well, such as reputation
damage and lost customers. ... Compliance is also a major concern.
Regulatory mandates like GDPR and HIPAA, as well as regulations specific to
the financial sector, mean that governments are putting a lot of pressure on
organizations to keep sensitive user information safe. Failing to do so can
mean investigations and substantial fines.
Lock-In in the Age of Cloud and Open Source
The cloud can be thought of in three layers: Infrastructure as a Service
(IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). While
IaaS can be thought of as renting hardware in the cloud, PaaS and SaaS need to
be thought of in a completely different way (Hardware 1.0 vs. Hardware 2.0).
Migrating between services for IaaS is relatively straightforward, and a buyer
is fairly well protected from vendor lock-in. Services higher up the stack,
not so much. It remains to be seen if the cloud providers will actually win in
the software world, but they are definitely climbing up the stack, just like
the original hardware vendors did, because they want to provide stickier
solutions to their customers. Let’s explore the difference between these
lower-level and higher-level services from a vendor lock-in perspective. With
what I call Hardware 2.0, servers, network and storage are rented in the cloud
and provisioned through APIs. The switching cost of migrating virtual machines from one cloud provider to another largely equates to learning a new API for provisioning.
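For a concrete sense of “Hardware 2.0”, here is a minimal sketch of provisioning a server through an API, using AWS's boto3 SDK as one example; the AMI ID and instance type are placeholders, and another IaaS provider would expose an equivalent call through its own SDK.

```python
# Minimal "Hardware 2.0" sketch: renting a server through an API call.
# The image ID and instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])

# Switching IaaS providers largely means swapping this call for the other
# provider's SDK and parameter names; the provisioning model stays the same.
```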
What is autonomous AI? A guide for enterprises
Autonomous artificial intelligence is defined as routines designed to allow
robots, cars, planes and other devices to execute extended sequences of
maneuvers without guidance from humans. The revolution in artificial
intelligence (AI) has reached a stage when current solutions can reliably
complete many simple, coordinated tasks. Now the goal is to extend this
capability by developing algorithms that can plan ahead and build a multistep
strategy for accomplishing more. Thinking strategically requires a different approach from the one behind many successful, well-known AI applications. Machine vision
or speech recognition algorithms, for instance, focus on a particular moment
in time and have access to all of the data that they might need. Many
applications for machine learning work with training sets that cover all
possible outcomes. ... Many autonomous systems are able to work quite well by
simplifying the environment and limiting the options. For example, autonomous
shuttle trains have operated for years in amusement parks, airports and other
industrial settings.
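To make the contrast with single-moment perception concrete, here is a toy sketch of multistep planning: a breadth-first search that builds a sequence of moves toward a goal in a small grid world. It is purely illustrative and not any particular autonomous system's planner.

```python
# Toy sketch of "planning ahead": breadth-first search for a multistep plan
# in a tiny grid world with a few blocked cells.
from collections import deque


def plan(start, goal, blocked, size=5):
    """Return a list of moves from start to goal, avoiding blocked cells."""
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for name, (dx, dy) in moves.items():
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None  # no multistep plan exists


print(plan(start=(0, 0), goal=(3, 2), blocked={(1, 0), (1, 1)}))
```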
Quote for the day:
"Leadership is about change... The best way to get people to venture into unknown terrain is to make it desirable by taking them there in their imaginations." -- Noel Tichy