Effective Software Testing – A Developer’s Guide
When there are decisions depending on multiple conditions (i.e. complex
if-statements), it is possible to get decent bug detection without having to
test all possible combinations of conditions. Modified condition/decision
coverage (MC/DC) exercises each condition so that it, independently of all the
other conditions, affects the outcome of the entire decision. In other words,
every possible condition of each parameter must influence the outcome at least
once. The author does a good job of showing how this is done with an example. So
given that you can check the code coverage, you must decide how rigorous you
want to be when covering decision points, and create test cases for that. The
concept of boundary points is useful here. For a loop, it is reasonable to at
least test when it executes zero, one and many times. It can seem like it should
be enough to just do structural testing, and not bother with specification-based
testing, since structural testing makes sure all the code is covered. However,
this is not true. Analyzing the requirements can lead to more test cases than
simply checking coverage. For example, if results are added to a list, a test
case adding one element will cover all the code, yet specification-based cases
for an empty list or a list with many elements can still expose bugs that
coverage alone misses.
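As a concrete illustration (my own hypothetical example, not the book's), a decision with three conditions can reach MC/DC with four tests instead of all eight combinations:

```python
# Hypothetical decision with three conditions: a and (b or c).
# MC/DC requires each condition to flip the overall outcome on its own,
# which can be done here with 4 tests instead of all 2^3 = 8 combinations.

def free_delivery(total: float, is_member: bool, has_coupon: bool) -> bool:
    return total > 100 and (is_member or has_coupon)

# Each pair below differs in exactly one condition, and the outcome flips.
assert free_delivery(150, True, False) is True    # baseline: a=T, b=T, c=F
assert free_delivery(50, True, False) is False    # flip a (total): outcome flips
assert free_delivery(150, False, False) is False  # flip b (member): outcome flips
assert free_delivery(150, False, True) is True    # flip c (coupon) vs. previous test
```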
Inconsistent thoughts on database consistency
While linearizability is about a single piece of data, serializability is about
multiple pieces of data. More specifically, serializability is about how to
treat concurrent transactions on the same underlying pieces of data. The
“safest” way to handle this is to line up transactions in the order they
arrived and execute them serially, making sure that one finishes before the next
one starts. In reality, this is quite slow, so we often relax this by executing
multiple transactions concurrently. However, there are different levels of
safety around this concurrent execution, as we’ll discuss below. Consistency
models are super interesting, and the Jepsen breakdown is enlightening. If I had
to quibble, it’s that I still don’t quite understand the interplay between the
two poles of consistency models. Can I choose a lower level of
linearizability along with the highest level of serializability? Or does
the existence of any level lower than linearizable mean that I’m out of the
serializability game altogether? If you understand this, hit me up! Or better
yet, write up a better explanation than I ever could :). If you do, let me know
so I can link it here.
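In the meantime, to make the "line them up and run them one at a time" idea concrete, here is a toy sketch in plain Python (not a real database engine): a global lock forces each transaction to finish before the next one starts, which is the safe-but-slow serial execution described above. Real systems relax this with weaker isolation levels.

```python
import threading

# Toy "database" plus a global lock that serializes transactions.
balance = {"alice": 100, "bob": 0}
serial_lock = threading.Lock()

def transfer(src: str, dst: str, amount: int) -> None:
    # The lock makes each transfer run start-to-finish before the next one,
    # i.e. strictly serial execution: the "safest" but slowest option.
    with serial_lock:
        if balance[src] >= amount:
            balance[src] -= amount
            balance[dst] += amount

threads = [threading.Thread(target=transfer, args=("alice", "bob", 60)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Only one of the two concurrent transfers can succeed; without the lock,
# both could read the same starting balance and overdraw the account.
print(balance)  # {'alice': 40, 'bob': 60}
```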
AI and How It’s Helping Banks to Lower Costs
Using AI helps banks lower the cost of predicting future trends. Instead of
hiring financial analysts to analyze data, banks can use AI to organize and
present the data they need. They can get real-time data to analyze behaviors,
predict future trends, and understand outcomes. With this, banks can get more
data that, in turn, helps them make better predictions. ... Another advantage
of using AI in the banking industry is that it reduces human errors. By
reducing errors, banks prevent loss of revenue caused by these errors.
Moreover, human errors can lead to financial data breaches. When this happens,
critical data may get exposed to criminals, who can use the stolen data to
carry out fraudulent activities under clients’ identities. Especially with a
high volume of work, employees cannot avoid making errors. With the help of AI,
banks can reduce a variety of errors. ... AI helps banks save money by
detecting fraudulent payments. Without AI, banks may lose millions because of
criminal activities. But thanks to AI, banks can prevent such losses as the
technology can analyze more than one channel of data to detect fraud.
Is NoOps the End of DevOps?
NoOps is not a one-size-fits-all solution. You know that it’s limited to apps
that fit into existing serverless and PaaS solutions. Since some enterprises
still run on monolithic legacy apps (requiring total rewrites or massive
updates to work in a PaaS environment), you’d still need someone to take care
of operations even if there’s a single legacy system left behind. In this
sense, NoOps is still a long way from handling long-running apps that run
specialized processes or production environments with demanding applications.
Conversely, with DevOps, operations work happens before code goes to
production. Releases include monitoring,
testing, bug fixes, security, and policy checks on every commit, and so on.
You must have everyone on the team (including key stakeholders) involved from
the beginning to enable fast feedback and ensure automated controls and tasks
are effective and correct. Continuous learning and improvement (a pillar of
DevOps teams) shouldn’t only happen when things go wrong; instead, members
must work together and collaboratively to problem-solve and improve systems
and processes.
How IT Can Deliver on the Promise of Cloud
While many newcomers to the cloud assume that hyperscalers will handle most of
the security, the truth is they don’t. Public cloud providers such as AWS,
Google, and Microsoft Azure publish shared responsibility models that push
security of the data, platform, applications, operating system, network and
firewall configuration, and server-side encryption, to the customer. That’s a
lot you need to oversee with high levels of risk and exposure should things go
wrong. Have you set up ransomware protection? Monitored your network
environment for ongoing threats? Arranged for security between your workloads
and your client environment? Secured sets of connections for remote client
access or remote desktop environments? Maintained audit control of open source
applications running in your cloud-native or containerized workloads? These
are just some of the security challenges IT faces. Security of the cloud
itself – the infrastructure and storage – falls to the service providers. But
your IT staff must handle just about everything else.
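As one small illustration of what lands on the customer's side of that model, default server-side encryption on an object storage bucket is something you configure yourself. A minimal sketch, assuming boto3, valid credentials, and a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-analytics-data"  # hypothetical bucket name

# Server-side encryption sits on the customer's side of the shared
# responsibility model: turn on default AES-256 encryption for the bucket.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Verify that the configuration actually took effect.
config = s3.get_bucket_encryption(Bucket=bucket)
print(config["ServerSideEncryptionConfiguration"]["Rules"])
```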
Distributed Caching on Cloud
Caching is a technique that keeps a copy of data outside the main storage, in
high-speed memory, to improve performance. In a microservices
environment, all apps are deployed with their multiple instances across various
servers/containers on the hybrid cloud. A single caching source is needed in a
multicluster Kubernetes environment in the cloud to persist data centrally and
replicate it on its own caching cluster. It will serve as a single point of
storage to cache data in a distributed environment. ... Distributed caching is
now a de-facto requirement for distributed microservices apps in a distributed
deployment environment on hybrid cloud. It addresses concerns in important use
cases like maintaining user sessions when cookies are disabled on the web
browser, improving API query read performance, avoiding operational cost and
database hits for the same type of requests, managing secret tokens for
authentication and authorization, etc. A distributed cache syncs data across
hybrid clouds automatically, without any manual operation, and always serves
the latest data.
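One common way to use that single shared cache is the cache-aside pattern. A minimal sketch, assuming the redis-py client, a hypothetical cache host, and a hypothetical query_orders_from_db helper standing in for the real database call:

```python
import json
import redis

# Shared cache cluster reachable by every service instance (hypothetical host).
cache = redis.Redis(host="cache.internal", port=6379, decode_responses=True)

def query_orders_from_db(customer_id: str) -> list:
    # Placeholder for the real (and expensive) database query.
    return [{"order_id": 1, "customer_id": customer_id}]

def get_orders(customer_id: str) -> list:
    key = f"orders:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: skip the database
    orders = query_orders_from_db(customer_id)
    cache.setex(key, 300, json.dumps(orders))  # cache miss: store for 5 minutes
    return orders
```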
Bridging The Gap Between Open Source Database & Database Business
It is relatively easy to get a group of people together to create a new database
management system or new data store. We know this because over the past five
decades of computing, the rate of proliferation of tools to provide structure to
data has increased, and seemingly at an increasing rate at that, thanks in
no small part to the innovation by the hyperscalers and cloud builders as well
as academics who just plain like mucking around in the guts of a database to
prove a point. But it is another thing entirely to take an open source database
or data store project and turn it into a business that can provide
enterprise-grade fit and finish and support a much wider variety of use cases
and customer types and sizes. This is hard work, and it takes a lot of people,
focus, money – and luck. This is the task that Dipti Borkar, Steven Mih, and
David Simmen took on when they launched Ahana two years ago to commercialize the
PrestoDB variant of the Presto distributed SQL engine created by Facebook, and
not coincidentally, it is a similar task that the original creators of Presto
have taken on with the PrestoSQL, now called Trino, variant of Presto that is
commercialized by their company, called Starburst.
Data gravity: What is it and how to manage it
Examples of data gravity include applications and datasets moving to be closer
to a central data store, which could be on-premises or co-located. This makes
the best use of existing bandwidth and reduces latency. But it also begins to limit
flexibility, and can make it harder to scale to deal with new datasets or adopt
new applications. Data gravity occurs in the cloud, too. As cloud data stores
increase in size, analytics and other applications move towards them. This takes
advantage of the cloud’s ability to scale quickly, and minimises performance
problems. But it perpetuates the data gravity issue. Cloud storage egress fees
are often high and the more data an organisation stores, the more expensive it
is to move it, to the point where it can be uneconomical to move between
platforms. McCrory refers to this as “artificial” data gravity, caused by cloud
services’ financial models, rather than by technology. Forrester points out that
new sources and applications, including machine learning/artificial intelligence
(AI), edge devices or the internet of things (IoT), risk creating their own data
gravity, especially if organisations fail to plan for data growth.
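As a rough, back-of-the-envelope illustration of that "artificial" gravity, here is a sketch using an assumed flat egress price of $0.09 per GB (illustrative only; real pricing is tiered and vendor-specific):

```python
# Illustrative numbers only; real egress pricing is tiered and vendor-specific.
stored_tb = 500              # data accumulated in one cloud region
egress_price_per_gb = 0.09   # assumed flat rate in USD

migration_cost = stored_tb * 1_000 * egress_price_per_gb
print(f"Moving {stored_tb} TB out would cost about ${migration_cost:,.0f}")
# Moving 500 TB out would cost about $45,000
```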
CIOs Must Streamline IT to Focus on Agility
“Streamlining IT for agility is critical to business, and there’s not only
external pressure to do so, but also internal pressure,” says Stanley Huang,
co-founder and CTO at Moxo. “This is because streamlining IT plays a strategic
role in the overall business operations from C-level executives to every
employee's daily efforts.” He says that the streamlining of business processes
is the best and most efficient way to reflect business status and driving power
for each department's planning. From an external standpoint, there is pressure
to streamline IT because it also impacts the customer experience. “A connected
and fully aligned cross-team interface is essential to serve the customer and
make a consistent end user experience,” he adds. For business opportunities
pertaining to task allocation and tracking, streamlining IT can help align
internal departments into one overall business picture and enable employees to
perform their jobs at a higher level. “When the IT system owns the source of
data for business opportunities and every team’s involvement, cross team
alignment can be streamlined and made without back-and-forth communications,”
Huang says.
Open Source Software Security Begins to Mature
Despite the importance of identifying vulnerabilities in dependencies, most
security-mature companies — those with OSS security policies — rely on industry
vulnerability advisories (60%), automated monitoring of packages for bugs (60%),
and notifications from package maintainers (49%), according to the survey.
Automated monitoring represents the most significant gap between security-mature
firms and those firms without a policy, with only 38% of companies that do not
have a policy using some sort of automated monitoring, compared with the 60% of
mature firms. Companies should add an OSS security policy if they don't have
one, as a way to harden their development security, says Snyk's Jarvis. Even a
lightweight policy is a good start, he says. "There is a correlation between
having a policy and the sentiment of stating that development is somewhat
secure," he says. "We think having a policy in place is a reasonable starting
point for security maturity, as it indicates the organization is aware of the
potential issues and has started that journey."
Quote for the day:
"No great manager or leader ever fell
from heaven, its learned not inherited." -- Tom Northup