Data is Risky Business: The Data Crisis Unmasked
Allegations have emerged of different jurisdictions (such as the State of
Georgia in the US) adjusting graphs to create a visual of a downward trend by
putting time series data out of sequence. Florida removed the data scientist
who was running their COVID-19 data reporting from her role, despite the
reporting having been praised for its transparency. In addition, we have the
ethical issues of data-driven responses to managing the pandemic, from contact
tracing applications to thermal scanning. Deploying technologies such as these
requires balancing privacy and the public interest, but it also demands
objective rigour in assessing whether the technology will actually work for
the purpose for which it is proposed. For example, thermal cameras sound
like a great idea. Anyone registering a temperature over 38 degrees Celsius can
be denied entry to the building to keep everyone safe. Only, what do you do
about false positives and negatives with those technologies? Are there other
things that might cause someone to run a high skin temperature (for that is
what they measure) from time to time? Any hormonal conditions? Do any of your
staff cycle or run to the office? Is there anything that could be done by a
malicious actor (or an overly diligent staff member) that could fake out the
scanner by suppressing their temperature, like taking paracetamol?
Fighting Defect Clusters in Software Testing
Using metrics like defect density charts or module-wise defect counts, we can
examine the history of defects that have been found and look for areas,
modules or features with higher defect density. This is where we should begin
our search for defect clusters. Spending more time testing these areas may
lead us to more defects or more complex use cases to try out. ... Defect
clustering follows the Pareto rule that 80% of the defects are caused by 20%
of the modules in the software. It’s imperative for a tester to know which 20%
of modules have the most defects so that the maximum amount of effort can be
spent there. That way, even if you don’t have a lot of time to test, hopefully
you can still find the majority of defects. Once they know the defect
cluster areas, testers can focus on containing the defects in their product in
a number of ways. By knowing which features or modules contain the most
defects, testers can spend more effort finding better ways to test them. They can
include more unit tests and integration tests for that module. Testers can
also write more in-depth test scenarios with use cases from the customers on
how the feature is best used in production. Focusing on test data and creating
more exhaustive combinatorial tests for variables can also lead to finding
more computational or algorithmic defects sooner.
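The density-based triage described above can be sketched in a few lines. This is a minimal illustration, not from the article: the module names and counts are hypothetical, and `top_defect_clusters` simply ranks modules by defects per KLOC so testers know where the "20%" likely lives.

```rust
use std::collections::HashMap;

// Rank modules by defect density (defects per KLOC), highest first.
// Modules at the top of the list are the likely defect clusters.
fn top_defect_clusters(
    defects: &HashMap<&str, u32>,
    lines_of_code: &HashMap<&str, u32>,
) -> Vec<(String, f64)> {
    let mut densities: Vec<(String, f64)> = defects
        .iter()
        .map(|(module, &count)| {
            // Fall back to 1 KLOC if a module's size is unknown.
            let kloc = lines_of_code.get(module).copied().unwrap_or(1000) as f64 / 1000.0;
            (module.to_string(), count as f64 / kloc)
        })
        .collect();
    // Sort descending so the densest modules come first.
    densities.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    densities
}

fn main() {
    // Hypothetical defect history and module sizes.
    let defects = HashMap::from([("auth", 40_u32), ("ui", 12), ("billing", 3)]);
    let loc = HashMap::from([("auth", 2_000_u32), ("ui", 6_000), ("billing", 3_000)]);
    for (module, density) in top_defect_clusters(&defects, &loc) {
        println!("{module}: {density:.1} defects/KLOC");
    }
}
```

In practice the input would come from the defect tracker's module-wise counts rather than hard-coded maps.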
How to Design For Panic Resilience in Rust
It is always better to exit with an error code than to panic. In the best
situation, no software you write will ever panic. A panic is a controlled
crash, and must be avoided to build reliable software. A crash is not ever
“appropriate” behavior, but it’s better than allowing your system to cause
physical damage. If at any point you believe the software could
cause something deadly, expensive, or destructive to happen, it’s probably
best to shut it down. If you consider your software a car driving at about 60
miles per hour, a panic is like hitting a brick wall. A panic unwinds the
call stack, hopping out of every function call and returning from program
execution, destroying objects as it goes. It is considered neither a safe nor
a clean shutdown. Avoid panics. The best way to end a program’s execution is to
allow it to run until the last closing brace. Somehow, some way, program for
that behavior. It allows all objects to safely destroy themselves. See the
Drop trait. ... Creating custom error types is valuable. When you use a bare
enum as an error type, the data footprint can be tiny.
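The pattern the excerpt describes — exit with an error code rather than panic, let values drop on a clean exit, use a bare enum as the error type — can be sketched as follows. `AppError`, `Connection`, and `run` are illustrative names, not from the article.

```rust
use std::fmt;
use std::process::ExitCode;

// A bare enum error type: with only unit variants, its data footprint
// can be as small as a single byte.
#[derive(Debug)]
enum AppError {
    ConfigMissing,
    SensorOffline,
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::ConfigMissing => write!(f, "configuration file missing"),
            AppError::SensorOffline => write!(f, "sensor is offline"),
        }
    }
}

// A resource guard: its Drop impl runs during a normal return,
// so cleanup happens even on the error path.
struct Connection;
impl Drop for Connection {
    fn drop(&mut self) {
        eprintln!("connection closed cleanly");
    }
}

fn run() -> Result<(), AppError> {
    let _conn = Connection;
    // Instead of panic!(), propagate a typed error up to main.
    Err(AppError::SensorOffline)
} // _conn is dropped here, even though we return Err

fn main() -> ExitCode {
    match run() {
        Ok(()) => ExitCode::SUCCESS,
        Err(e) => {
            eprintln!("error: {e}");
            ExitCode::FAILURE // exit with an error code, not a panic
        }
    }
}
```

Because `run` returns through every closing brace rather than unwinding, every object (here `Connection`) destroys itself safely via `Drop`.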
Good Business Processes Are Key to Resilience During Disruption
With current and future operational challenges bogging down company
leadership, empowering all employees with better process management skills and
resources ensures your whole organization, from frontline employees to the
C-suite, has better visibility and control over responsibilities. Process
mapping and oversight needs to be a priority for all businesses today, and
preparing teams with tools and resources must come from the top down.
Companies must act with a renewed sense of urgency to develop dynamic
processes and create stable growth environments during uncertain times. To
begin sharing operational knowledge and ownership across your organization,
leaders must first identify, define and map key processes across their
organization. Companies are no longer in a position to avoid process
understanding. Work-related implications of COVID-19 emphasize our need for a
renewed focus on process, as inefficient communications and workflows can make
or break delivery of your product or service. The teams that will be most
successful in the next year are prepared to execute on innovative ideas and
adapt willingly.
Shift Your Cybersecurity Mindset to Maintain Cyber Resilience
As more companies expand their remote workforce, the number of endpoints with
access to corporate resources is proliferating. Hackers are seizing the
opportunities this presents: Phishing email click rates have risen from around
5 percent to over 40 percent in recent months, according to Forbes. With a
strong cybersecurity mindset and some strategic planning, your company can
position itself to survive these new working conditions and build up even more
cyber resilience as you adapt. Because cybersecurity professionals are facing
formidable adversaries, understanding how hackers think can go a long way in
mitigating the threat they pose. Security expert Frank Abagnale is one of the
foremost experts on the thought processes of threat actors, and he was kind
enough to lend his expertise to this piece. Since the number of successful
phishing attacks has skyrocketed, I asked him if this is more a function of
hackers stepping up their game, or employees not possessing the right
cybersecurity mindset to pay attention. “It’s both,” he explained. “Any crisis
is a perfect backdrop to phishing attacks.”
Software Testing Survival Tips
Software test automation is the practice of automating test procedures of the
application under test (AUT). There are two main testing aspects: functional
and performance. Functional testing is oriented toward validating AUT errors
triggered by its functionality -- primarily front-end (UI and API). As an
example, clicking the login button in the AUT should navigate to the welcome
page. Performance testing involves validating overall performance and
scalability of the system under load (SUL) by measuring end-to-end transaction
time for data passed from the SUL front end to the back end and the
sustainability of the SUL per number of users. Going back to our original
example, a login transaction can take one second for 1,000 users. ... Many
companies have been doing TA for the past few years, but with fewer TA
engineers, they can't catch up with the regression testing schedule. If you
are in the same boat, you need to look into your TA architecture. The key to
high-ROI TA is the minimization of artifacts (i.e., the number of TA scripts,
the size of TA scripts, the lines of code (steps), cross-platform portability,
and the shareability of TA foundation components such as data sources,
functional libraries and test objects).
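The end-to-end timing idea above can be sketched as a tiny measurement harness. `login_transaction` is a hypothetical stand-in for a real call against the SUL, not a real API.

```rust
use std::time::Instant;

// Hypothetical stand-in for an end-to-end transaction (e.g., a login
// request driven through the SUL front end to the back end).
fn login_transaction() {
    // ... real code would drive the system under load here ...
}

// Average end-to-end transaction time in milliseconds over `n` runs.
fn average_transaction_ms(n: u32) -> f64 {
    let start = Instant::now();
    for _ in 0..n {
        login_transaction();
    }
    start.elapsed().as_secs_f64() * 1000.0 / n as f64
}

fn main() {
    // e.g., check the "one second per login at 1,000 users" budget.
    println!("avg: {:.3} ms", average_transaction_ms(1_000));
}
```

A real performance test would run the transactions concurrently to simulate many users; this only shows the measurement shape.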
Suspected Hacker Faces Money Laundering, Conspiracy Charges
The conspiracy charge carries a possible five-year federal prison sentence and
a $250,000 fine, and the money laundering charge carries a possible 20-year
sentence and a $500,000 fine, according to the Justice Department. Over the
course of several years, the FBI alleges, Antonenko and two unnamed
co-conspirators targeted vulnerable computer networks in order to steal credit
and debit card numbers, expiration dates and other information, according to
the federal indictment unsealed this week. In addition, the three allegedly
stole personally identifiable information from victims, federal prosecutors
say. After selling this data, Antonenko and the other two co-conspirators used
bitcoin, as well as banks, to allegedly launder money and hide the proceeds,
according to the indictment. Edward V. Sapone, a New York-based attorney
representing Antonenko, told Information Security Media Group: "While a
colossal amount of information has been released, size doesn't matter. Often
times, big criminal cases turn on one fact. Here, the facts have not been
tested, as Mr. Antonenko hasn't even been arraigned on an indictment."
Understanding cyber threats to APIs
API access is often loosely-controlled, which can lead to undesired exposure.
Ensuring that the correct set of users has appropriate access permissions for
each API is a critical security requirement that must be coordinated with
enterprise identity and access management (IAM) systems. In some environments,
as much as 90% of the respective application traffic (e.g., account login or
registration, shopping cart checkout) is generated by automated bots.
Understanding and managing traffic profiles, including differentiating good
bots from bad ones, is necessary to prevent automated attacks without blocking
legitimate traffic. Effective complementary measures include implementing
whitelist, blacklist, and rate-limiting policies, as well as geo-fencing
specific to use-cases and corresponding API endpoints. APIs simplify attack
processes by eliminating the web form or the mobile app, thus allowing a bad
actor to more easily exploit a targeted vulnerability. Protecting API
endpoints from business logic abuse and other vulnerability exploits is thus a
key API security mitigation requirement. Preventing data loss over exposed
APIs for appropriately privileged users or otherwise, either due to
programming errors or security control gaps, is also a critical security
requirement.
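One of the complementary measures named above, rate limiting, can be sketched as a fixed-window counter keyed by client identifier. This `RateLimiter` type is a minimal illustration under assumed semantics, not a production design (real deployments would also need eviction of stale entries and distributed state).

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// A minimal fixed-window rate limiter keyed by client identifier
// (e.g., API key or source IP).
struct RateLimiter {
    window: Duration,
    limit: u32,
    counters: HashMap<String, (Instant, u32)>, // (window start, count)
}

impl RateLimiter {
    fn new(window: Duration, limit: u32) -> Self {
        Self { window, limit, counters: HashMap::new() }
    }

    // Returns true if the request is allowed within the current window.
    fn allow(&mut self, client: &str) -> bool {
        let now = Instant::now();
        let entry = self.counters.entry(client.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) > self.window {
            *entry = (now, 0); // window expired: start a fresh one
        }
        entry.1 += 1;
        entry.1 <= self.limit
    }
}

fn main() {
    let mut limiter = RateLimiter::new(Duration::from_secs(60), 2);
    for i in 1..=3 {
        println!("request {i}: allowed = {}", limiter.allow("10.0.0.1"));
    }
}
```

Whitelist/blacklist and geo-fencing policies would sit alongside this check, consulted per endpoint before the counter is even incremented.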
A CEO's View Of Data Quality And Governance
To thrive, you must be intentional with your data strategy, by both knowing
and trusting the data upon which your organization relies. Knowing your data
implies a governance program. In specific terms, it involves harvesting and
managing metadata, or "data about the data." Metadata management is a key
tenet of a data governance program. Without it, you can only rely on tribal
knowledge within your organization. Yet ever-growing volumes of data have
rendered that not only impractical, but nearly impossible. Trusting your data
implies validating and monitoring the quality and state at the point in time
and place where you apply it in your business processes. Most likely your
competitors are racing to achieve data management superiority ahead of you.
Doing so is vital to not only mitigate regulatory compliance risk, but also to
engage customers and stakeholders, optimize the outcome of key business
initiatives, such as shortening the time required for new product launch, and
apply analytics to inform daily decisions and long-term strategy. Making sure
that your data strategy supports your business is a new responsibility for
senior leadership.
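"Validating and monitoring quality at the point of use" can be sketched as a simple rule check run before a business process consumes a record. `CustomerRecord` and the two rules here are illustrative assumptions, not a real governance framework.

```rust
// Hypothetical record consumed by a business process.
struct CustomerRecord {
    email: String,
    age: i32,
}

// Return the quality rules this record violates; an empty list
// means the record is fit for use at this point in the process.
fn quality_issues(r: &CustomerRecord) -> Vec<&'static str> {
    let mut issues = Vec::new();
    if !r.email.contains('@') {
        issues.push("email missing '@'");
    }
    if !(0..=130).contains(&r.age) {
        issues.push("age out of plausible range");
    }
    issues
}

fn main() {
    let r = CustomerRecord { email: "nope".to_string(), age: 200 };
    for issue in quality_issues(&r) {
        println!("quality issue: {issue}");
    }
}
```

In a governance program these rules would be driven by managed metadata rather than hard-coded, so the checks evolve with the data catalog.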
Predictions for Data Science and AI
When it comes to data science and AI, we’re expecting continued growth of
specialization in these roles. Divided into two categories, ‘engineering-heavy’
and ‘science-heavy’ data science roles focus on different aspects of the data
science space. ‘Engineering-heavy’ roles include machine learning, AI, and data
engineers, focusing on the infrastructure, platforms, and production systems.
‘Science-heavy’ data science roles include analytics consultants, data
scientists, and business analytics specialists, focusing on decision and inquiry
work. Demand for these roles is increasing rapidly as AI models move to
production on a daily basis. This trend toward specialization will continue to
grow over the next few years. Being multi-talented certainly garners a lot of
attention; however, recruiting those who excel and specialize in one field
will help you greatly in the long term (when you’re investing in your startup).
More Tools, More Confusion
In this digital age, you can easily find a plethora of tools available to complete any technical task. Although there are several ways of approaching a particular problem, the sheer range of choices causes great confusion amongst people new to the industry.
Quote for the day:
"Success seems to be connected to action. Successful people keep moving.
They make mistakes, but they don't quit." -- Conrad Hilton