The future of IoT: 5 major predictions for 2021
COVID-19 certainly continues to plague the globe, and the research predicts
that connected device makers will double their efforts in healthcare. But COVID-19
forced many of those who were ill to stay at home or delay necessary care.
This has left chronic conditions unmanaged, cancers undetected, and
preventable conditions unnoticed. "The financial implications of this
loom large for consumers, health insurers, healthcare providers, and
employers." Forrester's report stated. There will be a surge in interactive
and proactive engagement, such as wearables and sensors that can monitor a
patient's health while they are at home. Post-COVID-19 healthcare will be
dominated by digital-health experiences and will improve the effectiveness of
virtual care. The convenience of at-home monitoring will spur consumers'
appreciation and interest in digital health devices as they gain greater
insight into their health. Digital health device prices will become more
consumer friendly. The Digital Health Center of Excellence, established
by the FDA, is foundational for the advancement and acceptance of digital
health. A connected health-device strategy devised by healthcare insurers will
tap into data to improve understanding of patient health, personalization, and
healthcare outcomes.
A Bazar start: How one hospital thwarted a Ryuk ransomware outbreak
We’ve been following all the recent reporting and tweets about hospitals being
attacked by Ryuk ransomware. But Ryuk isn’t new to us… we’ve been tracking it
for years. More important than just looking at Ryuk ransomware itself, though,
is looking at the operators behind it and their tactics, techniques, and
procedures (TTPs)—especially those used before they encrypt any data. The
operators of Ryuk ransomware are known by different names in the community,
including “WIZARD SPIDER,” “UNC1878,” and “Team9.” The malware they use has
included TrickBot, Anchor, Bazar, Ryuk, and others. Many in the community have
shared reporting about these operators and malware families (check out the end
of this blog post for links to some excellent reporting from other teams), so
we wanted to focus narrowly on what we’ve observed: BazarLoader/BazarBackdoor
(which we’re collectively calling Bazar) used for initial access, followed by
deployment of Cobalt Strike, and hours or days later, the potential deployment
of Ryuk ransomware. We have certainly seen TrickBot lead to Ryuk ransomware in
the past. This month, however, we’ve observed Bazar as a common initial access
method, leading to our assessment that Bazar is a greater threat at this time
for the eventual deployment of Ryuk.
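To make that detection opportunity concrete, here is a minimal hunting sketch in Python over endpoint detections. The event schema (host, timestamp, and malware-family labels produced by upstream rules) and the time window are entirely hypothetical; the point is simply flagging hosts where a Bazar-style detection is followed by Cobalt Strike activity within hours or days.

```python
from datetime import datetime, timedelta

# Hypothetical detections emitted by upstream rules: (host, time, malware family).
# In practice these would come from your EDR or SIEM, not a hard-coded list.
detections = [
    ("HOST-01", datetime(2020, 10, 26, 9, 14), "bazar"),
    ("HOST-01", datetime(2020, 10, 26, 11, 2), "cobalt_strike"),
    ("HOST-02", datetime(2020, 10, 26, 10, 0), "cobalt_strike"),
]

# Hours-to-days window between initial access and follow-on activity,
# mirroring the progression described above.
WINDOW = timedelta(days=2)

def hosts_matching_chain(events):
    bazar_seen = {}  # host -> earliest Bazar detection time
    flagged = set()
    for host, ts, family in sorted(events, key=lambda e: e[1]):
        if family == "bazar":
            bazar_seen.setdefault(host, ts)
        elif family == "cobalt_strike" and host in bazar_seen:
            if ts - bazar_seen[host] <= WINDOW:
                flagged.add(host)
    return flagged

print(hosts_matching_chain(detections))  # {'HOST-01'}
```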
Getting started with DevOps automation
We often think of the term “DevOps” as being synonymous with “CI/CD”. At
GitHub we recognize that DevOps includes so much more, from enabling
contributors to build and run code (or deploy configurations) to improving
developer productivity. In turn, this shortens the time it takes to build and
deliver applications, helping teams add value and learn faster. While CI/CD
and DevOps aren’t precisely the same, CI/CD is still a core component of
DevOps automation. Continuous integration (CI) is a process that implements
testing on every change, enabling users to see if their changes break anything
in the environment. Continuous delivery (CD) is the practice of building
software in a way that allows you to deploy any successful release candidate
to production at any time. Continuous deployment (CD) takes continuous
delivery a step further. With continuous deployment, every successful change
is automatically deployed to production. Since some industries and
technologies can’t immediately release new changes to customers (think
hardware and manufacturing), adopting continuous deployment depends on your
organization and product. Together, continuous integration and continuous
delivery (commonly referred to as CI/CD) create a collaborative process for
people to work on projects through shared ownership.
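As a rough illustration of these distinctions, here is a minimal pipeline sketch in Python. The test, build, and deploy commands are placeholders standing in for whatever your CI system (GitHub Actions, for example) actually invokes; the only difference between continuous delivery and continuous deployment is whether the final deploy step runs automatically or waits on a human decision.

```python
import subprocess

def run(cmd):
    """Run a shell command and fail loudly, as a CI runner would."""
    print(f"$ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

def pipeline(auto_deploy: bool):
    # Continuous integration: every change is built and tested.
    run("pytest")                       # placeholder test command
    run("docker build -t app:rc .")     # placeholder build step

    # Continuous delivery: every successful build yields a deployable artifact.
    run("docker push registry.example/app:rc")  # hypothetical registry

    # Continuous deployment: the artifact also ships to production automatically.
    if auto_deploy:
        run("kubectl set image deploy/app app=registry.example/app:rc")
    else:
        print("Release candidate ready; deploy when you choose.")

if __name__ == "__main__":
    pipeline(auto_deploy=False)  # continuous delivery; True for deployment
```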
Challenges in operationalizing a machine learning system
Once data is gathered and explored, it is time to perform feature engineering
and modeling. While some methods require strong domain knowledge to make
sensible feature engineering decisions, others can learn
significantly from the data. Models such as logistic regression, random
forests, or deep learning techniques are then trained.
There are multiple steps involved here and keeping track of experiment
versions is essential for governance and reproducibility of previous
experiments. Hence, having both the tooling and an IDE for managing experiments,
whether via Jupyter notebooks, scripts, or other approaches, is essential. Such tools require
provisioning of hardware and proper frameworks to allow data scientists to
perform their jobs optimally. After the model is trained and performing well,
in order to leverage the output of this machine learning initiative, it is
essential to deploy the model into a product, whether that is in the cloud or
directly “on the edge”. ... If you have a large set of inputs that you would
like to get predictions on, without any immediate latency requirements, you can
run batch inference on a regular cycle or with a trigger.
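As a sketch of what that batch cycle might look like, the snippet below loads a trained model and scores a file of inputs in one pass. The model format, file paths, and scheduling mechanism are all assumptions for illustration; in production this would typically be a cron job or a workflow-orchestrator task fired when new data arrives.

```python
import pickle
import pandas as pd

def run_batch_inference(model_path: str, input_path: str, output_path: str):
    # Load the trained model artifact (pickle used here purely for illustration).
    with open(model_path, "rb") as f:
        model = pickle.load(f)

    # Score the whole batch at once; no per-request latency constraint,
    # assuming the input columns match the model's expected features.
    batch = pd.read_csv(input_path)
    batch["prediction"] = model.predict(batch)

    # Persist predictions for downstream consumers.
    batch.to_csv(output_path, index=False)

# Invoked on a regular cycle (e.g., nightly cron) or by a trigger such as
# a new file landing in storage. All paths are hypothetical.
run_batch_inference("model.pkl", "inputs.csv", "predictions.csv")
```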
The CFO's guide to data management
"New technologies using machine learning, natural language processing, and
advanced analytics can help finance leaders fix or work around many data
problems without the need for large-scale investment and company-wide
upheaval,'' Deloitte said. In fact, such technologies are already being
used to help improve corporate-level forecasting, automate
reconciliations, streamline reporting, and generate customer and financial
insights, according to the firm. Why are CFOs getting involved in data
management? "Business decisions based on insights derived from data are
now critical to organizational performance and are becoming an essential
part of a company's DNA," explained Victor Bocking, managing director,
Deloitte Consulting LLP, in a statement. "CFOs and other C-level
executives are getting more directly involved, partnering with their CIOs
and CDOs [chief data officers] in leading the data initiatives for the
parts of the business they are responsible for." As companies generate
more and more data each day, finance teams have seemingly limitless
opportunities to glean new insights and boost their value to the business.
But doing that is easier said than done, the firm noted. The problem is
the amount of data emanating daily from various sources can be
overwhelming. Deloitte's Finance 2025 series calls this "the data
tsunami."
Can automated penetration testing replace humans?
To answer this question, we need to understand how they work, and
crucially, what they can’t do. While I’ve spent a great deal of the past
year testing these tools and comparing them in like-for-like tests against
a human pentester, the big caveat here is that these automation tools are
improving at a phenomenal rate, so depending on when you read this, it may
already be out of date. First of all, the “delivery” of the pen test is
done by either an agent or a VM, which effectively simulates the
pentester’s laptop and/or attack proxy plugging into your network. So far,
so normal. The pentesting bot will then perform reconnaissance on its
environment, running the same scans a human would: where you would often have
a human pentester perform a vulnerability scan with their tool of choice, or
just a ports-and-services sweep with Nmap or Masscan, the bot does the
equivalent. Once they’ve
established where they sit within the environment, they will filter
through what they’ve found, and this is where their similarities to
vulnerability scanners end. Vulnerability scanners will simply list a
series of confirmed and potential vulnerabilities, with no context as to
their exploitability, and will regurgitate CVE references and CVSS scores.
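For a sense of what that initial reconnaissance step looks like, here is a minimal ports-and-services sweep in Python, a stripped-down stand-in for what Nmap or Masscan does far more capably. The target address and port list are placeholders, and this kind of scan should only ever be run against hosts you are authorized to test.

```python
import socket

def sweep(host: str, ports: list[int], timeout: float = 0.5):
    """Report which TCP ports accept a connection, the crudest form of recon."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Placeholder target (TEST-NET address) and a handful of common service ports.
print(sweep("192.0.2.10", [22, 80, 443, 445, 3389]))
```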
'Credible threat': How to protect networks from ransomware
Ransomware attacks are becoming more rampant now that criminals have
learned they are an effective way to make money in a short amount of time.
Attackers do not even need any programming skills to launch an attack
because they can obtain code that is shared among the many hacker
communities. There are even services that will collect the ransom via
Bitcoin on behalf of the attackers and just require them to pay a
commission. This all makes it more difficult for the authorities to
identify an attacker. Many small and medium-size businesses pay ransoms
because they do not back up their data and have no other options
available to recover it. They sometimes face the decision of
either paying the ransom or being forced out of business ... To avoid
becoming a ransomware victim, organizations need to protect their
network now and prioritize resources. These attacks will only continue to
grow, and no organization wants to be portrayed in the media as having been
forced to pay a ransom. If you are forced to pay, customers can lose trust
in your organization’s ability to secure their personal data and the
company can see decreases in revenue and profit.
4 Types Of Exploits Used In Penetration Testing
Stack Based Exploits - This is possibly the most common sort of exploit
for remotely hijacking the code execution of a process. Stack-based buffer
overflow exploits are triggered when more data is written to a buffer on the
stack than the buffer can hold, overwriting adjacent memory. The stack refers
to a chunk of process memory, a data structure that operates LIFO (last in,
first out). Attackers can force malicious code onto the stack and redirect the
program’s flow so that it executes the code the attacker intends to run,
typically by overwriting the return pointer so that control passes to the
malicious code. Integer Bug
Exploits - Integer bugs occur due to programmers not foreseeing the
semantics of C operations, which are often found and exploited by threat
actors. The difference between integer bugs and other exploitation types
is that they are often exploited indirectly, yet their security costs are
just as critical. Because an integer bug is triggered indirectly, it lets
an attacker corrupt other parts of memory and gain control over an
application. Even if you resolve malloc
errors, buffer overflows, or even format string bugs, many integer
vulnerabilities would still be rendered exploitable.
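Because Python integers do not overflow, the fixed-width arithmetic behind these bugs is simulated below with NumPy's 32-bit unsigned integers. The scenario is hypothetical but classic: a size calculation wraps around, so a later allocation ends up far smaller than the data written into it, which is exactly the indirect path to memory corruption described above.

```python
import numpy as np

# Classic pattern: buffer size computed as count * element_size in 32-bit math.
count = np.uint32(0x40000001)   # attacker-supplied element count
elem_size = np.uint32(4)

needed = count * elem_size       # wraps modulo 2**32
print(int(needed))               # 4, not 4294967300: the allocation will be tiny

# In C, malloc(needed) would return a 4-byte buffer, and the subsequent loop
# writing `count` elements would scribble far past the end of it.
```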
AI-Enabled DevOps: Reimagining Enterprise Application Development
AI and ML play a key role in accelerating digital transformation across
use cases – from data gathering and management to analysis and insight
generation. Enterprises that have adopted AI and ML effectively are better
positioned to enhance productivity and improve the customer experience by
swiftly responding to changing business needs. DevOps teams can leverage
AI for seamless collaboration, incident management, and release delivery.
They can also quickly iterate and personalize application features via
hypothesis-driven testing. For instance, Tesla recently enhanced its cars’
performance through over-the-air updates without having to recall a single
vehicle. Similarly, periodic performance updates to biomedical devices can
help extend their shelf-life and improve patient care significantly. These
are just a few examples of how AI-enabled DevOps can foster innovation to
drive powerful outcomes across industries. DevOps teams can innovate using
the next-gen, cost-effective AI and ML capabilities offered by major cloud
providers like AWS, Microsoft Azure, and Google Cloud. They offer access
to virtual machines with all required dependencies to help data scientists
build and train models on high-powered GPUs for demand and load forecasting,
text/audio/video analysis, fraud prevention, etc.
What the IoT Cybersecurity Improvement Act of 2020 means for the future of connected devices
With a constant focus on innovation in the IoT industry, oftentimes security
is overlooked in order to rush a product onto shelves. By the time devices
are ready to be purchased, important details like vulnerabilities may not
have been disclosed throughout the supply chain, leaving sensitive data
exposed to exploitation. To date, many companies have been hesitant to
publish these weak spots in their device security in order to keep them under
wraps and keep competitors and hackers at bay. However, the bill now
mandates that contractors and subcontractors involved in developing and selling
IoT products to the government have a program in place to report
vulnerabilities and their subsequent resolutions. This is key to increasing
end-user transparency on devices and will better inform the government on
risks found in the supply chain, so they can update guidelines in the bill
as needed. For the future of securing connected devices, multiple
stakeholders throughout the supply chain need to be held accountable for
better visibility and security to guarantee adequate protection for
end-users.
Quote for the day:
"The great leaders have always stage-managed their effects." -- Charles de Gaulle