7 ways to harden your environment against compromise
Running legacy operating systems increases your exposure to attacks that
exploit long-standing vulnerabilities. Where possible, look to decommission or
upgrade legacy Windows operating systems. Legacy protocols can also increase risk:
older file-share technologies are a well-known attack vector for ransomware but
are still in use in many environments. In this incident, many systems, including
Domain Controllers, had not been patched recently, which greatly aided the
attacker’s movement across the environment. As part of helping customers, we
look at the most important systems and make sure they are running the most
up-to-date protocols they can support, to further harden the environment. As the
saying goes, “collection is not detection.” On many engagements, the attacker’s
actions are clear and obvious in event logs. The common problem is that no one
looks at them on a day-to-day basis or understands what normal looks like.
Unexplained changes to event logs, such as deletion or retention changes, should
be considered suspicious and investigated.
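To make the event-log advice concrete, here is a hedged sketch (assuming a Windows host with the built-in wevtutil tool and Python available; Event ID 1102 is the Security log's standard "audit log was cleared" record) that surfaces recent log-clearing events for investigation:

```python
import subprocess

# Query the Windows Security log for Event ID 1102 ("The audit log was cleared").
# Assumes the built-in wevtutil utility and sufficient privileges to read the log.
QUERY = "*[System[(EventID=1102)]]"

result = subprocess.run(
    ["wevtutil", "qe", "Security", f"/q:{QUERY}", "/f:text", "/c:10", "/rd:true"],
    capture_output=True, text=True,
)

if result.returncode != 0:
    print("Query failed:", result.stderr.strip())
elif result.stdout.strip():
    # Any hit here means someone cleared the audit log -- treat it as suspicious.
    print("Recent log-clearing events found:\n", result.stdout)
else:
    print("No log-clearing events in the Security log.")
```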
Robocorp Makes Robotic Process Automation Programmable
Robocorp Lab creates a separate Conda environment for each of your robots,
keeping your robot and its dependencies isolated from the other robots and
dependencies on your system. That enables you to control the exact versions of
the dependencies you need for each of your robots. It offers RCC, a set of tools
that allows you to create, manage, and distribute Python-based self-contained
automation packages, along with the robot.yaml configuration file, for building and
sharing automations. Control Room provides a dashboard to centrally control
and monitor automations across teams, target systems or clients. It offers the
ability to scale with security, governance, and control. There are two options
for Control Room: a cloud version and a self-managed version for private cloud
or on-premises deployment. The platform allows users to write extensions or
customizations in Python, something Karjalainen says proprietary systems
restrict, and to extend automations with third-party tools for AI, machine
learning, optical character recognition, or natural language understanding.
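As a hedged sketch of what such a Python-based automation package might contain (the file name, the robot.yaml task wiring, and the conda.yaml dependency pinning described in the comments follow Robocorp's documented layout and are assumptions, not details from this article):

```python
# tasks.py -- a minimal, self-contained automation that RCC could package.
# Assumed layout (not from the article): robot.yaml points a task's shell
# command at this script, and conda.yaml pins the exact dependency versions
# for the robot's isolated Conda environment.
import csv
from pathlib import Path

OUTPUT_DIR = Path("output")  # artifacts directory, typically declared in robot.yaml


def produce_report() -> None:
    """Write a small result artifact that Control Room can collect after a run."""
    OUTPUT_DIR.mkdir(exist_ok=True)
    rows = [("order_id", "status"), ("1001", "processed"), ("1002", "processed")]
    with open(OUTPUT_DIR / "report.csv", "w", newline="", encoding="utf-8") as handle:
        csv.writer(handle).writerows(rows)


if __name__ == "__main__":
    produce_report()
```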
How Your Application Architecture Has Evolved
Distributed infrastructure on the cloud is great, but there is one problem: it is
very unpredictable and difficult to manage compared to a handful of servers in
your own data center. Running an application in a robust manner on distributed
cloud infrastructure is no joke. A lot of things can go wrong. An instance of
your application or a node on your cluster can silently fail. How do you make
sure that your application can continue to run despite these failures? The
answer is microservices. A microservice is a very small application that is
responsible for one specific use case, much as in service-oriented architecture,
but it is completely independent of other services. It can be developed using
any language and framework and can be deployed in any environment, whether
on-prem or on the public cloud. Additionally, multiple instances can easily run
on a number of different servers in different regions to provide parallelism and
high availability.
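As a hedged illustration of the idea (Flask is an assumption; the excerpt names no framework), a single-purpose service with a health endpoint shows how independent, stateless instances can be probed and replaced when they silently fail:

```python
# A minimal sketch of a single-purpose microservice, using Flask as one of many
# possible frameworks. Each instance is stateless, so several copies can run
# behind a load balancer in different regions for high availability.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    # Orchestrators (e.g. a cluster scheduler) can poll this endpoint to detect
    # silently failed instances and restart or reroute around them.
    return jsonify(status="ok")


@app.route("/orders/<order_id>")
def get_order(order_id):
    # The single use case this service owns; other concerns live in other services.
    return jsonify(order_id=order_id, status="shipped")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```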
Satellites Can Be a Surprisingly Great Option for IoT
IoT technologies tend to have a few qualities in common. They're designed to be
low-power, so that the batteries on IoT devices aren't sapped with every
transmission. They also tend to be long-range, to cut down on the amount of
other infrastructure required to deploy a large-scale IoT project. And they're
usually fairly robust against interference, because if there are dozens,
hundreds, or even thousands of devices transmitting, messages can't afford to be
garbled by one another. As a trade-off, they typically don't support high data
rates, which is a fair concession to make for many IoT networks' smart metering
needs. ... Advancements in satellites are only accelerating the possibilities
opened up by putting IoT technologies into orbit. Chief among those advancements
is the CubeSat revolution, which is both shrinking and standardizing satellite
construction. "We designed all the satellites when we were four people, and by
the time we launched, we were about 10 people," says Longmier. "And that wasn't
possible five years before we started."
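To see why low data rates are an acceptable trade-off for smart metering, a quick back-of-the-envelope budget helps; all figures below are illustrative assumptions, not numbers from the article:

```python
# Back-of-the-envelope data budget for a smart meter on a low-rate link.
# All figures are illustrative assumptions.
reading_bytes = 12          # one meter reading payload
readings_per_day = 96       # one reading every 15 minutes
link_rate_bps = 300         # a deliberately modest uplink rate

daily_bytes = reading_bytes * readings_per_day
airtime_per_reading_s = reading_bytes * 8 / link_rate_bps

print(f"Data per day: {daily_bytes} bytes")                   # 1152 bytes
print(f"Airtime per reading: {airtime_per_reading_s:.2f} s")  # 0.32 s
```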
Tech giants unite to drive ‘transformational’ open source eBPF projects
“It will be the responsibility of the eBPF Foundation to validate and certify
the different runtime implementations to ensure portability of applications.
Projects will remain independently governed, but the foundation will provide
access to resources to foster all projects and organize maintenance and further
development of the eBPF language specification and the surrounding supporting
projects.” The new foundation serves as further evidence that open source is now
the accepted model for cross-company collaboration, playing a major part in
bringing the tech giants of the world together. Sarah Novotny, Microsoft’s open
source lead for the Azure Office of the CTO, recently said that open source
collaboration projects can enable big companies to bypass much of the lawyering
to join forces in weeks rather than months. “A few years ago if you wanted to
get several large tech companies together to align on a software initiative,
establish open standards, or agree on a policy, it would often require several
months of negotiation, meetings, debate, back and forth with lawyers … and did
we mention the lawyers?” she said. “Open source has completely changed this.”
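For readers unfamiliar with what an eBPF application looks like in practice, here is a minimal sketch using the BCC toolkit, one of several eBPF front ends (it requires Linux, the bcc Python package, and root privileges; this is an illustration, not part of the foundation's announcement):

```python
# A minimal eBPF program attached with the BCC toolkit.
from bcc import BPF

# The C snippet below is compiled to eBPF bytecode and run safely in the kernel.
program = r"""
int hello(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=program)
# Attach to the clone syscall; get_syscall_fnname resolves the platform-specific name.
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()  # stream kernel trace output until interrupted
```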
The Importance of Properly Scoping Cloud Environments
A CSP should be viewed as a partner in protecting payment data, rather than
assumed to have taken on all responsibility for it. The use of a CSP for
payment-security-related services does not relieve an organization of the
ultimate responsibility for its own security obligations, or for ensuring that
its payment data and payment environment are secure. Much of
this misunderstanding comes from simply not including payment data security as
part of the conversation and how requirements, such as those in PCI DSS, will be
met. ... Third-Party Service Provider Due Diligence: When selecting a CSP,
organizations should vet CSP candidates through careful due diligence prior to
establishing a relationship, and should reach an explicit understanding of which
entity will assume management and oversight of security. This will assist organizations in
reviewing and selecting CSPs with the skills and experience appropriate for the
engagement.
The Difference Between Data Scientists and ML Engineers
The majority of the work performed by Data Scientists is in the research
environment. In this environment, Data Scientists perform tasks to better
understand the data so they can build models that will best capture the data’s
inherent patterns. Once they’ve built a model, the next step is to evaluate
whether it meets the project's desired outcome. If it does not, they will
iteratively repeat the process until the model meets the desired outcome before
handing it over to the Machine Learning Engineers. Machine Learning Engineers
are responsible for creating and maintaining the Machine Learning infrastructure
that permits them to deploy the models built by Data Scientists to a production
environment. Machine Learning Engineers therefore typically work in the
development environment, where they are concerned with reproducing the machine
learning pipeline built by Data Scientists in the research environment, and in
the production environment, where the model is made accessible to other software
systems and/or clients.
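A minimal sketch of the research-side artifact handed over in this workflow, using scikit-learn and joblib as assumed tooling (the excerpt names no specific libraries):

```python
# Research environment: iterate on a pipeline until it meets the desired outcome,
# then hand the serialized artifact to the ML Engineer to reproduce and serve.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import joblib

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print("held-out accuracy:", pipeline.score(X_test, y_test))

# Handoff: the production environment loads this artifact and exposes it to
# other software systems, e.g. behind an API.
joblib.dump(pipeline, "model.joblib")
```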
A remedial approach to destructive IoT hacks
Automating security is critical to scaling IoT technologies without the need to
scale headcount to secure them. Manual inventory, patching, and credential
management of just one device takes four man-hours per year; if an organization
has 10,000 devices, that nets out to 40,000 man-hours per year to keep those
devices secure. This is an impossible number of working hours unless the
business has a staff of 20 dedicated to the cause. To continuously secure the
thousands, or even tens of thousands, of devices on an organization’s networks,
automation is necessary. With the mass scale of IoT devices and the
opportunities to strike in every office and facility, automated identification
and inventory of each device is crucial, so that security teams can understand
how it communicates with other devices, systems, and applications, and which
people have access to it. Once devices are identified, automation technology
allows for policy compliance and enforcement by patching firmware and updating
passwords, defending your IoT devices as thoroughly as your other endpoints.
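The staffing figure follows directly from the numbers quoted; a quick check, assuming roughly 2,000 working hours per full-time employee per year (an assumption, not stated in the excerpt):

```python
# Reproducing the staffing arithmetic from the excerpt.
hours_per_device_per_year = 4
device_count = 10_000
fte_hours_per_year = 2_000   # assumed full-time-equivalent hours per year

total_hours = hours_per_device_per_year * device_count   # 40,000 man-hours
staff_needed = total_hours / fte_hours_per_year          # 20 dedicated people

print(f"{total_hours} hours per year -> about {staff_needed:.0f} full-time staff")
```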
Malicious Docker Images Used to Mine Monero
These malicious containers are designed to easily be misidentified as official
container images, even though the Docker Hub accounts responsible for them are
not official accounts. "Once they are running, they may look like an innocent
container. After running, the binary xmrig is executed, which hijacks
resources for cryptocurrency mining," the researchers note. Morag says social
engineering techniques could be used to trick someone into using these
container images. "I guess you will never log in to the webpage mybunk[.]com,
but if the attacker sent you a link to this namespace, it might happen," he
says. "The fact is that these container images accumulated 10,000-plus pulls,
each." While it is unclear who’s behind the scheme, the Aqua Security
researchers found that the malicious Docker Hub account was taken down after
Docker was notified by Aqua Security, according to the report. Morag explains
that these containers are not directly controlled by a hacker, but there's a
script at entrypoint/cmd that is aimed to execute an automated attack. In this
case, the attacks were limited to hijacking computing resources to mine
cryptocurrency.
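One simple precaution this suggests is to inspect an image's configured entrypoint and cmd before running it. The hedged sketch below shells out to the Docker CLI; the image name is a placeholder, and it assumes the CLI is installed and the image has already been pulled:

```python
import json
import subprocess

# Inspect an image's configured Entrypoint and Cmd before running it, since the
# attack described here fires from the entrypoint/cmd script.
IMAGE = "example/suspicious-image:latest"  # placeholder image name

raw = subprocess.run(
    ["docker", "inspect", "--format", "{{json .Config}}", IMAGE],
    capture_output=True, text=True, check=True,
).stdout

config = json.loads(raw)
print("Entrypoint:", config.get("Entrypoint"))
print("Cmd:       ", config.get("Cmd"))
# Anything unexpected here (e.g. a script that launches xmrig) is a reason not to run it.
```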
Leveraging the Agile Manifesto for More Sustainability
Often the first thing that comes to mind is the “sustainable pace,” as pointed
out by the 8th principle of the Agile Manifesto: “Agile processes promote
sustainable development. The sponsors, developers, and users should be able to
maintain a constant pace indefinitely.” So, sustainability in this sense will
ensure people will not be burned out by an insane deadline. Instead, a
sustainable pace ensures a delivery speed that can be kept up for an infinite
time. This understanding of sustainability falls into the profit perspective
of the triple bottom line. Another way sustainability is often understood in
the agile community is by focusing on sustaining agility in companies. This
means that agility and/or agile development will govern the work even after, for
example, external consultants and trainers are gone. The focus is then on how
to build a sustainable agile culture or on sustainable agile transformations.
Over all these years, the Agile Manifesto has served me well in providing
guidance, even in areas it was not originally defined for.
Quote for the day:
"Leaders dig into their business to
learn painful realities rather than peaceful illusion." --
Orrin Woodward