Discord CDN and API Abuses Drive Wave of Malware Detections
Discord’s CDN is being abused to host malware, while its API is being leveraged
to exfiltrate stolen data and facilitate hacker command-and-control channels,
Sophos added. Because Discord is heavily trafficked by younger gamers playing
Fortnite, Minecraft and Roblox, a lot of the malware floating around amounts to
little more than pranking, such as the use of code to crash an opponent’s game,
Sophos explained. But the spike in info stealers and remote access trojans is
more alarming, it added. “But the greatest percentage of the malware we
found have a focus on credential and personal information theft, a wide variety
of stealer malware as well as more versatile RATs,” the report said. “The threat
actors behind these operations employed social engineering to spread
credential-stealing malware, then use the victims’ harvested Discord credentials
to target additional Discord users.” The team also found outdated malware
including spyware and fake app info stealers being hosted on the Discord CDN.
The sixth sense of a successful leader
A Sixth Sense-endowed Leader must possess a highly developed awareness of
what needs to be done, how it needs to be done and when it needs to be done,
while simultaneously anticipating the needs of the people involved in the
task and continuously visualising the anticipated outcome. For successful
employment of sixth sense the Leader needs to work on the Higher Intellect
plane. This does not preclude the Leader from seeking material gains, for that
is the ultimate aim of any business. However, the Leader needs to weigh the
anticipated gains against likely social and environment degradation. Similarly,
the Leader needs to be steeped in definable values and ethics, which in turn act
as the Sixth Sense Pillar. This Pillar will be the fulcrum enabling the Leader
to leverage gains beyond cognitive reasoning, and to attain the status of a
Karma Yogi. The Sixth Sense Leader, a true Karma Yogi, empowers self to develop:
– Vision to create rather than await opportunity, by tapping dimensional
awareness of the future.
– Analytical and risk-acceptance capability, through the capacity to subtly
induce change in the energy fields impacting the mission.
Why Data Management Needs An Aggregator Model
As enterprises shift to a hybrid multicloud architecture, they can no longer
manage data within each storage silo, search for data within each storage silo
and pay a heavy cost to move data from one silo to another. As GigaOm analyst
Enrico Signoretti pointed out: "The trend is clear: The future of IT
infrastructures is hybrid ... [and] it requires a different and modern approach
to data management." Another key reason an aggregator model for data management
is needed is that customers want to extract value from their data. Analyzing
and searching unstructured data depends on what is called "metadata":
information about the data itself. Metadata is like an electronic
fingerprint of the data. For example, a photo on your phone might have
information about the time and location when it was taken as well as who was in
it. Metadata is very valuable, as it is used to search, find and index different
types of unstructured data. Since storage business models are built on owning
the data, storage vendors move only some blocks to the cloud when moving
data, rather than all of it.
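As a concrete illustration of the metadata idea above, here is a minimal
Python sketch that reads a photo's EXIF tags with the Pillow library; the
filename and the specific tags printed are hypothetical examples, not part of
the original article.

```python
# A minimal sketch of reading photo metadata (EXIF), assuming Pillow is installed.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return a dict of human-readable EXIF tags for an image file."""
    img = Image.open(path)
    exif = img.getexif()  # may be empty if the file carries no EXIF data
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Hypothetical usage: surface the capture time and camera model that a
# metadata-driven search index might use to find and classify the photo.
meta = read_exif("vacation.jpg")
print(meta.get("DateTime"), meta.get("Model"))
```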
Next-Gen Data Pipes With Spark, Kafka and k8s
In Lambda Architecture, there are two main layers – Batch and Speed. The
first one transforms data in scheduled batches whereas the second is
responsible for near real-time data processing. The batch layer is typically
used when the source system sends the data in batches, access to the entire
dataset is needed for required data processing, or the dataset is too large
to be handled as a stream. In contrast, stream processing suits small
packets of high-velocity data, where the packets are either mutually
independent or packets in close vicinity form a context. Naturally, both
types of data processing are computation-intensive, though the memory
requirement of the batch layer is higher than that of the stream layer.
Architects look for solution patterns that are elastic, fault-tolerant,
performant, cost-effective, flexible, and, last but not least, distributed.
... Lambda
architecture is complex because it has two separate components for handling
batch and stream processing of data. The complexity can be reduced if one
single technology component can serve both purposes.
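To make that last point concrete, a minimal PySpark sketch of one engine
serving both layers might look like the following; Spark Structured Streaming
is one candidate for such a unifying component, and the storage paths, broker
address, topic name, and message schema here are hypothetical (the Spark
Kafka connector package must also be on the classpath).

```python
# A minimal sketch: the same transformation reused for batch and speed layers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("unified-pipeline").getOrCreate()

def transform(df):
    # Identical business logic applied to both batch and streaming inputs.
    return df.groupBy("user_id").agg(F.count("*").alias("event_count"))

# Batch layer: scheduled processing over the full historical dataset.
batch_df = spark.read.parquet("s3://example-bucket/events/")
transform(batch_df).write.mode("overwrite").parquet("s3://example-bucket/out/")

# Speed layer: the same transform over a near real-time Kafka stream,
# assuming JSON-encoded messages with a "user_id" field.
schema = StructType([StructField("user_id", StringType())])
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())
events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))
(transform(events)
 .writeStream
 .outputMode("complete")  # streaming aggregations need complete/update mode
 .format("console")
 .start())
```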
Moving fast and breaking things cost us our privacy and security
Tokenized identification puts the power in the user’s hands. This is crucial
not just for workplace access and identity, but for a host of other, even
more important reasons. Tokenized digital IDs are encrypted and can only be
used once, making it nearly impossible for anyone to view the data included
in the digital ID should the system be breached. It’s like Signal, but for
your digital IDs. As even more sophisticated technologies roll out, more
personal data will be produced (and that means more data is vulnerable).
It’s not just our driver’s licenses, credit cards or Social Security numbers
we must worry about. Our biometrics and personal health-related data, like
our medical records, are increasingly online and accessed for verification
purposes. Encrypted digital IDs are incredibly important because of the
prevalence of hacking and identity theft. Without tokenized digital IDs, we
are all vulnerable. We saw what happened with the Colonial Pipeline
ransomware attack recently. It crippled a large portion of the U.S. pipeline
system for weeks, showing that critical parts of our infrastructure are
extremely vulnerable to breaches.
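As a rough illustration of the single-use property described above, here is a
minimal Python sketch of minting and redeeming one-time identity tokens; the
in-memory store, names, and TTL are hypothetical stand-ins for a real token
service, which would add encryption, persistence, and auditing.

```python
# A minimal sketch of single-use ID tokens; everything here is illustrative.
import secrets
import time

_tokens = {}  # token -> (subject, expiry); a real system would use a database

def issue_token(subject: str, ttl_seconds: int = 60) -> str:
    """Mint an opaque random token that stands in for the real identity data."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = (subject, time.time() + ttl_seconds)
    return token

def redeem_token(token: str):
    """Validate and consume a token: it can be redeemed exactly once."""
    entry = _tokens.pop(token, None)  # pop() enforces the single-use property
    if entry is None:
        return None  # unknown, forged, or already-used token
    subject, expiry = entry
    return subject if time.time() <= expiry else None
```

Because the token is random and consumed on first use, intercepting it in a
breach reveals nothing reusable about the underlying ID.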
Agile at 20: The Failed Rebellion
In some ways, Agile was a grassroots labor movement. It certainly started
with the practitioners on the ground and got pushed upwards into management.
How did this ever succeed? It’s partially due to developers growing in
number and value to their businesses, gaining clout. But the biggest factor,
in my view, is that the traditional waterfall approach simply wasn’t
working. As software got more complicated and the pace of business
accelerated and the sophistication of users rose, trying to plan everything
up front became impossible. Embracing iterative development was logical, if
a bit scary for managers used to planning everything. I remember meetings in
the mid-2000s where you could tell management wasn’t really buying it, but
they were out of ideas. What the hell, let’s try this crazy idea the
engineers keep talking about. We’re not hitting deadlines now. How much
worse can it get? Then to their surprise, it started working, kind of, in
fits and starts. Teams would thrash for a while and then slowly find their
legs, discovering what patterns worked for that individual team, picking up
momentum.
Is Consciousness Bound by Quantum Physics? We're Getting Closer to Finding Out
We're not yet able to measure the behavior of quantum fractals in the brain
– if they exist at all. But advanced technology means we can now measure
quantum fractals in the lab. In recent research involving a scanning
tunneling microscope (STM), my colleagues at Utrecht and I carefully
arranged electrons in a fractal pattern, creating a quantum fractal. When we
then measured the wave function of the electrons, which describes their
quantum state, we found that they too lived at the fractal dimension
dictated by the physical pattern we'd made. In this case, the pattern we
used on the quantum scale was the Sierpiński triangle, which is a shape
that's somewhere between one-dimensional and two-dimensional. This was an
exciting finding, but STM techniques cannot probe how quantum particles move
– which would tell us more about how quantum processes might occur in the
brain. So in our latest research, my colleagues at Shanghai Jiaotong
University and I went one step further. Using state-of-the-art photonics
experiments, we were able to reveal the quantum motion that takes place
within fractals in unprecedented detail.
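As a quick aside on the "somewhere between one-dimensional and
two-dimensional" point above, the Sierpiński triangle's similarity dimension
can be computed directly: a shape made of N self-similar copies, each scaled
down by a factor s, has dimension log(N)/log(s).

```python
# The Sierpinski triangle consists of 3 self-similar copies at half scale.
import math

dim = math.log(3) / math.log(2)
print(f"Sierpinski triangle dimension ≈ {dim:.3f}")  # ≈ 1.585: between 1D and 2D
```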
How Deepfakes Are Powering a New Type of Cyber Crime
Cybercriminals are always quick to leap onto any bandwagon that they can use
to improve or modernize their attacks. Audio fakes are becoming so good that
definitively identifying them requires a spectrum analyzer, and AI systems
have been developed to spot deepfake videos. If manipulating
images lets you weaponize them, imagine what you can do with sound and video
fakes that are good enough to fool most people. Crimes involving faked
images and audio have already happened. Experts predict that the next wave
of deepfake cybercrime will involve video. The working-from-home,
video-call-laden “new normal” might well have ushered in the new era of
deepfake cybercrime. An old phishing email attack involves sending an email
to the victim, claiming the sender has a video of them in a compromising or
embarrassing position and threatening that, unless payment is received in
Bitcoin, the footage will be sent to their friends and colleagues. Scared
there might be such a
video, some people pay the ransom.
5 Steps to Improving Ransomware Resiliency
Enterprises need robust endpoint data protection and system security. This
includes antivirus software and even whitelisting software that permits only
approved applications to run. Enterprises need both an active element of
protection and a reactive element of recovery. Companies
hit with a ransomware attack can spend five days or longer recovering from
an attack, so it’s imperative that companies are actively implementing the
right backup and recovery strategies before a ransomware attack. Black hats
who are developing ransomware are trying to close off any escape route that
would spare an enterprise from having to pay the ransom. ... We urge
organizations to
implement a more comprehensive backup and recovery approach based on the
National Institute of Standards and Technology (NIST) Cybersecurity
Framework. It includes a set of best practices: Using immutable storage,
which prevents ransomware from encrypting or deleting backups; implementing
in-transit and at-rest encryption to prevent bad actors from compromising
the network or stealing your data; and hardening the environment by enabling
firewalls that restrict ports and processes.
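As one concrete (and hedged) illustration of the immutable-storage practice
above, here is a minimal boto3 sketch that writes a backup object under S3
Object Lock; the bucket, key, file name, and retention window are
hypothetical, and the bucket must have been created with Object Lock enabled
for these parameters to take effect.

```python
# A minimal sketch of immutable backups via S3 Object Lock (boto3).
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")

with open("backup-2021-08-01.tar.gz", "rb") as f:  # hypothetical backup archive
    s3.put_object(
        Bucket="example-backup-bucket",
        Key="backups/backup-2021-08-01.tar.gz",
        Body=f,
        # COMPLIANCE mode: the object cannot be overwritten or deleted by any
        # user, including root, until the retention date passes.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```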
This Week in Programming: Kubernetes from Day One?
“To move to Kubernetes, an organization needs a full engineering team just
to keep the Kubernetes clusters running, and that’s assuming a managed
Kubernetes service and that they can rely on additional infrastructure
engineers to maintain other supporting services on top of, well, the
organization’s actual product or service,” they write. While this is part of
StackOverflow’s reasoning — “The effort to set up Kubernetes is less than
you think. Certainly, it’s less than the effort it would take to refactor
your app later on to support containerization.” — Ably argues that “it seems
that introducing such an enormously expensive component would merely move
some of our problems around instead of actually solving them.” Meanwhile,
another blog post this week argues that Kubernetes is our generation’s
Multics, again centering on this idea of complexity. Essentially, the
argument here is that Kubernetes is “a serious, respectable, but overly
complex system that will eventually be replaced by something simpler: the
Unix of distributed operating systems.” Well then, back to Unix it is!
Quote for the day:
"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis