Web Shells Gain Sophistication for Stealth, Persistence
One reason attackers have taken to Web shells is their ability to stay under the radar. Web shells are hard to detect with static analysis techniques because the files and code are so easy to modify. Moreover, because Web shell traffic is just HTTP or HTTPS, it blends right in, making it hard to detect with traffic analysis, says Akamai's Zavodchik. "They communicate on the same ports, and it's just another page of the website," he says. "It's not like the classic malware that will open the connection back from the server to the attacker. The attacker just browses the website. There's no malicious connection, so no anomalous connections go from the server to the attacker." In addition, because there are so many off-the-shelf Web shells, attackers can use them without tipping off defenders as to their identity. The WSO-NG Web shell, for instance, is available on GitHub. And Kali Linux, an open source Linux distribution focused on providing easy-to-use tools for red teams and offensive operations, ships with 14 different Web shells, giving penetration testers the ability to upload and download files, execute commands, and create and query databases and archives.
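To illustrate the static-analysis gap described above, here is a minimal, hypothetical Python sketch of a signature-based scanner that flags PHP files containing function calls commonly associated with Web shells. The pattern list and the web-root path are assumptions for the example, not anything from the article, and the point is precisely that this kind of naive matching is easy to evade once an attacker renames, concatenates, or encodes those calls.

```python
import re
from pathlib import Path

# Hypothetical signature list: function calls commonly seen in PHP Web shells.
# Trivial obfuscation (string concatenation, base64, variable functions)
# defeats naive matching like this, which is the detection gap the article describes.
SUSPICIOUS_PATTERNS = [
    r"\beval\s*\(",
    r"\bsystem\s*\(",
    r"\bshell_exec\s*\(",
    r"\bpassthru\s*\(",
    r"base64_decode\s*\(",
]

def scan_webroot(webroot: str) -> list[tuple[str, str]]:
    """Flag PHP files under the web root that match any suspicious pattern."""
    hits = []
    for path in Path(webroot).rglob("*.php"):
        source = path.read_text(errors="ignore")
        for pattern in SUSPICIOUS_PATTERNS:
            if re.search(pattern, source, re.IGNORECASE):
                hits.append((str(path), pattern))
    return hits

if __name__ == "__main__":
    for file, pattern in scan_webroot("/var/www/html"):  # example path
        print(f"possible web shell indicator: {file} matched {pattern}")
```

A real detection pipeline would have to pair this kind of signature matching with behavioral signals, such as which pages issue operating-system commands, because the file contents alone are too easy to change.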
Will More Threat Actors Weaponize Cybersecurity Regulations?
Based on what has been disclosed thus far, the breach sounds relatively minor, but ALPHV’s SEC complaint throws the company into the spotlight. “The SEC won’t take a criminal’s word, but the spotlight is harsh. ALPHV's motives seem less about ransom, more about setting a precedent that intimidates,” Ferhat Dikbiyik, Ph.D., head of research at cyber risk monitoring company Black Kite, tells InformationWeek via email. “MeridianLink's challenge now is to navigate this tightrope of disclosure and investigation, all while under the public and regulatory microscope.” Dikbiyik points out that ALPHV’s SEC complaint suggests that the group may have ties in the US. The group demonstrates a strong command of English and knowledge of American corporate culture, he explains. Its knowledge of the American regulatory system is particularly indicative of potential stateside ties. “ALPHV's clear English on the dark web could be AI, but their quick SEC rule exploit? That suggests boots on the ground,” says Dikbiyik.
‘Digital Twin Brain’ Could Bridge Artificial and Biological Intelligence
“Cutting-edge advancements in neuroscience research have revealed the intricate
relationship between brain structure and function, and the success of artificial
neural networks has highlighted the importance of network architecture,” wrote
the team. “It is now time to bring these together to better understand how
intelligence emerges from the multi-scale repositories in the brain. By
mathematically modeling brain activity, a systematic repository of the
multi-scale brain network architecture would be very useful for pushing the
biological boundary of an established model.” As that systematic repository, the
team’s digital twin brain (DTB) would be capable of simulating various states of
the human brain in different cognitive tasks at multiple scales, in addition to
helping formulate methods for altering the state of a malfunctioning brain. ...
“The advantages of this research approach lie in the fact that these methods not
only simulate [biologically plausible] dynamic mechanisms of brain diseases at
the neuronal scale, at the level of neural populations, and at the brain region
level, but also perform virtual surgical treatments that are impossible to
perform in vivo owing to experimental or ethical limitations.”
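For readers unfamiliar with what simulating activity "at the level of neural populations" involves, the sketch below implements a generic Wilson-Cowan-style excitatory/inhibitory population model in Python. It is offered only as an illustration of population-level dynamics; it is not the DTB team's model, and every parameter value here is arbitrary.

```python
import numpy as np

def wilson_cowan(T=1.0, dt=1e-3, w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
                 P=1.25, Q=0.0, tau_e=0.01, tau_i=0.02):
    """Simulate one excitatory/inhibitory population pair (Wilson-Cowan style).

    Returns time points and the excitatory/inhibitory firing-rate traces.
    All parameters are illustrative, not taken from the DTB work.
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    steps = int(T / dt)
    E = np.zeros(steps)  # excitatory population rate
    I = np.zeros(steps)  # inhibitory population rate
    E[0], I[0] = 0.1, 0.05
    for t in range(steps - 1):
        dE = (-E[t] + sigmoid(w_ee * E[t] - w_ei * I[t] + P)) / tau_e
        dI = (-I[t] + sigmoid(w_ie * E[t] - w_ii * I[t] + Q)) / tau_i
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return np.arange(steps) * dt, E, I

time, exc, inh = wilson_cowan()
print(f"mean excitatory rate: {exc.mean():.3f}")
```

A whole-brain model in this spirit would typically couple many such population units through a connectivity matrix, one unit per brain region, which is roughly where the multi-scale repository of network architecture the team describes would come in.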
How hybrid cloud and edge computing can converge in your disaster recovery strategy
Hybrid cloud and edge computing are not mutually exclusive. There has been
significant growth in hybrid solutions, distributing computing intelligently to
combine the benefits of cloud and edge. A bespoke hybrid approach with proper
planning and management can enhance your business’s DR strategy. Hybrid cloud’s
scalability allows businesses to allocate additional cloud resources during a
disaster. These resources can stand in for failed edge platforms and devices, keeping the critical applications and systems the business relies on running while easing the pressure on the recovery process. The speed benefits of dedicated resources in a hybrid cloud solution are multiplied when combined with the reduced latency and enhanced availability of edge computing. Edge devices can process data locally and cache essential data that can be replicated to a cloud platform for recovery in case of a disaster. Processing on the edge and transmitting key information to the cloud can enrich your data and inform your DR planning.
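As a purely illustrative sketch of that pattern, the Python below shows an edge node that processes readings locally, keeps a bounded cache of essential results, and periodically pushes a compact summary toward a cloud endpoint that could seed recovery if the edge site fails. The EdgeNode class, the replicate method, and the endpoint URL are all hypothetical, not taken from any product mentioned in the article.

```python
import json
import time
from collections import deque

CLOUD_ENDPOINT = "https://dr.example.com/ingest"  # hypothetical DR endpoint

class EdgeNode:
    """Process data locally, cache essentials, and replicate summaries to the cloud."""

    def __init__(self, cache_size=1000):
        self.cache = deque(maxlen=cache_size)  # bounded local cache of essential records

    def process(self, reading: dict) -> dict:
        # Local processing keeps latency low; only the derived result is cached.
        result = {"ts": time.time(), "value": reading["value"] * 2}  # placeholder transform
        self.cache.append(result)
        return result

    def replicate(self):
        # Ship a compact summary to the cloud so it can seed disaster recovery.
        summary = {
            "count": len(self.cache),
            "latest": list(self.cache)[-10:],  # only the most recent records
        }
        payload = json.dumps(summary)
        # In a real deployment this would be an authenticated HTTPS call,
        # e.g. requests.post(CLOUD_ENDPOINT, data=payload); printed here for the sketch.
        print(f"replicating {len(payload)} bytes to {CLOUD_ENDPOINT}")

node = EdgeNode()
for v in range(5):
    node.process({"value": v})
node.replicate()
```

The design choice is the one the excerpt argues for: the edge handles latency-sensitive work, while the cloud holds enough replicated state to rebuild or stand in for a failed edge site.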
Gaining Leadership Support for Data Governance
There is no better way to showcase positive business outcomes than by tracking
the ways in which good governance can help tackle obstacles over time. The most
obvious of such tracking methods is a data audit. Though an audit may be
slightly daunting in terms of its invasiveness in operations, it can be
indispensable in uncovering lapses in data quality and risky security gaps in
storage and retention. You can cover much of the same territory more informally
– and less invasively – through interviews and surveys with stakeholders in the
company. With a more open-ended, personalized intake of challenges in
governance, these modes of recording can capture the nuances that arise in data
integration and glitches in system compatibility, and they’re more likely to
harvest the sorts of idiosyncratic insights that might fall through the cracks
of a formal audit. Indeed, while Seiner advocates for methods of recording that
fall on the more facts-and-figures end of the spectrum – single-issue tracking,
analytics, and monitoring – he finds that “one of the most successful ways of
doing assessments is simply to talk to people.”
Optimizing Risk Transfer for Systematic Cyberresilience
As cyberthreats loom large, enterprises of all sizes are increasingly
recognizing the need for cyberinsurance. Cyberinsurance offers financial
protection and support in the event of cyberattacks or data breaches. It is
predicted that by 2040, the cyberrisk transfer market will become comparable in
size to property insurance. However, navigating the cyberinsurance market can be
complex and daunting. Understanding the key considerations and making informed
decisions are crucial to ensuring adequate coverage and effective risk
management. ... In this context, alternative risk transfer solutions such as the
use of captive fronting are emerging as crucial tools for managing and
transferring cyberrisk. By leveraging a captive solution, enterprises can
enhance their cyberresilience, mitigate potential financial losses and navigate
cyberinsurance more effectively. Captives help increase the attachment point for
the insurance market and act as a solution to cover gaps in the insurance
market’s capacity. Insurers are increasingly encouraging the use of captives for
cyber.
6 green coding best practices and how to get started
While opting for SaaS dev and test tools may be generally more efficient than
installing them to run on servers, cloud apps can still suffer from bloat.
Modern DevSecOps tools often create full test environments and run all automated
checks on every commit. They can also run full security scans, code linters and
complexity analyzers, and stand up entire databases in the cloud. When the team
merges the code, they do it all over again. Some systems run on a delay and
automatically restart, perhaps on dozens of monitored branches. Observability
tools that monitor everything can lead to processing bloat and network saturation.
For example, imagine a scenario where the team activates an observability system
for all testing. Each time network traffic occurs, the observability system
messages a server about what information goes where -- essentially doubling the
test traffic. The energy consumed is essentially wasted. At best, the test
servers run slowly for little benefit. At worst, the production servers are also
affected, and outages occur.
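One way to rein in that kind of waste, sketched below under the assumption of a simple in-house tracing hook rather than any particular vendor's API, is to sample observability events in test environments instead of reporting every single call. The traced decorator, the emit_trace function, and the sample rate are hypothetical.

```python
import functools
import random

TRACE_SAMPLE_RATE = 0.01  # report roughly 1% of events in test environments

def traced(func):
    """Decorator that emits a trace event for only a sampled fraction of calls."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        if random.random() < TRACE_SAMPLE_RATE:
            emit_trace(func.__name__)  # hypothetical hook to the observability backend
        return result
    return wrapper

def emit_trace(name: str) -> None:
    # Stand-in for a real exporter; in production this would batch and send spans.
    print(f"trace: {name}")

@traced
def handle_request(payload: dict) -> dict:
    return {"ok": True, **payload}

for _ in range(200):
    handle_request({"id": 1})
```

Sampling a small fraction can still surface systemic problems during test runs while avoiding most of the doubled traffic described above.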
Australia ups ante on cyber security
“The government’s ‘health check’ programme announcement is a valiant effort –
the true test will be how it goes about educating the right people across an
extremely diverse SMB landscape. ‘Concierge-style’ support only goes so far,
particularly if it doesn’t know where to go, and businesses don’t understand why
to seek it out. “The problem is SMBs don’t know how to start conversations, nor
who to turn to. Working alone makes the cost of cyber security defences
untenable, but it doesn’t have to be this way. Your local florist, corner store,
or even the grassroots neighbourhood startup can contribute to building
Australia’s resilience; they need the education to know why and how to be
government compliant, fight increasing cyber insurance premium costs, and
protect their customers’ PII [personally identifiable information] data.” On the
law enforcement side, Operation Aquila will be stepped up to target the highest
priority cybercrime threats affecting Australia, and increased global
cooperation will be sought to address cybercrime, particularly through regional
forums such as the Pacific Islands Law Officers’ Network and the ASEAN Senior
Officials Meeting on Transnational Crime.
CISA Roadmap for AI Cybersecurity: Defense of Critical Infrastructure, “Secure by Design” AI Prioritized
The first “line of effort” is a pledge to responsibly use AI to support the
mission, establishing governance and adoption procedures primarily for federal
agencies. Already at the head of federal cybersecurity programs, CISA will be
the conduit for the development of processes from safety to procurement to
ethics and civil rights. In terms of privacy and security, the agency will be
adopting the NIST AI Risk Management Framework (RMF). The agency is also
creating an AI Use Case Inventory to be used in mission support, and to
responsibly and securely deploy new systems. The second line of effort
directly addresses security by design. This is another area in which
establishment and use of the RMF will be a key step, and assessing the AI
cybersecurity risks in critical infrastructure sectors is the first item on
the menu. This process also appears to involve early engagement with
stakeholders in critical infrastructure sectors. Software Bills of Materials
(SBOMs) for AI systems will also be a requirement in some capacity, though
CISA is in an “evaluation” phase at this point.
How to Work with Your Auditors to Influence a Better Audit Experience
Remember, auditing with agility is a flexible, customizable audit approach
that leverages concepts from agile and DevOps to create a more value-added and
efficient audit. There are three core components to auditing with agility:
- Value-driven auditing, where the scope of audit work is driven by what’s most important to the organization
- Integrated auditing, where audit work is integrated with your daily work
- Adaptable auditing, where audits become nimble and can adapt to change
Each core component has practices
associated with it. For example, practices associated with value-driven
auditing include satisfying stakeholders through value delivery. In my book,
Beyond Agile Auditing, I state that stakeholders "value audit work that is
focused on the highest, most relevant risks and the areas that are important
to achieving the organization’s objectives.[1]" As an auditor, I like to ask
my clients questions like "What absolutely needs to go right for you (or your
business) to be successful?" or "What can’t go wrong for you (or your
business) to be successful?" I do this to help identify what matters and what
is most valuable to my client’s business.
Quote for the day:
“Good manners sometimes means simply
putting up with other people's bad manners.” --
H. Jackson Brown, Jr.