Healthcare Organisations Embrace New Technologies to Fortify Cyber Defences
Healthcare organisations have begun partnering with peers to develop security
operations centres (SOCs) that monitor their traffic and identify threats.
Proactive programs like threat hunting and brand monitoring are also gaining
traction. ... These initiatives reflect CERT-In's requirement to report cyber
incidents within six hours, as well as new obligations under the Digital
Personal Data Protection Act, 2023, which requires organisations to identify
the sources of their data, obtain consent, and manage the use and eventual
destruction of data in line with government guidelines. “Investments in
advanced IAM technologies are becoming
paramount, encompassing robust authentication methods, privileged access
controls, and continuous monitoring of user activities,” says Pramod Bhaskar,
CISO, Cross Identity. These measures align closely with regulatory changes and
compliance requirements, as regulations like HIPAA increasingly emphasise the
importance of secure user authentication, access governance, and audit trails in
safeguarding patient information.
The Window of Exposure: A Critical Component of Your Cybersecurity Strategy
The goal of any responsible security professional is to reduce the window of
exposure as much as possible. There are two basic approaches to this: limiting
the amount of vulnerability information available to the public and reducing the
window of exposure in time by issuing patches quickly. Limiting the amount of
vulnerability information available to the public might work in theory, but it
is impossible to enforce in practice. There is a continuous stream of research
in security vulnerabilities, and most of this research results in public
announcements. Hackers write new attack exploits all the time, and the exploits
quickly end up in the hands of malicious attackers. While some researchers might
choose not to publish a vulnerability they discover, public dissemination of
vulnerability information is the norm because it is the best way to improve
security. Reducing the window of exposure in time by issuing patches quickly is
the other approach. Full-disclosure proponents publish vulnerabilities far and
wide to spur vendors to patch faster.
MLflow vulnerability enables remote machine learning model theft and poisoning
Many developers believe that services bound to localhost — a computer’s internal
hostname — cannot be targeted from the internet. However, this is an incorrect
assumption according to Joseph Beeton, a senior application security researcher
at Contrast Security, who recently held a talk on attacking developer
environments through localhost services at the DefCamp security conference.
Beeton recently found serious vulnerabilities in the Quarkus Java framework and
MLflow that allow remote attackers to exploit features in the development
interfaces or APIs exposed by those applications locally. The attacks would only
require the computer user to visit an attacker-controlled website in their
browser or a legitimate site where the attacker managed to place specially
crafted ads. Drive-by attacks have been around for many years, but they are
powerful when combined with a cross-site request forgery (CSRF) vulnerability in
an application. In the past, hackers used drive-by attacks through malicious ads
placed on websites to hijack the DNS settings of users’ home routers.
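The reason localhost binding is not a security boundary is that the victim's own browser also runs on that machine: any page it renders can make it fire requests at 127.0.0.1. The following minimal Python sketch (a hypothetical toy server, not MLflow's actual API) shows how a local service that never checks the Origin header or a CSRF token processes such a request as if it were trusted:

```python
# Hypothetical localhost-bound server with no CSRF protection.
# A drive-by web page could cause the victim's browser to send the
# same kind of request; the server cannot tell the difference.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class UnprotectedHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and discard the body, then act on the request without
        # validating Origin or any anti-CSRF token.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"state changed")

    def log_message(self, *args):
        pass  # silence request logging

def demo():
    # Bind to loopback only, on an ephemeral port.
    server = HTTPServer(("127.0.0.1", 0), UnprotectedHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Simulate the cross-origin request a malicious page would trigger:
    # note the foreign Origin header, which the server ignores.
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/api/update",
        data=b"{}",
        headers={"Origin": "http://attacker.example"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        status, body = resp.status, resp.read()
    server.shutdown()
    return status, body
```

A simple mitigation is to reject state-changing requests whose Origin header is not an expected local value, or to require a random token that a foreign page cannot read.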
Chameleon Android Trojan Offers Biometric Bypass
The variant includes several new features that make it even more dangerous to
Android users than its previous incarnation, including a new ability to
interrupt the biometric operations of the targeted device, the researchers said.
By forcing a fallback from biometric unlocking (facial recognition or
fingerprint scans, for example) to PIN or password authentication, attackers
can capture PINs, passwords, or graphical keys through keylogging
functionality, and later unlock devices using the stolen credentials. "This functionality to effectively bypass biometric security
measures is a concerning development in the landscape of mobile malware,"
according to ThreatFabric's analysis. ... The malware's key new ability to
disable biometric security on the device is enabled by issuing the command
"interrupt_biometric," which executes the "InterruptBiometric" method. The
method uses Android's KeyguardManager API and AccessibilityEvent to assess the
device screen and keyguard status, evaluating the state of the latter in terms
of various locking mechanisms, such as pattern, PIN, or password.
The Rise of AI-Powered Applications: Large Language Models in Modern Business
AI and LLMs have fundamentally altered how people and organizations interact
with technology. While they drive innovation and automation across multiple
sectors simultaneously, they also change how professionals make decisions and
communicate with customers. They have redefined industry-specific domains while
enhancing industrial growth and innovation potential. With further development
and research, it is only a matter of time before these AI-driven models can
replicate the qualities of human speech and interaction. There is no certainty
as to the extent of AI developments and capabilities. While the potential for
innovation and development seems endless, AI’s rapid growth in business and
industry proves that developers have only reached the tip of the iceberg. As AI
functionalities become faster and more proficient, the healthcare, education,
and financial service industries will thrive further and deliver trustworthy,
reliable care and services for patients, students, and customers worldwide.
Because LLMs offer operational support in data and analytics, there will be cost
savings as professionals transfer their time and efforts elsewhere.
NIST Seeks Public Comment on Guidance for Trustworthy AI
This is the first time there has been an "affirmative requirement" for companies
developing foundational models that pose a serious risk to national security,
economic security, public health or safety to notify the federal government when
training their models, and to share the results of red team safety tests, said
Lisa Sotto, partner at Hunton Andrews Kurth and chair of the firm's global
privacy and cybersecurity practice. This will have a "profound" impact on the
development of AI models in the United States, she told Information Security
Media Group. While NIST does not directly regulate AI, it helps develop
frameworks, standards, research and resources that play a significant role in
informing the regulation and the technology's responsible use and development.
Its artificial intelligence risk management framework released earlier this year
seeks to provide a comprehensive framework for managing risks associated with AI
technologies. Its recent report on bias in AI algorithms seeks to help
organizations develop potential mitigation strategies, and the Trustworthy and
Responsible AI Resource Center, launched in March, is a central repository for
information about NIST's AI activities.
Why laptops and other edge devices are being AI-enabled
You can run AI models in the cloud, but as well as the inevitable latency this
involves, it’s also increasingly costly both in terms of network bandwidth and
cloud compute costs. There’s also the governance issue of sending all that
potentially-sensitive and bulky data to and fro. So at the very least, doing a
first-cut and filter to reduce and/or sanitise the transmitted data volume is
valuable in all sorts of ways. You could use the GPU or even the CPU to do this
filtering, and indeed that’s what some edge devices will be doing today.
Alternatively you could simply run the inferencing work on the local CPU or GPU
in your laptop or desktop. That works, but it’s slower. Not only can dedicated
AI hardware such as an NPU do the job much faster, it will also be much more
power-efficient. GPUs and CPUs doing this sort of work tend to run very hot, as
evidenced by the big heatsinks and fans on high-end GPUs. That power-efficiency
is useful in a desktop machine, but is much more valuable when you’re running an
ultraportable on battery, yet you still want AI-enhanced videoconferencing,
speedy photo editing, or smoother gaming and AR.
Future of wireless technology: Key predictions for 2024
New IoT technology will help unify connectivity across multiple home devices,
transforming home users’ experience with IoT devices. Matter, a new industry
standard launched in 2023, provides reliable, secure connectivity across multiple
device manufacturers. Given the weight of players involved (e.g., Apple, Amazon,
Google, Samsung SmartThings), we expect the adoption of Matter-certified
products will be exponential in the next three years, validating Wi-Fi’s central
role in the smart connected home and buildings. Pilot projects and trials of TIP
Open Wi-Fi will proliferate in developing countries and price-sensitive markets
due to its cost-effectiveness and the benefits offered by an open disaggregated
model. Well-established wireless local-area network (WLAN) vendors will continue
working to make themselves more cost-effective in these markets through massive
investment in machine learning and AI and an integrated Wi-Fi + 5G offering to
enterprises. Augmented and virtual reality will gain a larger share of our daily
lives at home and work.
What developers trying out Google Gemini should know about their data
Google told ZDNET that it uses the API inputs and outputs to improve product
quality. "Human review is a necessary step of the model improvement process," a
spokesperson said. "Through review and annotation, trained reviewers help enable
quality improvements of generative machine-learning models like the ones that
power Google AI Studio and the Gemini Pro via the Gemini API." To protect
developers' privacy, Google said their data is de-identified and disassociated
from their API key and Google account, which is needed to log in to Google AI
Studio. This protection takes place before the reviewers can see or
annotate the data. Google's Terms of Service (ToS) for its generative AI APIs
further states that the data is used to "tune models" and may be retained in
connection to the user's tuned models "[for] re-tuning when supported models
change". The ToS states: "When you delete a tuned model, the related tuning data
is also deleted." The terms also state that users should not submit sensitive,
confidential, or personal data to the AI models.
14 in-demand cloud roles companies are hiring for
As cloud computing grows increasingly complex, cloud architects have become a
vital role for organizations to navigate the implementation, migration, and
maintenance of cloud environments. These IT pros can also help organizations
avoid potential risks around cloud security, while ensuring a smooth transition
to the cloud across the company. With 65% of IT decision-makers choosing
cloud-based services by default when upgrading technology, cloud architects will
only become more important for enterprise success. ... The DevOps role focuses on
blending IT operations with the development process to improve IT systems and
act as a go-between in maintaining the flow of communication between coding and
engineering teams. It’s a role that focuses on the deployment of automated
applications, maintenance of IT and cloud infrastructure ... Security architects
are responsible for building, designing, and implementing security solutions in
the organization to keep IT infrastructure secure. For security architects
working in a cloud environment, the focus is on designing and implementing
security solutions that protect the business’ cloud-based infrastructure, data,
and applications.
Quote for the day:
"The meaning of life is to find your
gift. The purpose of life is to give it away." -- Anonymous