AI-generated deepfake fraud drives public appetite for biometrics: FIDO Alliance
“People don’t need to be tech-savvy; the tools are easily accessible online.
Deepfakes are as easy as self-service, and this accessibility introduces a
significant risk to organizations. How can financial institutions protect
themselves against, well, themselves?” The answer, he says, is reliable
biometric detection capable of running digital video against biometrically
captured data to weed out digital replicas. “Protecting against deepfakes
includes layering your processes with multiple checks and balances, all designed
to make it increasingly complicated for fraudsters to pull off a successful
scam.” For user identity and accessibility checks, he says it is essential to
offer “seamless biometric identity verification systems that don’t feel
intrusive but do offer increased trust.” “Companies need a strict onboarding
process that asks for both biometric and physical proof of identity; that way,
security systems can immediately verify someone’s identity. This includes the
use of liveness detection and deepfake detection – ensuring a real person is at
the end of the camera – and ensuring secure and accurate information
authentication and encryption.”
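A rough Python sketch of that layered approach follows; the check functions and their contents are hypothetical placeholders, not any vendor’s detection API.

# Layered onboarding checks: every layer must pass before an
# identity is accepted. All functions are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class OnboardingEvidence:
    selfie_video: bytes   # live capture from the camera
    id_document: bytes    # scan of the physical proof of identity

def liveness_check(evidence: OnboardingEvidence) -> bool:
    # A real system would analyze motion, depth and texture cues
    # to ensure a real person is at the end of the camera.
    return True

def deepfake_check(evidence: OnboardingEvidence) -> bool:
    # A real detector would score the video for signs of digital
    # replicas (face-swap boundaries, generative artifacts, etc.).
    return True

def document_match(evidence: OnboardingEvidence) -> bool:
    # Compare the biometric capture against the photo and data on
    # the physical ID document.
    return True

def onboard(evidence: OnboardingEvidence) -> bool:
    # Checks and balances in layers: a fraudster must defeat all
    # of them to pull off a successful scam.
    checks = (liveness_check, deepfake_check, document_match)
    return all(check(evidence) for check in checks)

print(onboard(OnboardingEvidence(selfie_video=b"...", id_document=b"...")))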
The State of DevOps in the Enterprise
Unfortunately, few, if any, sites have fully automated DevOps solutions that can
keep pace with Agile, no-code and low-code application development -- although
everyone has a vision of one day achieving improved infrastructure automation
for their applications and systems. ... Infrastructure as code is a method
that enables IT to pre-define IT infrastructure for certain types of
applications that are likely to be created. By predefining and standardizing
the underlying infrastructure components for running new applications on
Linux, for instance, you can ensure repeatability and predictability of
performance of any application deployed on Linux, which will speed
deployments. ... If you’re moving to more operational automation and methods
like DevOps and IaC that serve as back-ends to applications in Agile, no-code
and low-code development, cross-disciplinary teams of end users, application
developers, QA, system programmers, database specialists and network
specialists must work together in an iterative approach to application
development, deployment and maintenance.
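To make the predefine-and-standardize idea concrete, here is a minimal Python sketch; the template fields and the Linux profile values are illustrative assumptions, not any particular IaC tool’s schema.

# Infrastructure as code as pre-defined, standardized templates.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class InfraTemplate:
    os_image: str
    cpu_cores: int
    memory_gb: int
    monitoring: bool

# One standardized profile for all new Linux applications gives
# repeatable, predictable performance and speeds deployments.
LINUX_APP = InfraTemplate(os_image="ubuntu-22.04", cpu_cores=4,
                          memory_gb=16, monitoring=True)

def provision(app_name: str, template: InfraTemplate) -> dict:
    # A real pipeline would hand this declarative spec to a
    # provisioning tool; here we simply return it.
    return {"app": app_name, **asdict(template)}

print(provision("billing-service", LINUX_APP))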
A Blueprint for the Future: Automated Workflow Design
Given the multitude of processes organizations manage, the ability to edit
existing workflows, or to start not from scratch but from a best-practice
template assisted by generative AI, holds considerable potential. I believe
this represents another significant step toward enterprise autonomy. This is
apt, as Blueprint fits neatly into Pega’s messaging, which is centred on the
concept of the autonomous enterprise. ... In the future, we could see
Process Intelligence (PI) integrated with templates and generative AI,
pushing the automation of the design process even further. PI identifies
which workflows need improving and where. By feeding these insights into an
intelligent workflow design tool like Blueprint, we could eventually see
workflows being automatically updated to resolve the identified issues. Over
time, we might even reach a point where a continuous automated process
improvement cycle can be established. This cycle would start with PI
capturing insights and feeding them into a Blueprint-like tool to generate
updated and improved workflows. These would then be fed into an automated
test and deployment platform to complete the improvement, overseen by a
supervising AI or human.
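As a rough sketch of that closed loop, the Python below wires together hypothetical stand-ins for PI, a Blueprint-like designer, and a test-and-deploy platform; none of these functions are real Pega APIs.

# A continuous automated process-improvement cycle, sketched with
# illustrative stand-in functions.
def capture_insights(workflow: dict) -> list[str]:
    # Stand-in for Process Intelligence: identify where the
    # workflow needs improving (e.g., a step breaching its SLA).
    return ["approval step exceeds SLA"]

def redesign(workflow: dict, issues: list[str]) -> dict:
    # Stand-in for a Blueprint-like generative design tool that
    # updates the workflow to resolve the identified issues.
    return {**workflow, "revision": workflow["revision"] + 1}

def test_and_deploy(workflow: dict) -> bool:
    # Stand-in for an automated test and deployment platform,
    # overseen by a supervising AI or human before rollout.
    return True

workflow = {"name": "claims-handling", "revision": 1}
for _ in range(3):  # each pass is one improvement cycle
    issues = capture_insights(workflow)
    if not issues:
        break
    candidate = redesign(workflow, issues)
    if test_and_deploy(candidate):
        workflow = candidate
print(workflow)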
Considerations for AI factories
The new way of thinking, that the “rack is the new server,” enables data
center operators to create a scalable solution by designing at the rack
level. Within a rack, an entire solution for AI training can be
self-contained, with expansion readily available as performance needs grow.
A single rack can contain up to eight servers, each with eight
interconnected GPUs. Then, each GPU can communicate with many other GPUs
located in the rack, as the switches can be contained in the rack. The same
communication can be set up between racks for scaling beyond a single rack,
enabling a single application to use thousands of GPUs. Within an AI
factory, different GPUs can be used. Not all applications or their
agreed-upon SLAs require the fastest GPUs on the market today. Less powerful
GPUs may be entirely adequate for many environments and will typically
consume less electricity. In addition, these very dense GPU servers require
liquid cooling, which works best when the coolant distribution unit (CDU) is
also located within the rack, reducing hose lengths.
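The rack arithmetic above works out as follows in a short Python sketch; the power figures are assumed for illustration only.

# Back-of-envelope rack math from the passage: 8 servers per rack,
# 8 GPUs per server. Power draws are illustrative assumptions.
SERVERS_PER_RACK = 8
GPUS_PER_SERVER = 8

def gpus(racks: int) -> int:
    return racks * SERVERS_PER_RACK * GPUS_PER_SERVER

# One rack is self-contained (64 GPUs); scaling across racks lets
# a single application use thousands of GPUs.
print(gpus(1))    # 64
print(gpus(32))   # 2048 -- "thousands" needs tens of racks

# A less powerful GPU may still meet the SLA at lower power draw.
FAST_GPU_WATTS, MODEST_GPU_WATTS = 700, 350  # assumed values
saving_kw = gpus(1) * (FAST_GPU_WATTS - MODEST_GPU_WATTS) / 1000
print(f"per-rack saving with modest GPUs: {saving_kw:.1f} kW")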
5 Agile Techniques To Help Avoid a CrowdStrike-Like Issue
Agile is exceptionally good at providing a safe playpen for looking around a
project for issues the team may not have focused on initially. It channels
people’s interest into those areas without losing track of resources. By
definition, no one in an organization will spend time considering the
possible outcomes of things they have no experience of. However, by pushing on the
boundaries of a project, even if based only on hunches or experience,
insights arrive. Even if the initial form of a problem cannot be foreseen,
the secondary problems can often be. ... The timebox correctly assumes that
if a solution requires jumping down a deep rabbit hole, then the solution
may not fit within the time constraints of the project (see the sketch after
this excerpt). This is a good
way to understand how no software is an “ultimate solution,” but simply the
right way to do things for now, given the resources available. ... Having
one member of a team question another member is healthy, but can also create
friction. Sometimes the result is just an additional item on a checklist,
but sometimes it can trigger a major rethink of the project as a whole.
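Here is a minimal Python sketch of the timebox idea from this excerpt: explore a candidate solution, but abandon it once the agreed budget expires. The spike function is a hypothetical stand-in.

# Timeboxing an exploratory spike with a hard deadline.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def spike_candidate_solution() -> str:
    time.sleep(10)  # stand-in for a deep rabbit hole
    return "works, eventually"

TIMEBOX_SECONDS = 2
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(spike_candidate_solution)
    try:
        result = future.result(timeout=TIMEBOX_SECONDS)
        print("adopt for now:", result)
    except TimeoutError:
        # Not "the ultimate solution" within our resources:
        # record the finding and move on.
        print("timeboxed out; log the hunch and revisit later")
    # (the sketch still waits for the worker thread on exit;
    # a real spike would simply be abandoned)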
How to review code effectively: A GitHub staff engineer’s philosophy
Code reviews are impactful because they help exchange knowledge and increase
shipping velocity. They are nice, linkable artifacts that peers and managers
can use to show how helpful and knowledgeable you are. They can highlight
good communication skills, particularly if there’s a complex or
controversial change needed. So, making your case well in a code review can
not only guide the product’s future and help stave off incidents, but also
be good for your career. ... As a reviewer, clarity in communication is key.
You’ll want to make clear which of your comments are personal preference and
which are blockers for approval; many reviewers prefix minor comments with
“nit:” to signal this. Provide an example of the approach you’re
suggesting to elevate your code review and make your meaning even clearer.
If you can provide an example from the same repository as the pull request,
even better—that further supports your suggestion by encouraging consistent
implementations. By contrast, poor code reviews lack clarity. For example, a
blanket approval or rejection without any comments can leave the pull
request author wondering if the review was thorough.
Goodbye? Attackers Can Bypass 'Windows Hello' Strong Authentication
Smirnov says his discovery does not indicate that WHfB is insecure. "The
insecure part here is not regarding the protocol itself, but rather how the
organization forces or does not force strong authentication," he says.
"Because what's the point of phishing-resistant authentication if you can
just downgrade it to something that is not phishing-resistant?" Smirnov
maintains that because of how the WHfB protocol is designed, the entire
architecture is phishing resistant. "But since Microsoft, back at the time,
had no way to allow organizations to enforce sign-in using this
phishing-resistant authentication method, you could always downgrade to a
less secure authentication method like password and SMS-OTP,” Smirnov
says. When a user initially registers Windows Hello on their device, the
WHfB authentication mechanism creates a private key credential stored in
the computer’s TPM. The private key is inaccessible to an attacker because
it is sandboxed in the TPM; unlocking it requires a Windows Hello-compatible
biometric or PIN as the sign-in challenge.
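As an illustration of the challenge-response pattern WHfB builds on, the Python sketch below uses an in-memory key via the third-party cryptography package in place of a real TPM-resident credential.

# TPM-style challenge-response sign-in, sketched with an
# in-memory key standing in for hardware-sandboxed storage.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: create a key pair; only the public key leaves the
# "TPM" (the private key would be unlocked by biometric or PIN).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Sign-in: the server issues a fresh nonce as the challenge...
challenge = os.urandom(32)

# ...the device signs it after the user presents biometric/PIN...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies with the registered public key.
# No password crosses the wire, which is why the protocol itself
# is phishing resistant.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified")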
Cybersecurity ROI: Top metrics and KPIs
The overall security posture of an organization can be quantified by
tracking the number and severity of vulnerabilities before and after
implementing security measures. A key indicator is the reduction in
remediation activities while maintaining or improving the security posture.
This can be measured in terms of work hours or effort saved. Traditional
metrics for this measurement include the number of detected incidents, Mean
Time to Detect (MTTD), Mean Time to Respond (MTTR), and patch management
(average time to deploy fixes). Awareness training and measuring phishing
success rates are also crucial. ... Evaluating the cost-effectiveness of
risk mitigation strategies is paramount. This includes comparing the costs
of various security measures against the potential losses from security
incidents, and tying that figure back to patch management, measured against
the number of vulnerabilities remediated. With modern programs, enterprises
are empowered to remediate what matters most from a risk perspective. All in
all, remediation cost is a better measure of an organization’s overall
security posture than the cost of an incident.
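MTTD and MTTR are straightforward to compute once incident timestamps are recorded; the short Python sketch below uses illustrative sample data.

# Computing Mean Time to Detect and Mean Time to Respond from
# incident timestamps; standard library only, sample data invented.
from datetime import datetime, timedelta

incidents = [
    # (occurred, detected, resolved)
    (datetime(2024, 7, 1, 9), datetime(2024, 7, 1, 10),
     datetime(2024, 7, 1, 14)),
    (datetime(2024, 7, 8, 2), datetime(2024, 7, 8, 5),
     datetime(2024, 7, 8, 11)),
]

def mean(deltas: list[timedelta]) -> timedelta:
    return sum(deltas, timedelta()) / len(deltas)

mttd = mean([d - o for o, d, _ in incidents])  # Mean Time to Detect
mttr = mean([r - d for _, d, r in incidents])  # Mean Time to Respond
print(f"MTTD: {mttd}, MTTR: {mttr}")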
Agentic AI drives enterprises away from public clouds
Decoupled and distributed systems running AI agents require hundreds of
lower-powered processors that need to run independently. Cloud computing is
typically not a good fit for this, although it can still serve as one node
within these distributed AI agent deployments, which run on heterogeneous
and complex infrastructure outside public cloud solutions. The ongoing maturation of
agentic AI will further incentivize the move away from the public cloud.
Enterprises will increasingly invest in dedicated hardware tailored to
specific AI tasks, from intelligent Internet of Things devices to
sophisticated on-premises servers. This transition will necessitate robust
integration frameworks to ensure seamless interaction between diverse
systems, optimizing AI operations across the board. ... Integrating agentic
AI marks a significant pivot in enterprise strategy, driving companies away
from public cloud solutions. By adopting non-public cloud technologies and
investing in adaptable, secure, and cost-efficient infrastructure,
enterprises can fully leverage the potential of agentic AI.
Learn About Data Privacy and How to Navigate the Information Security Regulatory Landscape
Regulators have made it clear that they are actively monitoring
compliance with new state privacy laws. Even if the scope of exposure is
relatively low due to partial exemptions, documenting compliance can be key.
While companies are struggling to keep up with the expanding patchwork,
regulators are also struggling to find the manpower to investigate the vast
range of companies coming under their jurisdiction. ... With the continual
rise in cyber threats and a constantly evolving regulatory landscape for
data privacy and information security, staying on top of and complying with
such obligations and ensuring robust measures to protect sensitive
information remain critical priorities. ... Numerous international data
protection laws also impact the timeshare industry, but these are the
primary laws affecting American resorts. Additionally, the timeshare
industry is subject to other sector-related regulations, such as the Payment
Card Industry Data Security Standard (PCI DSS), which sets requirements for
securing payment card information for any business that processes credit
card transactions.
Quote for the day:
“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” --Eloise Ristad