Passkeys: they're not perfect but they're getting better
Users are largely unsure about the implications for their passkeys if they
lose or break their device, as it seems their device holds the entire
capability to authenticate. To trust passkeys as a replacement for the
password, users need to be prepared and know what to do in the event of losing
one – or all – of their devices. ... Passkeys are ‘long life’ because users
can’t forget them or create one that is weak, so if they’re done well there
should be no need to reset or update them. As a result, there’s an increased
likelihood that at some point a user will want to move their passkeys to the
Credential Manager of a different vendor or platform. This is currently
challenging to do, but FIDO and vendors are actively working to address this
issue and we wait to see support for this take hold across the market. ... For
passkey-protected accounts, potential attackers are now more likely to focus
on finding weaknesses in account recovery and reset requests – whether by
email, phone or chat – and pivot to phishing for recovery keys. These
processes need to be sufficiently hardened by providers to prevent trivial
abuse by these attackers and to maintain the security benefits of using
passkeys. Users also need to be educated on how to spot and report abuse of
these processes before their accounts are compromised.
Securing Payment Software: How the PCI SSF Modular System Enhances Flexibility and Security
The framework was introduced to replace the aging Payment Application Data
Security Standard (PA-DSS), which focused primarily on payment application
security. As software development technologies and methodologies evolved
rapidly, the need for a more dynamic and adaptable security standard became
apparent, prompting the creation of the PCI SSF. The PCI SSF accordingly
covers a broader range of security requirements tailored to modern software
environments. ... The modular system of the PCI SSF is designed for
flexibility and scalability, enabling organizations to address security needs
specific to their software environments. The modular approach also lets
organizations select and implement only the components relevant to their
software, simplifying the process of achieving and maintaining compliance.
... The PCI SSF's modular system marks a transformative step in payment
software security, balancing adaptability with comprehensive protection
against evolving cyber threats. Its flexible, scalable approach allows
organizations to tailor their security efforts to their own needs, ensuring
robust protection for payment data.
The cloud cost wake-up call I predicted
Cloud computing starts as a flexible and budget-friendly option, especially
with its enticing pay-per-use model. However, unchecked growth can turn this
dream into a financial nightmare due to the complexities the cloud introduces.
According to the Flexera State of the Cloud Report, 87% of organizations have
adopted multicloud strategies, complicating cost management even more by
scattering workloads and expenses across various platforms. The rise of
cloud-native applications and microservices has further complicated cost
management. These systems abstract physical resources, simplifying development
but making costs harder to predict and control. Recent studies have revealed
that 69% of CPU resources in container environments go unused, directly at
odds with optimal cost management practices. Although open-source tools
like Prometheus are excellent for tracking usage and spending, they often fall
short as organizations scale. ... A critical component of effective cloud cost
management is demystifying cloud pricing models. Providers often lay out their
pricing structures in great detail, but translating them into actual costs can
be difficult. A lack of understanding can lead to spiraling costs.
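To make the utilization point above concrete, here is a minimal sketch of how one might measure unused container CPU with Prometheus, assuming the usual cAdvisor and kube-state-metrics series (container_cpu_usage_seconds_total and kube_pod_container_resource_requests) are being scraped; the endpoint URL and one-hour window are placeholders, not details from the article.

```python
# Sketch: estimate unused CPU in a Kubernetes cluster by comparing actual
# container CPU usage (cAdvisor) against declared CPU requests
# (kube-state-metrics), both assumed to be scraped by Prometheus.
import requests

PROM_URL = "http://localhost:9090"  # hypothetical Prometheus endpoint


def prom_query(expr: str) -> float:
    """Run an instant PromQL query and return the first value, or 0.0."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


# CPU actually consumed by containers across the cluster (cores), averaged over 1h.
used_cores = prom_query('sum(rate(container_cpu_usage_seconds_total{container!=""}[1h]))')

# CPU reserved by workloads via resource requests (cores).
requested_cores = prom_query('sum(kube_pod_container_resource_requests{resource="cpu"})')

if requested_cores > 0:
    unused_pct = 100 * (1 - used_cores / requested_cores)
    print(f"Requested {requested_cores:.1f} cores, used {used_cores:.1f} cores "
          f"({unused_pct:.0f}% of requested CPU idle)")
```

Even a simple report like this, run regularly, surfaces the gap between what teams reserve and what they actually consume, which is where the cost conversation usually starts.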
Using cognitive diversity for stronger, smarter cyber defense
Cognitive biases significantly influence decision-making during cybersecurity
incidents by framing how individuals interpret information, assess risks, and
respond to threats. ... Integrating cognitive science into cybersecurity tools
involves understanding how human cognitive processes – such as perception,
memory, decision-making, and problem-solving – affect security tasks.
Designing user-friendly tools requires aligning cognitive models with diverse
user behaviors while managing cognitive load, ensuring usability without
compromising security, and adapting to the fast-changing cybersecurity
landscape. Interfaces must cater to varying skill levels, promote awareness,
and support effective decision-making, all while addressing ethical
considerations like privacy and bias. Interdisciplinary collaboration between
psychology, computer science, and cybersecurity experts is essential but
challenging due to differences in expertise and communication styles. ...
Cognitive diversity can frequently divert resources or distract from
immediate or emerging threats. Focus on the things that are likely to happen,
and implement defensive measures that require few resources while more
complex measures are prioritized.
Next-gen Ethernet standards set to move forward in 2025
Beyond the big-ticket items of higher bandwidth and AI, a key activity in any
year for Ethernet is interoperability testing for all manner of existing and
emerging specifications. 200 Gigabits per second per lane is an important
milestone on the path to an even higher bandwidth Ethernet specification that
will exceed 1 Terabit per second. ... With 800GbE now firmly established,
adoption and expansion into ever larger bandwidth will be a key theme in 2025.
There will be no shortage of vendors offering 800GbE equipment in 2025, but
when it comes to Ethernet standards, the focus will be on 1.6 Terabits per
second Ethernet. “As 800GbE has come to market, the next speed for Ethernet is being
talked about already,” Martin Hull, vice president and general manager for
cloud and AI platforms at Arista Networks, told Network World. “1.6Tb Ethernet
is being discussed in terms of the optics, the form factors and use cases, and
we expect industry leaders to be trialing 1.6T systems towards the end of
2025.” ... “High-speed computing requires high bandwidth and reliable
interconnect solutions,” Rodgers said. “However, high-speed also means high
power and higher heat, placing more demands on the electrical grid and
resources and creating a demand for new options.” That’s where LPOs will fit
in.
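As a quick aside on the lane-rate milestone mentioned above, the arithmetic connecting per-lane speed to the headline rates is straightforward: eight lanes at 100 Gb/s underpin today's 800GbE, and eight lanes at 200 Gb/s give 1.6 Tb/s. A tiny sketch, with the eight-lane assumption made explicit:

```python
# Lane arithmetic behind the headline Ethernet rates (eight lanes assumed,
# as in common 800GbE implementations and the expected 1.6TbE generation).
LANES = 8
for per_lane_gbps in (100, 200):
    total_tbps = LANES * per_lane_gbps / 1000
    print(f"{LANES} lanes x {per_lane_gbps} Gb/s = {total_tbps:.1f} Tb/s")
```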
Stop wasting money on ineffective threat intelligence: 5 mistakes to avoid
“CTI really needs to fall underneath your risk management and if you don’t
have a risk management program you need to identify that (as a priority),”
says Ken Dunham, cyber threat director for the Qualys Threat Research Unit.
“It really should come down to: what are the core things you’re trying to
protect? Where are your crown jewels or your high value assets?” Without risk
management to set those priorities, organizations will not be able to set
appropriate requirements for intelligence collection or gather from the
sources most relevant to their highest-value assets. ... Bad intelligence can
often be worse than none, leading to a great deal of analyst time wasted
validating and contextualizing poor-quality feeds. Even worse, if this work
isn’t done appropriately, poor-quality data can lead to misguided choices at
the operational or strategic
level. Security leaders should be tasking their intelligence team with
regularly reviewing the usefulness of their sources based on a few key
attributes. ... Even if CTI is doing an excellent job collecting the right
kind of quality intelligence that its stakeholders are asking for, all that
work can go for naught if it isn’t appropriately routed to the people that
need it — in the format that makes sense for them.
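The article does not spell out which attributes to review sources against, but the habit is easy to sketch. In the illustrative example below, the attribute names and weights are assumptions chosen for the sketch, not a prescribed model:

```python
# Illustrative sketch: score threat-intel feeds during a periodic review.
# The attributes and weights below are assumptions for the example only.
from dataclasses import dataclass


@dataclass
class FeedReview:
    name: str
    relevance: float       # 0-1: overlap with the org's high-value assets
    timeliness: float      # 0-1: how quickly indicators arrive
    accuracy: float        # 0-1: share of indicators that survive validation
    actionability: float   # 0-1: share that drove a detection or a decision


WEIGHTS = {"relevance": 0.4, "timeliness": 0.2, "accuracy": 0.25, "actionability": 0.15}


def usefulness(feed: FeedReview) -> float:
    """Weighted score; consistently low scorers are candidates for cutting."""
    return (WEIGHTS["relevance"] * feed.relevance
            + WEIGHTS["timeliness"] * feed.timeliness
            + WEIGHTS["accuracy"] * feed.accuracy
            + WEIGHTS["actionability"] * feed.actionability)


feeds = [
    FeedReview("commercial-feed-a", 0.8, 0.7, 0.9, 0.5),
    FeedReview("open-source-feed-b", 0.3, 0.4, 0.5, 0.2),
]
for f in sorted(feeds, key=usefulness, reverse=True):
    print(f"{f.name}: {usefulness(f):.2f}")
```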
Exposure Management: A Strategic Approach to Cyber Security Resource Constraint
XM is a proactive and integrated approach that provides a comprehensive view
of potential attack surfaces and prioritises security actions based on an
organisation’s specific context. It’s a process that combines cloud security
posture, identity management, internal hosts, internet-facing hosts and threat
intelligence into a unified framework, enabling security teams to anticipate
potential attack vectors and fortify their defences effectively. Unlike
traditional security measures, XM takes an “outside-in” approach, assessing
how attackers might exploit vulnerabilities across interconnected systems.
This shift in mindset is crucial for identifying and prioritising the most
significant threats. By focusing on the most critical vulnerabilities and
potential attack paths, XM allows security teams to allocate resources more
efficiently and enhance their overall security posture. ... By providing a
unified view of the entire attack path, XM improves an organisation’s ability
to manage security risks. This unified view allows security teams to
understand how vulnerabilities can be exploited and prioritise those that pose
the greatest risk. Security teams can then allocate resources efficiently and
focus on the threats with the most significant impact on business operations.
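As a rough illustration of the prioritisation logic described above, here is a minimal sketch; the fields and the weighting are assumptions made for the example, not any vendor's actual scoring model:

```python
# Sketch of attack-path-aware prioritisation in the spirit of exposure
# management. Fields and multipliers are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Exposure:
    finding: str
    exploitability: float          # 0-1, e.g. informed by threat intelligence
    asset_criticality: float       # 0-1, business impact of the affected asset
    internet_facing: bool          # part of the external attack surface?
    on_path_to_crown_jewel: bool   # does an attack path reach a critical asset?


def risk(e: Exposure) -> float:
    score = e.exploitability * e.asset_criticality
    if e.internet_facing:
        score *= 1.5   # "outside-in" view: externally reachable issues first
    if e.on_path_to_crown_jewel:
        score *= 2.0   # steps on paths to critical assets outrank isolated flaws
    return score


exposures = [
    Exposure("CVE on isolated internal test host", 0.6, 0.2, False, False),
    Exposure("Exposed admin panel with weak MFA", 0.7, 0.9, True, True),
]
for e in sorted(exposures, key=risk, reverse=True):
    print(f"{e.finding}: {risk(e):.2f}")
```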
How GenAI is Exposing the Limits of Data Centre Infrastructure
Energy-intensive Graphics Processing Units (GPUs) that power AI platforms
require five to 10 times more energy than Central Processing Units (CPUs)
because of their larger number of transistors. This is already impacting data
centres. There are also new, cost-effective design methodologies
incorporating features such as 3D silicon stacking, which allows GPU
manufacturers to pack more components into a smaller footprint. This again
increases power density, meaning data centres need more energy and generate
more heat. Another
trend running in parallel is a steady fall in TCase (or Case Temperature) in
the latest chips. TCase is the maximum safe temperature for the surface of
chips such as GPUs. It is a limit set by the manufacturer to ensure the chip
will run smoothly without overheating or requiring throttling, which hurts
performance. On newer chips, TCase is coming down from 90 to 100 degrees
Celsius to 70 or 80 degrees, or even lower. This is further driving the demand
for new ways to cool GPUs. As a result of these factors, air cooling is no
longer up to the job for AI workloads. The problem is not just the power of
the components but their density in the data centre: unless servers become
three times bigger than they were before, more efficient heat removal is
needed.
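A back-of-the-envelope calculation makes the density argument concrete. The wattages and the air-cooling ceiling below are illustrative assumptions, roughly in line with current AI accelerators and typical air-cooled racks, not figures from the article:

```python
# Rough rack-power estimate for an AI training rack. All numbers are
# illustrative assumptions, not figures from the article.
GPUS_PER_SERVER = 8
WATTS_PER_GPU = 700          # assumed accelerator power draw
SERVER_OVERHEAD_W = 2000     # assumed CPUs, memory, NICs, fans
SERVERS_PER_RACK = 4
AIR_COOLED_LIMIT_KW = 20     # assumed practical ceiling for air cooling

rack_kw = SERVERS_PER_RACK * (GPUS_PER_SERVER * WATTS_PER_GPU + SERVER_OVERHEAD_W) / 1000
print(f"Estimated rack load: {rack_kw:.0f} kW "
      f"(vs roughly {AIR_COOLED_LIMIT_KW} kW commonly managed with air cooling)")
```

Under these assumptions a single rack lands around 30 kW, well past what air alone comfortably removes, which is why alternative cooling keeps coming up in these discussions.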
The Configuration Crisis and Developer Dependency on AI
As our IT infrastructure grows ever more modular, layered and interconnected,
we deal with myriad configurable parts — each one governed by a dense thicket
of settings. All of our computers — whether in our pockets, on our desks or in
the cloud — have a bewildering labyrinth of components with settings to
discover and fiddle with, both individually and in combination. ... A couple
of strategies I’ve mentioned before bear repeating. One is the use of
screenshots, which are now a powerful index in the corpus of synthesized
knowledge. Like all forms of web software, the cloud platforms’ GUI consoles
present a haphazard mix of UX idioms. A maneuver that is conceptually the same
across platforms will often be expressed using very different affordances. AIs
are pattern recognizers that can help us see and work with the common
underlying patterns.
From project to product: Architecting the future of enterprise technology
Modern enterprise architecture requires thinking like an urban planner rather
than a building inspector. This means creating environments that enable
innovation while ensuring system integrity and sustainability. ... Just as
urban planners need to develop a shared vocabulary with city officials,
developers and citizens, enterprise architects must establish a common
language that bridges technical and business domains. Complex ideas that
remain purely verbal often get lost or misunderstood. Documentation and
diagrams transform abstract discussions into something tangible. By
articulating fitness functions — automated tests tied to specific quality
attributes like reliability, security or performance — teams can visualize and
measure system qualities that align with business goals. ... Technology
governance alone will often just inform you of capability gaps, tech debt and
duplication — this could be too late! Enterprise architects must shift their
focus to business enablement, a much more proactive stance: understanding the
business objectives and planning and mapping the path for delivery. ... Just
as cities must evolve while preserving their essential character, modern
enterprise architecture requires built-in mechanisms for sustainable
change.
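For readers unfamiliar with the fitness functions mentioned above, here is a minimal sketch of what one might look like for a performance attribute; the endpoint, sample size and 300 ms budget are illustrative assumptions, and in practice such a check would typically run in CI against a staging environment:

```python
# Sketch of an architectural fitness function: assert that p95 latency of a
# critical endpoint stays within an agreed budget. Endpoint and thresholds
# are illustrative assumptions.
import time
import urllib.request

ENDPOINT = "http://localhost:8080/health"  # hypothetical service under test
LATENCY_BUDGET_MS = 300
SAMPLES = 50


def measure_once() -> float:
    """Time one request to the endpoint, in milliseconds."""
    start = time.perf_counter()
    urllib.request.urlopen(ENDPOINT, timeout=5).read()
    return (time.perf_counter() - start) * 1000


def test_p95_latency_within_budget() -> None:
    latencies = sorted(measure_once() for _ in range(SAMPLES))
    p95 = latencies[int(0.95 * (SAMPLES - 1))]
    assert p95 <= LATENCY_BUDGET_MS, f"p95 latency {p95:.0f} ms exceeds budget"


if __name__ == "__main__":
    test_p95_latency_within_budget()
    print("Fitness function passed: p95 latency within budget")
```

Expressed this way, a quality attribute stops being an abstract discussion point and becomes something the team can run, version and break the build on.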
Quote for the day:
"Your present circumstances don’t
determine where you can go; they merely determine where you start."
-- Nido Qubein