Prepare now for when quantum computers break biometric encryption: Trust Stamp
While experts expect quantum computers will not be able to scale to defeat such
systems for at least another ten years, the white paper claims, entities should
address “harvest now, decrypt later” (HNDL) attacks proactively. Through an HNDL
approach, an attacker could capture encrypted data pending the availability of
quantum computing-enabled decryption. It is worth noting that this cyber threat
would be heavily resource-intensive to perform. Such an attack would most likely
only be feasible by a nation-state and would target information that would
remain extremely valuable for decades to come. Still, HNDL is an especially
concerning threat for biometric PII, due to its relative permanence.
Certain data encryption methods are particularly vulnerable. Asymmetric, or
public-key, cryptography uses a public and private key to encrypt and decrypt
information. One of the keys can be stored in the public domain, which enables
connections between “strangers” to be established quickly. Because the keys are
mathematically related, it is theoretically possible to calculate a private key
from a public key.
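The risk described above can be made concrete with a toy example. The sketch below uses deliberately tiny textbook RSA numbers (real keys use 2048-bit moduli), and brute-force factoring stands in for the quantum step: once the modulus is factored, which Shor's algorithm would make feasible, the private exponent falls out of the public key.

```python
# Toy RSA example (illustrative only; real keys use 2048-bit moduli).
# The private exponent d is recoverable from the public key (n, e)
# once n is factored -- the step a quantum computer would make feasible.

def derive_private_key(n: int, e: int) -> int:
    # Brute-force factoring stands in for quantum factoring here.
    p = next(i for i in range(2, n) if n % i == 0)
    q = n // p
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)   # modular inverse of e mod phi(n)

n, e = 3233, 17              # public key: n = 61 * 53
d = derive_private_key(n, e)
print(d)                     # 2753 -- a valid private exponent

message = 65
cipher = pow(message, e, n)            # encrypt with the public key
print(pow(cipher, d, n) == message)    # decrypt with derived key -> True
```

An HNDL attacker only needs `cipher` today; the final line can wait until the factoring step becomes practical.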
Managing the hidden risks of shadow APIs
In today's dynamic API landscape, maintaining comprehensive visibility into the
security posture of API endpoints is paramount. All critical app and API
security controls necessary to protect an app's entire ecosystem can be deployed
and managed through the unified API security console of the F5 Distributed Cloud
Platform. This allows DevOps and SecOps teams to observe and quickly identify
suspected API abuse as anomalies are detected, and to create policies that stop
misuse. This requires ML models that build baselines of normal API usage
patterns. Continuous ML-based traffic monitoring allows API security to
predict and block suspicious activity over time. Deviations from these
baselines and other anomalies trigger alerts or automated responses, surfacing
outliers such as rogue and shadow APIs. Dashboards play a crucial role in providing the
visibility required to monitor and assess the security of APIs. The F5
Distributed Cloud WAAP platform extends beyond basic API inventory management by
presenting essential security information based on actual and attack traffic.
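The baseline-and-deviation idea can be sketched in a few lines. This is a minimal illustration of the concept, not F5's implementation: learn per-endpoint request-rate statistics from normal traffic, then flag large deviations, and treat endpoints absent from the baseline as potential shadow APIs.

```python
# Minimal sketch of baseline-based API anomaly detection (not any vendor's
# actual model): learn per-endpoint request-rate statistics, then flag
# deviations and never-before-seen ("shadow") endpoints.
from statistics import mean, stdev

def build_baseline(history: dict) -> dict:
    # history: endpoint -> requests-per-minute samples from normal traffic
    return {ep: (mean(xs), stdev(xs)) for ep, xs in history.items()}

def check(baseline: dict, endpoint: str, rate: int, z_max: float = 3.0) -> str:
    if endpoint not in baseline:
        return "shadow-api"            # endpoint never seen during baselining
    mu, sigma = baseline[endpoint]
    z = abs(rate - mu) / (sigma or 1.0)
    return "anomaly" if z > z_max else "ok"

baseline = build_baseline({"/login":  [10, 12, 11, 9, 10],
                           "/orders": [100, 95, 105, 98, 102]})
print(check(baseline, "/login", 11))     # ok
print(check(baseline, "/login", 500))    # anomaly
print(check(baseline, "/v1/debug", 3))   # shadow-api
```

Production systems replace the z-score with richer learned models, but the control flow, baseline, deviation check, default alert for unknown endpoints, is the same.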
Cybersecurity Frontline: Securing India’s digital finance infrastructure in 2024
Fintech companies are progressively allowing AI to handle routine tasks, freeing
human resources for more complex challenges. AI systems are also being used to
simulate cyberattacks, testing systems for vulnerabilities. This shift
highlights the critical role of AI and ML in modern cybersecurity, moving beyond
mere automation to proactive threat detection and system fortification. The
human element, often the weakest link in cybersecurity, is receiving increased
attention. Fintech firms are investing in employee training to build resilience
against cyberattacks, focusing on areas such as phishing, social engineering,
and password security. One of the most notable advancements in this domain is
the use of AI-powered fraud detection systems. For instance, a global fintech
leader has implemented a deep learning model that analyses around 75 billion
annual transactions across 45 million locations to detect and prevent
card-related fraud. Financial institutions keep educating customers about
social engineering fraud, but the challenge arises when customers willingly
provide OTPs and payment or banking credentials, which are then misused to
compromise their accounts.
The evolving challenge of complexity in cybersecurity
One of the biggest challenges when it comes to cybersecurity is the complexity
that has evolved due to the need to use an increasing array of products and
services to secure our businesses. This is largely due to the underlying
complexity of our IT environments and the broad attack surface this creates.
With the growing adoption of cloud and the more dispersed nature of our
workforces, the perimeter approach to security that worked well in the 20th
century is no longer adequate. In the same way the moats and castle walls of the
Middle Ages gave good protection then but would not stand up to a modern attack,
traditional firewalls and VPNs are no longer suitable now and invariably need to
be augmented with lots of other layers of security tools. Modern, more flexible
and (arguably) simpler zero-trust approaches such as secure access service edge,
zero-trust network access and microsegmentation need to be adopted. These
technologies ensure that access to applications and data, no matter where they
reside, is governed by simple, identity-based policies that are easy to manage
while delivering levels of security and visibility that legacy approaches
cannot.
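The identity-based model above can be reduced to a toy policy check (the roles, apps, and rules below are hypothetical). The key contrast with the castle-and-moat model is that access is decided per identity and per application, with no trusted "inside" network and a default-deny fallback.

```python
# Toy zero-trust policy evaluation (illustrative; real ZTNA products
# evaluate far richer context such as device posture and location).
# Access is granted per-identity, per-application -- never by network.
POLICIES = [
    {"role": "finance",  "app": "erp", "allow": True},
    {"role": "engineer", "app": "git", "allow": True},
    {"role": "engineer", "app": "erp", "allow": False},
]

def is_allowed(role: str, app: str) -> bool:
    for rule in POLICIES:
        if rule["role"] == role and rule["app"] == app:
            return rule["allow"]
    return False  # default-deny: anything unmatched is blocked

print(is_allowed("finance", "erp"))   # True
print(is_allowed("engineer", "erp"))  # False
print(is_allowed("guest", "git"))     # False (no rule -> default deny)
```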
CIOs rise to the ESG reporting challenge
To achieve success, CIOs must first understand how ESG reporting fits within the
company’s business strategy, Sterling’s Kaur says. Then they need to engage and
align with the right people in the organization. The CFO and CSO top that list,
but CIOs should branch out further, as “upstream processes is where the vast
majority of sustainability and ESG story really happens,” says Marsha Reppy, GRC
technology leader for EY Global and EY Americas. “You will not be successful
without procurement, R&D, supply chain, manufacturing, sales, human
resources, legal, and tax at the table.” Because ESG data is broadly dispersed
throughout the organization, CIOs will need broad consensus on an ESG reporting
strategy, but the triumvirate of CIO, CFO, and CHRO should be driving ESG
reporting forward, Kaur says. “Business goals matter, financials matter, and
employee engagement matters,” she says. “Creating this partnership has the
benefit of bringing a cohesive view forward with the right goals.” CIOs must
also educate themselves on the nitty gritty of ESG reporting to fully understand
the complexity and breadth of the problem they’re trying to solve, EY’s Reppy
says.
How to Get Platform Engineering Just Right
In the land of digital transformation, seeing is believing, which is where
observability has a role to play. Improving observability is crucial for gaining
insights into the platform’s performance and behavior, which involves
integrating tools like event and project monitoring, cloud cost transparency,
application performance, infrastructure health and user interactions. In a
rapidly growing cloud environment, observability enables teams to keep track of
what is happening in terms of cost, usage, availability, performance and
security across a constantly transforming cloud infrastructure. Once a project
has been deployed, it needs to be managed and maintained across all cloud
providers, something which is critical for keeping costs to a minimum but is
often a huge and messy task. Managing this effectively requires monitoring key
performance indicators (KPIs) and setting up alerts for critical events, and
using logs and analysis tools to gain visibility into application behavior,
track errors, and troubleshoot issues more effectively. Finally, implementing
tracing systems that can track the flow of requests across various microservices
and components helps to identify performance bottlenecks, understand latency
issues and optimize system behavior.
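The tracing step described above can be illustrated with hypothetical span data (the service names and durations are invented, and real systems would use a tracing library rather than hand-built dictionaries): given the spans one request produced across several microservices, find where the latency went.

```python
# Sketch of trace-span analysis (hypothetical data, not a specific tracing
# product): identify the performance bottleneck in one request's path
# through several microservices.
spans = [
    {"service": "gateway",  "duration_ms": 12},
    {"service": "auth",     "duration_ms": 35},
    {"service": "orders",   "duration_ms": 240},
    {"service": "payments", "duration_ms": 88},
]

total = sum(s["duration_ms"] for s in spans)
bottleneck = max(spans, key=lambda s: s["duration_ms"])
print(f"total latency: {total} ms")                       # total latency: 375 ms
print(f"bottleneck: {bottleneck['service']} "
      f"({100 * bottleneck['duration_ms'] // total}% of total)")
```

The same aggregation, summed durations plus a per-service breakdown, is what tracing dashboards surface to flag latency hotspots.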
AI Officer Is the Hot New Job That Pays Over $1 Million
Executives spearheading metaverse efforts at Walt Disney Co., Procter &
Gamble Co. and Creative Artists Agency left. Leon's LinkedIn profile (yes, he
had one) no longer exists, and there's no mention of him on the company's
website, other than his introductory press release. Publicis Groupe declined to
comment on the record. Instead, businesses are scrambling to appoint AI leaders,
with Accenture and GE HealthCare making recent hires. A few metaverse executives
have even reinvented themselves as AI experts, deftly switching from one hot
technology to the next. Compensation packages average well above $1 million,
according to a survey from executive-search and leadership advisory firm
Heidrick & Struggles. Last week, Publicis said it would invest 300 million
euros ($327 million) over the next three years on artificial intelligence
technology and talent. "It's been a long time since I have had a
conversation with a client about the metaverse," said Fawad Bajwa, the global AI
practice leader at the Russell Reynolds Associates executive search and advisory
firm. "The metaverse might still be there, but it's a lonely place."
Heart of the Matter: Demystifying Copying in the Training of LLMs
A characteristic of generative AI models is the massive consumption of data
inputs, which could consist of text, images, audio files, video files, or any
combination of the inputs (a case usually referred to as “multi-modal”). From a
copyright perspective, an important question (of many important questions) to
ask is whether training materials are retained in the large language model (LLM)
produced by various LLM vendors. To help answer that question, we need to
understand how the textual materials are processed. Focusing on text, what
follows is a brief, non-technical description of exactly that aspect of LLM
training. Humans communicate in natural language by placing words in sequences;
the rules about the sequencing and specific form of a word are dictated by the
specific language (e.g., English). An essential part of the architecture for all
software systems that process text (and therefore for all AI systems that do so)
is how to represent that text so that the functions of the system can be
performed most efficiently. Therefore, a key step in the processing of a textual
input in language models is the splitting of the user input into special “words”
that the AI system can understand.
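This splitting step, tokenization, can be sketched with a toy greedy longest-match tokenizer. The tiny vocabulary below is invented for illustration; real LLMs use learned vocabularies of roughly 50,000+ subword tokens, typically built with algorithms such as byte-pair encoding.

```python
# Toy subword tokenizer: greedy longest-match over a tiny, hand-picked
# vocabulary. Real LLM tokenizers use learned vocabularies (~50k+ tokens).
VOCAB = {"un", "believ", "able", "token", "iz", "ation"}

def tokenize(word: str, vocab: set) -> list:
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try longest substring first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])          # fall back to a single character
            i += 1
    return tokens

print(tokenize("unbelievable", VOCAB))   # ['un', 'believ', 'able']
print(tokenize("tokenization", VOCAB))   # ['token', 'iz', 'ation']
```

Each resulting token is then mapped to an integer ID; it is those ID sequences, not the raw text, that the model actually consumes during training.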
2024: The year quantum moves past its hype?
By contrast, today’s quantum computers are capable of just a few hundred
error-free operations. This leap may sound like a return to the irrational
exuberance of previous years. But there are many tangible reasons to believe.
The quantum computing industry is now connecting these short-term testbeds with
long-term moonshots as it starts to aim for medium-term, incremental goals. As
we approach this threshold, we’ll start to more intrinsically understand errors
and fix them. We can start to model simple molecules and systems, developing
more powerful quantum algorithms. Then, we can work on more interesting (and
impactful) applications with each new generation/testbed of quantum computer.
What will those applications be? We don’t know. And that’s OK. ... But first we
need to develop better quantum algorithms and QEC techniques. Then, we will need
fewer qubits to run the same quantum calculations and we can unlock useful
quantum computing, sooner. As progress and pace continue to accelerate, 2024
will be the year when the conversation around quantum applications has real
substance as we follow tangible goals, commit to realistic ambitions and unlock
real results.
Adaptive AI: The Promise and Perils for Healthcare CTOs
Adaptive AI is a subset of artificial intelligence that can learn and adjust its
behavior based on new data and changing circumstances. Unlike traditional AI
systems, which are static and rule-based, Adaptive AI algorithms can continually
improve and adapt to evolving situations. This technology draws inspiration from
the human brain's capacity for learning and adaptation. ... Adaptive AI plays a
pivotal role in identifying and mitigating security threats. CTOs can leverage
AI to monitor network traffic continuously, identify anomalies including
software flaws and misconfigurations, and respond to threats in real time,
bolstering their organization's security. It can prioritize these
vulnerabilities based on the potential impact and likelihood of exploitation,
allowing CTOs to allocate resources for patching and remediation efforts
effectively. ... CTOs can drive innovation in customer engagement and
personalization with Adaptive AI algorithms. In the case of virtual healthcare,
Adaptive AI can be used to power virtual care platforms that allow patients to
connect with healthcare providers from anywhere. This can improve access to
care, especially for rural or underserved populations.
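The vulnerability-prioritization idea mentioned above, ranking findings by potential impact and likelihood of exploitation, can be sketched simply (the CVE IDs and scores below are invented for illustration):

```python
# Sketch of risk-based vulnerability prioritization (hypothetical findings):
# rank by impact x likelihood so remediation effort goes to the highest-risk
# items first, as in the adaptive-AI prioritization described above.
findings = [
    {"id": "CVE-A", "impact": 9.0, "likelihood": 0.8},
    {"id": "CVE-B", "impact": 5.0, "likelihood": 0.9},
    {"id": "CVE-C", "impact": 9.5, "likelihood": 0.1},
]

for f in findings:
    f["risk"] = f["impact"] * f["likelihood"]

queue = sorted(findings, key=lambda f: f["risk"], reverse=True)
print([f["id"] for f in queue])  # ['CVE-A', 'CVE-B', 'CVE-C']
```

Note that the high-impact but unlikely CVE-C ranks last: an adaptive system would also update the likelihood estimates as new threat data arrives, reordering the queue over time.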
Quote for the day:
“Things work out best for those who make
the best of how things work out.” -- John Wooden