Quantinuum Scientists Successfully Teleport Logical Qubit With Fault Tolerance And Fidelity
This research advances quantum computing by making teleportation a reliable tool for quantum systems. Teleportation is essential in quantum algorithms and network designs, particularly in systems where moving qubits physically is difficult or impossible. By implementing teleportation in a fault-tolerant manner, Quantinuum’s research brings the field closer to practical, large-scale quantum computing systems. The fidelity of the teleportation also suggests that future quantum networks could reliably transmit quantum states over long distances, enabling new forms of secure communication and distributed quantum computing. The use of quantum error correction (QEC) in these experiments is especially promising, as error correction is one of the key challenges in making quantum computing scalable. Without fault tolerance, quantum states are prone to errors caused by environmental noise, making complex computations unreliable. The fact that Quantinuum achieved high fidelity using real-time QEC demonstrates the increasing maturity of its hardware and software systems.
Adversarial attacks on AI models are rising: what should you do now?
Adversarial attacks on ML models look to exploit gaps by intentionally
redirecting the model with crafted inputs, corrupted data, jailbreak prompts,
and malicious commands hidden in images loaded back into a model for analysis.
Attackers fine-tune adversarial attacks to make models deliver false
predictions and classifications, producing the wrong output. ...
Disrupting entire networks with adversarial ML attacks is the stealth
strategy nation-states are betting on to undermine their adversaries’
infrastructure, with cascading effects across supply chains. The
2024 Annual Threat Assessment of the U.S. Intelligence Community provides a
sobering look at how important it is to protect networks from adversarial ML
model attacks and why businesses need to consider better securing their
private networks against adversarial ML attacks. ... Machine learning models
can be manipulated without adversarial training. Adversarial training uses
adversarial examples and significantly strengthens a model’s defenses. Researchers
say adversarial training improves robustness but requires longer training
times and may trade accuracy for resilience.
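The excerpt above leans on two terms it never unpacks: adversarial examples (inputs perturbed just enough to flip a model's prediction) and adversarial training (training on those perturbed inputs). Below is a minimal sketch of both, assuming a PyTorch image classifier; the FGSM attack, toy model, epsilon, and data are illustrative choices, not anything from the article.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method (FGSM):
    nudge each input value by +/- epsilon in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)   # keep pixels in a valid range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One adversarial-training step: learn from adversarial examples so the
    model resists the same perturbations at inference time."""
    model.eval()                         # freeze batch-norm stats while crafting
    x_adv = fgsm_example(model, x, y, epsilon)
    model.train()
    optimizer.zero_grad()                # discard gradients accumulated by the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy demo: a tiny classifier on random "images" stands in for a real pipeline.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(8, 1, 28, 28)         # batch of fake images in [0, 1]
    y = torch.randint(0, 10, (8,))
    print("loss:", adversarial_training_step(model, optimizer, x, y))
```

The extra forward and backward pass per batch is also where the longer training times mentioned above come from: every step first crafts the adversarial batch before learning from it.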
4 ways to become a more effective business leader
Delivering quantitative results isn't the only factor that defines effective
leaders -- great managers also possess the right qualitative skills, including
being able to communicate and collaborate with their peers. "Once you reach
that higher level in the business, particularly if you are part of the
executive committee, you need to know how to deal with corporate politics,"
said Vogel. Managers must recognize that underlying corporate politics are
often driven by social motivations. Great leaders see the signs. "If you're
unable to read the room and understand and navigate that context, it's going
to be tough," said Vogel. ... The rapid pace of change in modern organizations
represents a huge challenge for all business leaders. Vogel instructed
would-be executives to keep learning. "Especially at the moment, and the world
we work in, you need to upskill yourself," she said. "We have had so much
change happening in the business." Vogel said technology is a key factor in the
rapid pace of change. The past two years have seen huge demand for Gen AI and
machine learning. In the future, technological innovations around blockchain,
quantum computing, and robotics will lead to more pressure for digital
transformation.
Cloud architects: Try thinking like a CFO
Cloud architects must cut through the hype and focus on real-world
applications and benefits. More than mere technological enhancement is
required; architects must make a clear financial case. This is particularly
apt in environments where executive decision-makers demand justification for
every technology dollar spent. Aligning cloud architecture strategies with
business outcomes requires architects to step beyond traditional roles and
strategically engage with critical financial metrics. For example, reducing
operational expenses through efficient cloud resource management will directly
impact a company’s bottom line. A successful cloud architect will provide CFOs
with predictive analytics and cost-saving projections, demonstrating clear
business value and market advantage. Moreover, the increasing pressure on
businesses to operate sustainably allows architects to leverage the cloud’s
potential for greener operations. These are often strategic wins that CFOs can
directly appreciate in terms of corporate financial and social governance
metrics. However, when I bring up the topic of sustainability, I receive a lot
of nods, but few people seem to care.
Wherever There's Ransomware, There's Service Account Compromise. Are You Protected?
Most service accounts are created to access other machines. That inevitably
implies they have the access privileges required to log in and execute code on
those machines. This is exactly what threat actors are after, since
compromising these accounts gives them the ability to access targets and
execute their malicious payloads. ... Some service accounts, especially those
associated with installed on-prem software, are known to the IT and IAM staff.
However, many are created ad hoc by IT and identity personnel with no
documentation. This makes maintaining a monitored inventory of service
accounts close to impossible, which plays into attackers' hands: compromising
and abusing an unmonitored account has a far greater chance of going
undetected by the victim. ... The common security measures used to prevent
account compromise are MFA and PAM. MFA can't be applied to service accounts
because they are not human and don't own a phone, hardware token, or any other
additional factor that could verify their identity beyond a username and
password. PAM solutions also struggle to protect service accounts.
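As an illustration of how hard that inventory problem is to even start on, here is a minimal discovery sketch, assuming an Active Directory environment and the ldap3 Python library; the server, credentials, search base, and the "no description means undocumented" heuristic are all assumptions, and SPN-bearing accounts are only a rough proxy for service accounts.

```python
from ldap3 import Server, Connection, ALL, SUBTREE

# Connection details are placeholders for a real Active Directory domain.
server = Server("ldaps://dc.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\audit-reader",
                  password="change-me", auto_bind=True)

# Accounts with a servicePrincipalName are a common (not exhaustive)
# proxy for service accounts in AD.
conn.search(
    search_base="DC=example,DC=com",
    search_filter="(&(objectClass=user)(servicePrincipalName=*))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "servicePrincipalName",
                "description", "lastLogonTimestamp"],
)

inventory = []
for entry in conn.entries:
    attrs = entry.entry_attributes_as_dict   # {attribute_name: [values]}
    inventory.append({
        "account": attrs.get("sAMAccountName", [""])[0],
        "spns": attrs.get("servicePrincipalName", []),
        # No description on record is used here as a rough signal of an
        # undocumented, ad-hoc account that nobody is monitoring.
        "documented": bool(attrs.get("description", [])),
        "last_logon": attrs.get("lastLogonTimestamp", [None])[0],
    })

undocumented = [a for a in inventory if not a["documented"]]
print(f"{len(inventory)} service accounts found, "
      f"{len(undocumented)} with no description on record")
```

Even a rough list like this gives defenders something to baseline behaviour against, which is exactly what an attacker abusing an unmonitored account is counting on not existing.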
Datacenters bleed watts and cash – all because they're afraid to flip a switch
The good news is CPU vendors have developed all manner of techniques for
managing power and performance over the years. Many of these are rooted in
mobile applications, where energy consumption is a far more important metric
than in the datacenter. According to Uptime, these controls can have a major
impact on system power consumption and don't necessarily have to kneecap the
chip by limiting its peak performance. The most power-efficient of these
regimes, according to Uptime, are software-based controls, which have the
potential to cut system power consumption by anywhere from 25 to 50 percent –
depending on how sophisticated the operating system governor and power plan
are. However, these software-level controls also have the potential to impart
the biggest latency hit. This potentially makes these controls impractical for
bursty or latency-sensitive jobs. By comparison, Uptime found that
hardware-only implementations designed to set performance targets tend to be
far faster when switching between states – which means a lower latency hit.
The trade-off is the power savings aren't nearly as impressive, topping out
around ten percent.
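The software-based controls Uptime is describing are, on Linux, largely the cpufreq governors and the power plans built on top of them. A minimal sketch for surveying and switching governors follows, assuming the standard Linux cpufreq sysfs interface; which governors are available depends on the driver (for example intel_pstate vs. acpi-cpufreq), writing requires root, and the 25 to 50 percent figure is Uptime's, not something this script measures.

```python
"""Survey and (optionally) change the Linux cpufreq governor on every core."""
import glob
import sys

CPUFREQ_GLOB = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq"

def survey():
    """Print the current and available governors for each CPU."""
    for path in sorted(glob.glob(CPUFREQ_GLOB)):
        with open(f"{path}/scaling_governor") as f:
            current = f.read().strip()
        with open(f"{path}/scaling_available_governors") as f:
            available = f.read().split()
        print(f"{path}: {current} (available: {', '.join(available)})")

def set_governor(governor):
    """Switch every core to the given governor, e.g. 'powersave' or 'schedutil'."""
    for path in sorted(glob.glob(CPUFREQ_GLOB)):
        with open(f"{path}/scaling_governor", "w") as f:
            f.write(governor)

if __name__ == "__main__":
    survey()
    if len(sys.argv) > 1:        # e.g. sudo python3 governor.py powersave
        set_governor(sys.argv[1])
        survey()
```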
An AI-Driven Approach to Risk-Scoring Systems in Cybersecurity
The integration of AI into risk-scoring systems also enhances the overall
security strategy of an organization. These systems are not static, but rather
learn and adapt over time, becoming increasingly effective as they encounter
new threat patterns and scenarios. This adaptive capability is crucial in the
face of rapidly evolving cyber threats, allowing organizations to stay one
step ahead of potential attackers. An example of this in action is detecting
anomalies during user sign-on by analyzing physical attributes and comparing
them to typical behavior patterns. ... It's important, however, to realize
that AI is not a cure-all for every cybersecurity challenge. The most
impactful strategies combine the analytical power of AI with human expertise.
While AI excels at processing vast amounts of data and identifying patterns,
human analysts provide critical contextual understanding and decision-making
capabilities. It's crucial for AI systems to continuously learn from the input
of subject matter experts (SMEs) through a feedback loop to refine their
accuracy and minimize alert fatigue; this collaboration between human
and artificial intelligence creates a robust defense against a wide range of
cyber threats.
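To make the sign-on example concrete, here is a minimal anomaly-scoring sketch, assuming scikit-learn's IsolationForest; the features (hour of day, distance from the usual location, new-device flag), the toy history, and the 0-100 scaling are illustrative assumptions rather than the author's system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical sign-ons for one user: [hour_of_day, km_from_usual_location, new_device]
history = np.array([
    [9, 2, 0], [10, 1, 0], [8, 3, 0], [9, 0, 0], [14, 2, 0],
    [9, 1, 0], [10, 4, 0], [11, 2, 0], [9, 2, 1], [8, 1, 0],
])

# Fit a model of "typical" behaviour; appending new sign-ons and periodically
# refitting is what lets the score adapt over time.
model = IsolationForest(contamination=0.1, random_state=0).fit(history)

def risk_score(event):
    """Map IsolationForest's anomaly score to a 0-100 risk score (higher = riskier)."""
    raw = model.score_samples(np.array([event]))[0]   # negative; lower = more anomalous
    return round(min(max(-raw, 0.0), 1.0) * 100)

print(risk_score([9, 2, 0]))     # a routine morning sign-on
print(risk_score([3, 4200, 1]))  # off-hours, far away, unknown device scores higher
```

Refitting on a history that analysts have triaged is one simple way to close the expert feedback loop described above.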
API Security in Financial Services: Navigating Regulatory and Operational Challenges
API breaches can have devastating consequences, including data loss, brand
damage, financial losses, and customer attrition. For example, a breach that
exposes customer account information can lead to financial theft and identity
fraud. The reputational damage from such incidents can result in loss of
customer trust and increased scrutiny from regulators. Institutions must
recognize the potential fallout from breaches and take proactive steps to
mitigate these risks, understanding that the cost of breaches often far
exceeds the investment in robust security measures. ... Common security
controls such as encryption, data loss prevention, and web application
firewalls are widely used, yet their effectiveness remains limited. The report
indicates that 45% of financial institutions can only prevent half or fewer
API attacks, underscoring the need for improved security strategies and tools.
Encryption, while essential, only protects data at rest and in transit,
leaving APIs vulnerable to other types of attacks like injection and
denial-of-service. Further, data loss prevention systems often struggle to
keep pace with the volume and complexity of API traffic.
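To make that gap concrete, the sketch below shows one control that operates where encryption cannot: a per-client token-bucket rate limiter that blunts volumetric denial-of-service at the API layer. It assumes a single in-process Python service, and the class, limits, and client IDs are illustrative; injection, by contrast, is addressed with input validation and parameterized queries rather than rate limiting.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: each request spends a token; tokens refill at
    `rate` per second up to `capacity`. Requests with no token left are
    rejected, capping how fast any one client can hit the API."""

    def __init__(self, rate=5.0, capacity=10):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)
        self.last_seen = {}

    def allow(self, client_id):
        now = time.monotonic()
        elapsed = now - self.last_seen.get(client_id, now)
        self.last_seen[client_id] = now
        # Refill proportionally to the time since this client's last request.
        self.tokens[client_id] = min(self.capacity,
                                     self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10)
allowed = sum(bucket.allow("client-42") for _ in range(50))
print(f"{allowed} of 50 burst requests allowed")   # capacity caps the burst at ~10
```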
Guide To Navigating the Legal Perils After a Cyber Incident
Cyber incidents pose significant technical challenges, but the real storm
often hits after the breach gets contained, Nall said. That’s when regulators
step in to scrutinize every decision made in the heat of the crisis. While
scrutiny has traditionally focused on corporate leadership or legal
departments, today, infosec workers risk facing charges of fraud, negligence,
or worse, simply for doing their jobs. ... Instead of clear, universal
cybersecurity standards, regulatory bodies like the SEC only define acceptable
practices after a breach occurs, Nall said. This reactive approach puts CISOs
and other infosec workers at a distinct disadvantage. "Federal prosecutors and
SEC attorneys read the paper like anyone else, and when they see bad things
happening, like major breaches, especially where there is a delay in
disclosure, they have to go after those companies," Nall explained during her
presentation. ... Fortunately, CISOs and other infosec workers can take
several concrete steps to protect their careers and reputations. By
implementing airtight communication practices and negotiating solid legal
protections, they can navigate the fallout of a disastrous cyber
incident.
As the AI Bubble Deflates, the Ethics of Hype Are in the Spotlight
One of the major problems we’re seeing right now in the AI industry is the
overpromising of what AI tools can actually do. There’s a huge amount of
excitement around AI’s observational capacities, or the notion that AI can see
things that are otherwise unobservable to the human eye due to these tools’
ability to discern trends from huge amounts of data. However, these
observational capacities are not only overstated, but also often completely
misleading. They lead to AI being attributed almost magical powers, whereas in
reality a large number of AI products grossly underperform compared to what
they’re promised to do. ... So, the true believers caught up in the promises
and excitement are likely to be disappointed. But throughout the hype cycle,
many notable figures including practitioners and researchers have challenged
narratives about the unconstrained transformational potential of AI. Some have
expressed alarm at the mechanisms, techniques, and behavior at play which
allowed such unbridled fervour to override the healthy caution necessary ahead
of the deployment of any emerging technology, especially one which has the
potential for such massive societal, environmental, and social upheaval.
Quote for the day:
“Start each day with a positive
thought and a grateful heart.” -- Roy T. Bennett