The tech tightrope: safeguarding privacy in an AI-powered world
The only way to truly secure our privacy is to proactively enforce the most
secure and novel technological measures at our disposal: measures that place a
strong emphasis on privacy and data encryption while still giving breakthrough
technologies, such as generative AI models and cloud computing tools, full
access to the large pools of data they need to reach their full potential.
Protecting data at rest (i.e., in storage) or in transit (i.e., moving through
or across networks) is now routine. The data is encrypted, which is generally
enough to keep it safe from unwanted access. The overwhelming challenge is how
to also secure data while it is in use. ... One
major issue with Confidential Computing is that it cannot scale sufficiently to
cover the magnitude of use cases necessary to handle every possible AI model and
cloud instance. Because a TEE must be created and defined for each specific use
case, the time, effort, and cost involved in protecting data are prohibitive. The
bigger issue with Confidential Computing, though, is that it is not foolproof.
The data in the TEE must still be unencrypted for it to be processed, opening
the potential for attack vectors to exploit vulnerabilities in the
environment.
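The gap between protecting stored data and protecting data in use can be sketched in a few lines. The toy XOR cipher below stands in for real encryption (it is illustrative only, not secure cryptography), and the record contents are invented for the example:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream cipher -- for illustration only, NOT real cryptography.
    return bytes(b ^ k for b, k in zip(data, key))

record = b"patient_id=42;status=ok"        # hypothetical sensitive record
key = secrets.token_bytes(len(record))

# Data "at rest": the stored ciphertext reveals nothing without the key.
at_rest = xor_cipher(record, key)
assert at_rest != record

# Data "in use": to process it (e.g., parse a field), it must first be
# decrypted into plaintext in memory. This is the gap a TEE narrows but
# cannot fully close, since the plaintext still exists inside the enclave.
in_use = xor_cipher(at_rest, key)
first_field = in_use.split(b";")[0]        # plaintext processing step
```

Encryption at rest and in transit is well understood; the point of the sketch is that any conventional processing step requires the plaintext to exist somewhere, which is exactly what confidential computing tries to confine.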
Ethical Considerations in AI Development
As part of its digital strategy, the EU wants to regulate artificial
intelligence (AI) to guarantee better conditions for the development and use of
this innovative technology. Parliament’s priority is to ensure that AI systems
used in the EU are secure, transparent, traceable, non-discriminatory, and
environmentally friendly. AI systems must be overseen by people, rather than
automation, to avoid harmful outcomes. The European Parliament also wants to
establish a uniform and technologically neutral definition of AI that can be
applied to future AI systems. “It is a pioneering law in the world,” highlighted
von der Leyen, who welcomed the fact that AI can now be developed within a legal
framework that can be “trusted.” The institutions of the European Union have agreed on the
artificial intelligence law that allows or prohibits the use of technology
depending on the risk it poses to people and that seeks to boost the European
industry against giants such as China and the United States. The pact was
reached after intense negotiations in which one of the most sensitive points
was the use that law enforcement agencies will be able to make of biometric
identification cameras to safeguard national security, prevent crimes such as
terrorism, and protect infrastructure.
FBI and CISA warn government systems against increased DDoS attacks
The advisory has grouped typical DoS and DDoS attacks based on three technique
types: volume-based, protocol-based, and application layer-based. While
volume-based attacks aim to cause request fatigue for the targeted systems,
rendering them unable to handle legitimate requests, protocol-based attacks
identify and target the weaker protocol implementations of a system, causing it
to malfunction. A novel loop DoS attack reported this week targeting network
systems, using weak user datagram protocol (UDP)-based communications to
transmit data packets, is an example of a protocol-based DoS attack. This new
technique is a rare form of DoS attack in that it can potentially generate a
huge, self-perpetuating volume of malicious traffic. Application layer-based attacks
refer to attacks that exploit vulnerabilities within specific applications or
services running on the target system. Upon exploiting the weaknesses in the
application, the attackers find ways to over-consume the processing power of
the target system, causing it to malfunction. Interestingly, the loop DoS
attack can also be placed within the application layer DoS category, as it
primarily attacks the communication flaw in the application layer resulting from
its dependency on the UDP transport protocol.
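As a rough illustration of one common application-layer mitigation against request floods, here is a minimal token-bucket rate limiter in Python. The rate and burst values are arbitrary assumptions, and a real DDoS defense would also need upstream, network-level filtering:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter -- a sketch of one building block
    for absorbing request floods, not a complete DDoS defense."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = burst       # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, burst=5)
results = [bucket.allow() for _ in range(8)]
# The first 5 burst requests pass; the 3 immediately following are dropped.
```

A per-client bucket keyed by source address is the usual deployment shape; a single global bucket would let an attacker starve legitimate users.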
The Future of AI: Hybrid Edge Deployments Are Indispensable
Deploying AI models locally eliminates dependence on external network
connections or remote servers, minimizing the risk of downtime caused by
maintenance, outages or connectivity issues. This level of resilience is
particularly important in sectors like healthcare and other sensitive
industries where uninterrupted service is critical. Edge deployments also
ensure low latency: the speed of light is a fundamental limiting factor, so
accessing distant cloud infrastructure can introduce significant delay, whereas
increasingly powerful edge hardware enables data to be processed physically
close to where it is generated. Another benefit is the ability to harness
specialized hardware tailored to an organization's needs, optimizing performance and
efficiency while bypassing network latency and bandwidth limitations, as well as
configuration constraints imposed by cloud providers. Lastly, edge deployments
allow for the centralization of large shared assets within a secure environment,
which in turn simplifies storage management and access control, enhancing data
security and governance.
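The speed-of-light argument can be made concrete with a back-of-the-envelope calculation. The distances below are hypothetical, not measurements of any particular provider:

```python
# Latency floor imposed by the speed of light in optical fiber,
# which is roughly 200,000 km/s (about two-thirds of c in vacuum).
FIBER_SPEED_KM_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    # Best-case round-trip time: there and back, ignoring routing,
    # queuing, and processing delays (which only make things worse).
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

cloud_region_km = 2_000   # hypothetical distant cloud region
edge_node_km = 10         # hypothetical on-premises edge node

print(f"cloud floor: {min_rtt_ms(cloud_region_km):.1f} ms")  # cloud floor: 20.0 ms
print(f"edge floor:  {min_rtt_ms(edge_node_km):.2f} ms")     # edge floor:  0.10 ms
```

Even before adding real-world network overhead, the physics alone puts the distant region two orders of magnitude behind the edge node, which is the core of the low-latency argument.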
OpenTelemetry promises run-time "profiling" as it guns for graduation
This means engineers will be able “to correlate resource exhaustion or poor user
experience across their services with not just the specific service or pod being
impacted, but the function or line of code most responsible for it.” In other
words, engineers won't just know when something falls down, but why: a
capability that commercial offerings can provide but the project has lacked. OpenTelemetry governance
committee member Daniel Gomez Blanco, principal software engineer at
Skyscanner, added that the advances in profiling raise new challenges, such as
how to represent user sessions, how they tie into resource attributes, and how
to propagate context from the client side to the back end and back again. As a
result, the project has formed a new special interest group to tackle these
challenges. Honeycomb.io director of open source Austin Parker said: “We're
right along the glide path in order to continue to grow as a mature project.” As
for the graduation process, he said, the security audits will continue over the
summer along with work on best practices, audits and remediation. They should
complete in the fall: “We'll publish results along these lines, and fixes, and
then we're gonna have a really cool party in Salt Lake City probably.”
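OpenTelemetry's profiling signal is still evolving, but the kind of function-level attribution described above can be illustrated with Python's standard-library profiler (a stand-in sketch, not OpenTelemetry's own API; the function names are invented):

```python
import cProfile
import io
import pstats

def hot_function():
    # Deliberately expensive: the "line of code most responsible"
    # for this request's latency.
    return sum(i * i for i in range(200_000))

def handler():
    hot_function()
    return "ok"

profiler = cProfile.Profile()
profiler.enable()
handler()
profiler.disable()

# The profile report names the responsible function, not just the
# service or endpoint that was slow.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
assert "hot_function" in report
```

The promise of a standardized profiling signal is essentially this report, but correlated with traces and metrics across services instead of confined to a single process.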
Fake data breaches: Countering the damage
Fake data breaches can hurt an organization’s security reputation, even if it
quickly debunks the fake breach. Whether real or fake, news of a potential
breach can create panic among employees, customers, and other stakeholders. For
publicly traded companies, the consequences can be even more damaging as such
rumors can degrade a company’s stock value. Fake breaches also have direct
financial consequences. Investigating a fake breach consumes time, money, and
security personnel. Time spent on such investigations can mean time away from
mitigating real and critical security threats, especially for SMBs with limited
resources. Some cybercriminals might deliberately create panic and confusion
about a fake breach to distract security teams from a different, real attack
they might be trying to launch. Fake data breaches can help them gauge the
response time and protocols an organization may have in place. These insights
can be valuable for future, more severe attacks. In this sense, a fake data
breach may well be a “dry run” and an indicator of an upcoming cyber-attack.
CISOs: Make Sure Your Team Members Fit Your Company Culture
Cybersecurity is not a solitary endeavor; it's a collective fight against common
adversaries. CISOs can enhance their teams' capabilities by fostering
collaboration both within the organization and with external communities.
Internally, promoting a security-aware culture across all departments can
empower employees to be the first line of defense. Externally, participating in
industry forums, sharing threat intelligence with peers and engaging in
public-private partnerships can provide access to shared resources, insights and
best practices. These collaborations can extend a team's reach and effectiveness
beyond its immediate members. Diversifying recruitment efforts can help uncover
untapped talent pools. Initiatives aimed at increasing the participation of
underrepresented groups in cybersecurity, such as women and veterans, can
broaden the range of candidates. CISOs should also look beyond traditional
recruitment channels and explore alternative sources such as hackathons,
cybersecurity competitions and online communities.
Architecting for High Availability in the Cloud with Cellular Architecture
Cellular architecture is a design pattern that helps achieve high availability
in multi-tenant applications. The goal is to design your application so that
all of its components can be deployed into an isolated "cell" that is fully
self-sufficient. You then create many discrete deployments of these cells,
each a fully operational, autonomous instance of your application, ready to
serve traffic with no dependencies on or interactions with any other
cell. Traffic from your users can be distributed
across these cells, and if an outage occurs in one cell, it will only impact the
users in that cell while the other cells remain fully operational. ... one of
the goals of cellular architecture is to minimize the blast radius of outages,
and one of the most likely times that an outage may occur is immediately after a
deployment. So, in practice, we’ll want to add a few protections to our
deployment process so that if we detect an issue, we can stop deploying the
changes until we’ve resolved it. To that end, adding a "staging" cell that we
can deploy to first and a "bake" period between deployments to subsequent cells
is a good idea.
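A minimal sketch of the routing half of this pattern: deterministically pinning each tenant to one cell, so that an outage in any single cell only affects the tenants routed to it. The cell names and tenant IDs are hypothetical:

```python
import hashlib

CELLS = ["cell-staging", "cell-1", "cell-2", "cell-3"]  # hypothetical names

def cell_for(tenant_id: str, cells: list) -> str:
    """Hash the tenant ID to pick a cell, so the same tenant always
    lands in the same cell and the blast radius of a cell outage is
    limited to the tenants assigned to it."""
    digest = hashlib.sha256(tenant_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(cells)
    return cells[index]

# Production traffic only ever goes to production cells; the staging
# cell receives each deployment first, followed by a "bake" period
# before the rollout proceeds to the remaining cells.
prod_cells = [c for c in CELLS if c != "cell-staging"]
assignment = {t: cell_for(t, prod_cells) for t in ("acme", "globex", "initech")}
```

Real implementations usually add a lookup table on top of the hash so tenants can be migrated between cells, but the stable-assignment property is the essential idea.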
Swift promotes the concept of a universal shared ledger. But based on messaging
While many of Swift’s points are perfectly valid, in our view, this demonstrates
the classic conundrum of how incumbents respond to innovation. Swift could make
sense as the operator of some of these shared ledgers. Likewise, incumbent
central securities depositories (CSDs) might be the logical operators for securities
ledgers. ... “By leveraging existing components of the financial system that
already work well together – including secure financial messaging such as that
provided by Swift – the industry can avoid undue levels of market concentration
risk, and draw upon tried-and-tested practices to deliver the rich, structured
data that it has been working towards for decades.” It continues, “Rather than
having each institution record its own individual ‘state’, that function could
be abstracted and performed at an industry level, similar to how messaging
evolved. Such a state machine could be built on more decentralised blockchain
technology, or equally a more centralised platform like Swift’s Transaction
Manager could be enhanced for this use.”
The AI Advantage: Mitigating the Security Alert Deluge in a Talent-Scarce Landscape
Security teams are still struggling with an overflow of alerts. The report found
that an average of 9,854 false positives arise weekly, wasting valuable time and
resources as analysts investigate these non-issues. Moreover, undetected threats
present an even more significant concern. The average organization fails to
identify a staggering 12,009 threats each week, leaving vulnerabilities exposed.
Imagine this: you’re a cybersecurity analyst tasked with safeguarding your
organization’s attack surface. But instead of strategically deploying defenses,
you’re buried under an avalanche of security alerts. Thousands of alerts bombard
your console daily, a relentless barrage threatening to consume your entire
workday. This overwhelming volume is the reality for many security analysts.
While security tools play a crucial role in detection, they often generate many
false positives – harmless activities mistaken for threats. These false alarms
are like smoke detectors going off whenever you toast a bagel, forcing you to
waste time investigating non-issues. The consequences are dire, as exhausted
analysts are more likely to miss genuine threats amidst the noise.
AWS CISO: Pay Attention to How AI Uses Your Data
AI users always need to think about whether they're getting quality responses.
The purpose of security is to enable people to trust their computer systems. If you're
putting together this complex system that uses a generative AI model to deliver
something to the customer, you need the customer to trust that the AI is giving
them the right information to act on and that it's protecting their information.
... With strong foundations already in place, AWS was well prepared to step up
to the challenge as we've been working with AI for years. We have a large number
of internal AI solutions and a number of services we offer directly to our
customers, and security has been a major consideration in how we develop these
solutions. It's what our customers ask about, and it's what they expect. As one
of the largest-scale cloud providers, we have broad visibility into evolving
security needs across the globe. The threat intelligence we capture is
aggregated and used to develop actionable insights that are used within customer
tools and services such as GuardDuty. In addition, our threat intelligence is
used to generate automated security actions on behalf of customers to keep their
data secure.
Quote for the day:
"Great leaders do not desire to lead but to serve." -- Myles Munroe