AI Regulation Readiness: A Guide for Businesses
The first thing to note about AI compliance today is that few laws or regulations
specifically governing the way businesses use AI are currently on the books.
Most regulations designed specifically for AI remain in draft form. That said,
there are a host of other regulations — like the General Data Protection
Regulation (GDPR), the California Privacy Rights Act (CPRA), and the Personal
Information Protection and Electronic Documents Act (PIPEDA) — that have
important implications for AI. These compliance laws were written before the
emergence of modern generative AI technology placed AI onto the radar screens
of businesses (and regulators) everywhere, and they mention AI sparingly if at
all. But these laws do impose strict requirements related to data privacy and
security. Since AI and data go hand-in-hand, you can't deploy AI in a
compliant way without ensuring that you manage and secure data as current
regulations require. This is why businesses shouldn't think of AI as an
anything-goes space due to the lack of regulations focused on AI specifically.
Effectively, AI regulations already exist in the form of data privacy
rules.
Cloud vs. On-Prem AI Accelerators: Choosing the Best Fit for Your AI Workloads
Like most types of hardware, AI accelerators can run either on-prem or in the
cloud. An on-prem accelerator is one that you install in servers you manage
yourself. This requires you to purchase the accelerator and a server capable
of hosting it, set them up, and manage them on an ongoing basis. A cloud-based
accelerator is one that a cloud vendor makes available to customers over the
internet using an IaaS model. Typically, to access a cloud-based accelerator,
you'd choose a cloud server instance designed for AI. For example, Amazon
offers EC2 cloud server instances that feature its Trainium AI accelerator
chip. Google Cloud offers Tensor Processing Units (TPUs), another type of AI
accelerator, as one of its cloud server options. ... Some types of AI
accelerators are only available through the cloud. For instance, you can't
purchase the AI chips developed by Amazon and Google for use in your own
servers. You have to use cloud services to access them. ... Like most
cloud-based solutions, cloud AI hardware is very scalable. You can easily add
more AI server instances if you need more processing power. This isn't the
case with on-prem AI hardware, which is costly and complicated to scale up.
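For readers weighing the cloud route, the short sketch below shows one way you might request a Trainium-backed EC2 instance with boto3. The AMI ID is a placeholder and the trn1.2xlarge instance type is an assumption about Amazon's current Trainium instance family; treat it as an illustration rather than a recipe.

# Minimal sketch, assuming boto3 and AWS credentials are configured.
# AMI_ID is a placeholder; pick a Deep Learning AMI valid for your region.
import boto3

AMI_ID = "ami-xxxxxxxxxxxxxxxxx"  # placeholder value

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType="trn1.2xlarge",   # Trainium-backed instance type (assumed)
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "ai-training"}],
    }],
)

print(response["Instances"][0]["InstanceId"])

Because the accelerator arrives as just another instance type, scaling up is a matter of launching more instances rather than buying and racking new hardware, which is the scalability advantage described above.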
Platform Engineering Is The New DevOps
Platform engineering has provided a useful escape hatch at just the right
time. Its popularity has grown strongly, with a well-attended inaugural
platform engineering day at KubeCon Paris in early 2024 confirming attendee
interest. A platform engineering day was part of the KubeCon NA schedule this
past week and will also be included at next year’s KubeCon in London. “I
haven't seen platform engineering pushed top down from a C-suite. I've seen a
lot of guerilla stuff with platform and ops teams just basically going out and
doing a skunkworks thing and sneaking it into production and then making a
value case and growing from there,” said Keith Babo, VP of product and
marketing at Solo.io. ... “If anyone ever asks me what’s my definition of
platform engineering, I tend to think of it as DevOps at scale. It’s how
DevOps scales,” says Kennedy. The focus has shifted away from building cloud
native technology, done by developers, to using cloud native technology, which
is largely the realm of operations. That platform engineering should start to
take over from DevOps in this ecosystem may not be surprising, but it does
highlight important structural shifts.
Artificial Intelligence and Its Ascendancy in Global Power Dynamics
According to the OECD, AI is defined as “a machine-based system that can, for
a given set of human-defined objectives, make predictions, recommendations, or
decisions that influence real or virtual environments.” The vision for
Responsible AI is clear: establish global auditing standards, ensure
transparency, and protect privacy through secure data governance. Yet,
achieving Responsible AI requires more than compliance checklists; it demands
proactive governance. For example, the EU’s AI Act takes a hardline approach
to regulating high-risk applications like real-time biometric surveillance and
automated hiring processes, whereas the U.S., under President Biden’s
Executive Order on Safe, Secure, and Trustworthy AI, emphasizes guidelines
over strict enforcement. ... AI is becoming the lynchpin of cybersecurity and
national security strategies. State-backed actors from China, Iran, and North
Korea are weaponizing AI to conduct sophisticated cyber-attacks on critical
infrastructure. The deployment of Generative Adversarial Networks (GANs) and
WormGPT is automating cyber operations at scale, making traditional defenses
increasingly obsolete. In this context, a cohesive, enforceable framework for
AI governance is no longer optional but essential.
Why voice biometrics is a must-have for modern businesses
Voice biometrics are making waves across multiple industries. Here’s a look at
how different sectors can leverage this technology for a competitive
edge:

Financial services: Banks and financial institutions are actively integrating
voice verification into call centers, allowing customers to authenticate
themselves with their voice and eliminating the need for secret words or PIN
codes. This strengthens security, reduces the time and cost of each customer
call, and enhances the customer experience.

Automotive: With the rise of connected vehicles, voice is already heavily used
through integrated digital assistants that provide hands-free access to in-car
services like navigation, settings, and communications. Adding voice
recognition allows those in-car services to be personalized for the driver and
opens up possibilities for further enhancements such as commerce. Automotive
brands can integrate voice recognition to offer seamless access to new services
like parking, fueling, charging, and curbside pick-up via in-car payments,
boosting security, convenience, and customer satisfaction.

Healthcare: Healthcare providers can use voice authentication to securely
verify patient identities over the phone or via telemedicine. This ensures that
sensitive information remains protected while providing a seamless experience
for patients who may need hands-free options.
When and Where to Rate-Limit: Strategies for Hybrid and Legacy Architectures
While rate-limiting is an essential tool for protecting your system from
traffic overloads, applying it directly at the application layer — whether
for microservices or legacy applications — is often a suboptimal strategy.
... Legacy systems operate differently. They often rely on vertical scaling
and have limited flexibility to handle increased loads. While it might seem
logical to apply rate-limiting directly to protect fragile legacy systems,
this approach usually falls short. The main issue with rate-limiting at
the legacy application layer is that it’s reactive. By the time
rate-limiting kicks in, the system might already be overloaded. Legacy
systems, lacking the scalability and elasticity of microservices, are more
prone to total failure under high load, and rate-limiting at the application
level can’t stop this once the traffic surge has already reached its peak.
... Rate-limiting should be handled further upstream rather than deep in the
application layer, where it either conflicts with scalability (in
microservices) or arrives too late to prevent failures. This leads us to the
API gateway, the strategic point in the architecture where traffic control
is most effective.
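As an illustration of what handling traffic control upstream can mean in practice, here is a minimal token-bucket sketch of the kind of per-client limit an API gateway could enforce before requests ever reach a legacy backend. The class, parameters, and in-memory storage are illustrative assumptions, not any particular gateway's API.

# Minimal sketch of a token-bucket rate limiter applied at a gateway layer.
# All names and parameters here are illustrative assumptions.
import time
import threading

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()
        self.lock = threading.Lock()

    def allow(self) -> bool:
        """Return True if a request may pass, False if it should be rejected (e.g. HTTP 429)."""
        with self.lock:
            now = time.monotonic()
            # Refill tokens based on elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

# Usage: one bucket per client, sized well below what the legacy backend can absorb.
bucket = TokenBucket(rate_per_sec=5, capacity=10)
if not bucket.allow():
    print("429 Too Many Requests")  # reject at the gateway, before the legacy system is hit

The key design point is that the rejection happens before the fragile backend sees the surge, which is exactly what application-level rate-limiting in a legacy system cannot guarantee.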
Survey Surprise: Quantum Now in Action at Almost One-Third of Sites
The use cases for quantum — scientific research, complex simulations —
have been documented for a number of years. However, with the arrival of
artificial intelligence, particularly generative AI, on the scene, quantum
technology may start finding more mainstream business use cases. In a separate
report out of Sogeti (a division of Capgemini Group), Akhterul Mustafa calls an
impending mashup of generative AI and quantum computing the "tech world's
version of a dream team, not just changing the game but also pushing the
boundaries of what we thought was possible." ... The convergence of generative
AI and quantum computing brings "some pretty epic perks," Mustafa states. For
example, it enables the supercharging of
AI models. “Training AI models is a beastly task that needs tons of
computing power. Enter quantum computers, which can zip through complex
calculations, potentially making AI smarter and faster.” In addition,
“quantum computers can sift through massive datasets in a blink. Pair that
with generative AI’s knack for cooking up innovative solutions, and you’ve
got a recipe for solving brain-bending problems in areas like health,
environment, and beyond.”
How Continuous Threat Exposure Management (CTEM) Helps Your Business
A CTEM framework typically includes five phases: identification,
prioritization, mitigation, validation, and reporting and improvement. In
the first phase, systems are continuously monitored to identify new or
emerging vulnerabilities and potential attack vectors. This continuous
monitoring is essential to the vulnerability management lifecycle.
Identified vulnerabilities are then assessed based on their potential
impact on critical assets and business operations. In the mitigation
phase, action is taken to defend against high-risk vulnerabilities by
applying patches, reconfiguring systems or adjusting security controls.
The validation stage focuses on testing defenses to ensure vulnerabilities
are properly mitigated and the security posture remains strong. In the
final phase of reporting and improvement, IT leaders gain access to
security metrics and improved defense routes, based on lessons learned
from incident response. ... While both CTEM and vulnerability management
aim to identify and remediate security weaknesses, they differ in scope
and execution. Vulnerability management is more about targeted and
periodic identification of vulnerabilities within an organization based on
a set scan window.
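As a rough illustration, the five phases described above can be modeled as a repeating cycle; the enum and handler names below are invented for illustration and do not reflect any specific CTEM product or tool.

# Illustrative sketch only: the five CTEM phases as a repeating cycle.
from enum import Enum, auto

class CtemPhase(Enum):
    IDENTIFICATION = auto()             # continuously monitor for new vulnerabilities
    PRIORITIZATION = auto()             # rank findings by impact on critical assets
    MITIGATION = auto()                 # patch, reconfigure, or adjust controls
    VALIDATION = auto()                 # test that defenses actually hold
    REPORTING_AND_IMPROVEMENT = auto()  # feed metrics and lessons learned back in

def run_ctem_cycle(handlers: dict) -> None:
    """Run one pass through the CTEM phases; in practice this loops continuously."""
    for phase in CtemPhase:
        handlers[phase]()   # each handler wraps that phase's tooling or process

# Usage sketch: plug in whatever scanners, ticketing, and testing tools you use.
handlers = {phase: (lambda p=phase: print(f"running {p.name.lower()}"))
            for phase in CtemPhase}
run_ctem_cycle(handlers)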
DevOps in the Cloud: Leveraging Cloud Services for Optimal DevOps Practices
A well-designed DevOps transformation strategy can help organizations
deliver software products and services quickly and reliably while
improving the overall efficiency of their development and delivery
processes. ... Cloud platforms facilitate the immediate provisioning of
infrastructure components, including servers, storage units, and
databases. This helps teams swiftly initiate new development and testing
environments, hastening the software development lifecycle. Companies can
see a significant decrease in infrastructure provisioning time by
integrating cloud services. ... DevOps helps development and operations
teams work together. Cloud platforms provide a central place for storing
code, configurations, and important files so everyone can be on the same
page. Additionally, cloud-based communication and collaboration tools
streamline communication and break down silos between teams. ... Cloud
services provide a pay-as-you-go system, so there is no need for a large
upfront investment in hardware. This way, companies can scale their
infrastructure according to their requirements, saving a lot of
money.
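As a small, hedged illustration of on-demand provisioning and pay-as-you-go teardown, the sketch below creates and then deletes a throwaway test bucket with boto3; the bucket name and tag values are placeholders.

# Minimal sketch, assuming boto3 and AWS credentials: a short-lived storage
# resource for an ephemeral test environment. Names and tags are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket_name = "example-devops-test-env-bucket"   # placeholder; bucket names are globally unique

# Provision: create the bucket for a throwaway test environment.
s3.create_bucket(Bucket=bucket_name)
s3.put_bucket_tagging(
    Bucket=bucket_name,
    Tagging={"TagSet": [{"Key": "environment", "Value": "ephemeral-test"}]},
)

# Tear down when the test run finishes, so you only pay for what you used.
s3.delete_bucket(Bucket=bucket_name)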
Reinforcement learning algorithm provides an efficient way to train more reliable AI agents
To boost the reliability of reinforcement learning models for complex
tasks with variability, MIT researchers have introduced a more efficient
algorithm for training them. The findings are published on the arXiv
preprint server. The algorithm strategically selects the best tasks for
training an AI agent so it can effectively perform all tasks in a
collection of related tasks. In the case of traffic signal control, each
task could be one intersection in a task space that includes all
intersections in the city. By focusing on a smaller number of
intersections that contribute the most to the algorithm's overall
effectiveness, this method maximizes performance while keeping the
training cost low. The researchers found that their technique was between
five and 50 times more efficient than standard approaches on an array of
simulated tasks. This gain in efficiency helps the algorithm learn a
better solution in a faster manner, ultimately improving the performance
of the AI agent. "We were able to see incredible performance improvements,
with a very simple algorithm, by thinking outside the box. An algorithm
that is not very complicated stands a better chance of being adopted by
the community because it is easier to implement and easier for others to
understand,"
Quote for the day:
"Too many of us are not living our dreams because we are living our
fears." -- Les Brown