Quote for the day:
"Everything you’ve ever wanted is on the other side of fear." -- George Addair
Encryption Backdoors: The Security Practitioners’ View
On the one hand, “What if such access could deliver the means to stop crime, aid
public safety and stop child exploitation?” But on the other hand, “The idea of
someone being able to look into all private conversations, all the data
connected to an individual, feels exposing and vulnerable in unimaginable ways.”
As a security practitioner, he has both moral and practical concerns. “Even if
lawful access isn’t the same as mass surveillance, it would be difficult to
distinguish between ‘good’ and ‘bad’ users without analyzing them all.” Morally,
it is a reversal of the presumption of innocence and means no-one can have any
guaranteed privacy. Professionally he says, “Once the encryption can be broken,
once there is a backdoor allowing someone to access data, trust in that vendor
will lessen due to the threat to security and privacy introducing another attack
vector into the equation.” It is this latter point that is the focus for most
security practitioners. “From a practitioner’s standpoint,” says Rob T Lee,
chief of research at SANS Institute and founder at Harbingers, “we’ve seen time
and again that once a vulnerability exists, it doesn’t stay in the hands of the
‘good guys’ for long. It becomes a target. And once it’s exploited, the damage
isn’t theoretical. It affects real people, real businesses, and critical
infrastructure.”
Visa CISO Subra Kumaraswamy on Never Allowing Cyber Complacency
Kumaraswamy is always thinking about talent and technology in cybersecurity.
Talent is a perennial concern in the industry, and Visa is looking to grow its
own. The Visa Payments Learning Program, launched in 2023, aims to help close
the skills gap in cyber through training and certification. “We are offering
this to all of the employees. We’re offering it to our partners, like the banks,
our customers,” says Kumaraswamy. Right now, Visa leverages approximately 115
different technologies in cyber, and Kumaraswamy is constantly evaluating where
to go next. “How do I [get to] the 116th, 117th, 118th?” he asks. “That needs to
be added because every layer counts.” Of course, GenAI is a part of that
equation. Thus far, Kumaraswamy and his team are exploring more than 80
different GenAI initiatives within cyber. “We’ve already taken about three to
four of those initiatives … to the entire company. That includes what we
call a ‘shift left’ process within Visa. It is now enabled with agentic AI. It’s
reducing the time to find bugs in the code. It is also helping reduce the time
to investigate incidents,” he shares. Visa is also taking its best practices in
cybersecurity and sharing them with its customers. “We can think of this as
value-added services to the mid-size banks, the credit unions, who don’t have
the scale of Visa,” says Kumaraswamy.
Agentic AI in automotive retail: Creating always-on sales teams
To function effectively, digital agents need memory. This is where memory
modules come into play. These components store key facts about ongoing
interactions, such as the customer’s vehicle preferences, budget, and previous
questions. For instance, if a returning visitor had previously shown interest in
SUVs under a specific price range, the memory module allows the AI to recall
that detail. Instead of restarting the conversation, the agent can pick up where
it left off, offering an experience that feels personalised and informed. Memory
modules are critical for maintaining consistency across long or repeated
interactions. Without them, agentic AI would struggle to replicate the attentive
service provided by a human salesperson who remembers returning customers. ...
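The memory-module idea described above can be sketched in a few lines of Python. This is a minimal illustration, not a production design; the class name, the fact keys, and the greeting logic are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryModule:
    """Minimal per-customer memory: a key/value store of recalled facts."""
    facts: dict = field(default_factory=dict)

    def remember(self, key: str, value) -> None:
        self.facts[key] = value

    def recall(self, key: str, default=None):
        return self.facts.get(key, default)

# First visit: the agent records what it learns about the customer.
memory = MemoryModule()
memory.remember("body_style", "SUV")
memory.remember("max_budget", 40000)

# Return visit: the agent resumes the conversation instead of restarting it.
if memory.recall("body_style"):
    greeting = (f"Welcome back! Still interested in "
                f"{memory.recall('body_style')}s under "
                f"${memory.recall('max_budget'):,}?")
```

In a real deployment such a module would persist facts to a store keyed by customer identity and feed the recalled facts into the agent's context; the sketch shows only the remember/recall pattern.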
Despite the intelligence of agentic AI, there are scenarios where human
involvement is still needed. Whether due to complex financing questions or
emotional decision-making, some buyers prefer speaking to a person before
finalizing their decision. A well-designed agentic system should recognize when
it has reached the limits of its capabilities. In such moments, it should
facilitate a handover to a human representative. This includes summarizing the
conversation so far, alerting the sales team in real-time, and scheduling a
follow-up if required.
Multicloud explained: Why it pays to diversify your cloud strategy
If your cloud provider were to suffer a massive and prolonged outage, that would
have major repercussions on your business. While that’s pretty unlikely if you
go with one of the hyperscalers, it’s possible with a more specialized vendor.
And even with the big players, you may discover annoyances, performance
problems, unanticipated charges, or other issues that might cause you to rethink
your relationship. Using services from multiple vendors makes it easier to end a
relationship that feels like it’s gone stale without you having to retool your
entire infrastructure. It can be a great means to determine which cloud
providers are best for which workloads. And it can’t hurt as a negotiating
tactic when contracts expire or when you’re considering adding new cloud
services. ... If you add more cloud resources by adding services from a
different vendor, you’ll need to put in extra effort to get the two clouds to
play nicely together, a process that can range from “annoying” to “impossible.”
Even after bridging the divide, there’s administrative overhead involved—it’ll
be harder to keep tabs on data protection and privacy, for instance, and you’ll
need to track cloud usage and the associated costs for multiple vendors. Network
bandwidth is another consideration: many vendors make it cheap and easy to move
data to and within their cloud, but may charge a premium to export it.
Decentralized Architecture Needs More Than Autonomy
4 new studies about agentic AI from the MIT Initiative on the Digital Economy
In their study, Aral and Ju found that human-AI pairs excelled at some tasks
and underperformed human-human pairs on others. Humans paired with AI were
better at creating text but worse at creating images, though campaigns from
both groups performed equally well when deployed in real ads on social media
site X. Looking beyond performance, the researchers found that the actual
process of how people worked changed when they were paired with AI.
Communication (as measured by messages sent between partners) increased for
human-AI pairs, with less time spent on editing text and more time spent on
generating text and visuals. Human-AI pairs sent far fewer social messages,
such as those typically intended to build rapport. “The human-AI teams focused
more on the task at hand and, understandably, spent less time socializing,
talking about emotions, and so on,” Ju said. “You don’t have to do that with
agents, which leads directly to performance and productivity improvements.” As
a final part of the study, the researchers varied the assigned personality of
the AI agents using the Big Five personality traits: openness,
conscientiousness, extraversion, agreeableness, and neuroticism. The AI
personality pairing experiments revealed that programming AI personalities to
complement human personalities greatly enhanced collaboration.
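One common way to assign a personality to an AI agent, which may or may not match the study's actual setup, is to render trait levels into the agent's system prompt. The prompt template and trait levels below are illustrative assumptions, not the researchers' code.

```python
# Illustrative only: rendering a Big Five profile into an agent system prompt.
BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def personality_prompt(profile: dict) -> str:
    """Render a trait profile (trait -> 'high'/'low') into a system prompt."""
    assert set(profile) == set(BIG_FIVE), "profile must cover all five traits"
    lines = [f"- You are {level} in {trait}." for trait, level in profile.items()]
    return "You are a marketing-campaign collaborator.\n" + "\n".join(lines)

# Example: a profile intended to complement a particular human partner.
profile = {"openness": "high", "conscientiousness": "high",
           "extraversion": "high", "agreeableness": "high",
           "neuroticism": "low"}
prompt = personality_prompt(profile)
```

The interesting design question the study raises is not the template itself but which profile to pair with which human, since complementary pairings were what improved collaboration.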
DevOps Backup: Top Reasons for DevOps and Management
How AI can save us from our 'infinite' workdays, according to Microsoft
Activity is not the same as progress. What good is work if it's just busy work
and not tackling the right tasks or goals? Here, Microsoft advises adopting
the Pareto Principle, which postulates that 20% of the work should deliver 80%
of the outcomes. And how does this involve AI? Use AI agents to handle
low-value tasks, such as status meetings, routine reports, and administrative
churn. That frees up employees to focus on deeper tasks that require the human
touch. For this, Microsoft suggested watching the leadership keynote from the
Microsoft 365 Community Conference on Building the Future Firm. ... Instead of
using an org chart to delineate roles and responsibilities, turn to a work
chart. A work chart is driven more by outcome, in which teams are organized
around a specific goal. Here, you can use AI to fill in some of the gaps,
again freeing up employees for more in-depth work. ... Finally, Microsoft
pointed to a new breed of professionals known as agent bosses. They handle the
infinite workday not by putting in more hours but by working smarter. One
example cited in the report is Alex Farach, a researcher at Microsoft. Instead
of getting swamped in manual work, Farach uses a trio of AI agents to act as
his assistants. One collects daily research. The second runs statistical
analysis. And the third drafts briefs to tie all the data together.
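The three-agent division of labor described above can be sketched as a simple pipeline. The stand-in functions, data, and brief format below are invented for illustration; real agents would call an LLM or external tools at each step.

```python
from statistics import mean

def research_agent():
    # Stand-in for an agent that collects daily research items.
    return [{"topic": "AI adoption", "score": 0.72},
            {"topic": "Remote work", "score": 0.65}]

def analysis_agent(items):
    # Stand-in for an agent that runs statistical analysis on the items.
    return {"n": len(items), "mean_score": mean(i["score"] for i in items)}

def briefing_agent(items, stats):
    # Stand-in for an agent that drafts a brief tying the data together.
    topics = ", ".join(i["topic"] for i in items)
    return (f"Daily brief: {stats['n']} items ({topics}); "
            f"average relevance {stats['mean_score']:.2f}.")

# The pipeline: collect, analyze, then summarize.
items = research_agent()
stats = analysis_agent(items)
brief = briefing_agent(items, stats)
```

The point of the pattern is that each agent has one narrow job and a structured hand-off, which is what makes the trio easier to supervise than a single do-everything assistant.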
Data Governance and AI Governance: Where Do They Intersect?
AIG and DG share common responsibilities in guiding data as a product that
AI systems create and consume, despite their differences. Both governance
programs evaluate data integration, quality, security, privacy, and
accessibility. For instance, both governance frameworks need to ensure
quality information meets business needs. If a major retailer discovered
their AI-powered product recommendation engine was suggesting irrelevant
items to customers, then DG and AIG would want the issue resolved. However,
either approach, or a combination of the two, could be best suited to solving the problem.
Determining the right governance response requires analyzing the root issue.
... DG and AIG provide different approaches; which works best depends on the
problem. Take the example above of inaccurate pricing information delivered to a
customer in response to a query. The data governance team audits the product
data pipeline and finds inconsistent data standards and missing attributes
feeding into the AI model. However, the AI governance team also identifies
opportunities to enhance the recommendation algorithm’s logic for weighting
customer preferences. By taking a collaborative approach that draws on both
data governance and AI governance perspectives, the retailer could resolve the
data quality issues through DG while AIG improves the AI model’s mechanics.
Deepfake Rebellion: When Employees Become Targets
Surviving and mitigating such an attack requires moving beyond purely
technological solutions. While AI detection tools can help, the first and most
critical line of defense lies in empowering the human factor. A resilient
organization builds its bulwarks on human risk management and security awareness
training, specifically tailored to counter the mental manipulation inherent in
deepfake attacks. Rapidly deploy trained ambassadors. These are not IT security
personnel, but respected peers from diverse departments trained to lead
workshops. ... Leadership must address employees first, acknowledge the
incident, express understanding of the distress caused, and unequivocally state
the deepfake is under investigation. Silence breeds speculation and distrust.
There should be channels for employees to voice concerns, ask questions, and
access support without fear of retribution. This helps to mitigate panic and
rebuild a sense of community. Ensure a unified public response, coordinating
Comms, Legal, and HR. ... The antidote to synthetic mistrust is authentic trust,
built through consistent leadership, transparent communication, and demonstrable
commitment to shared values. The goal is to create an environment where
verification habits are second nature. It’s about discerning malicious
fabrication from human error or disagreement.