Quote for the day:
"Everything you’ve ever wanted is on the
other side of fear." -- George Addair

On the one hand, “What if such access could deliver the means to stop crime, aid
public safety and stop child exploitation?” But on the other hand, “The idea of
someone being able to look into all private conversations, all the data
connected to an individual, feels exposing and vulnerable in unimaginable ways.”
As a security practitioner, he has both moral and practical concerns. “Even if
lawful access isn’t the same as mass surveillance, it would be difficult to
distinguish between ‘good’ and ‘bad’ users without analyzing them all.” Morally,
it is a reversal of the presumption of innocence and means no-one can have any
guaranteed privacy. Professionally he says, “Once the encryption can be broken,
once there is a backdoor allowing someone to access data, trust in that vendor
will lessen due to the threat to security and privacy introducing another attack
vector into the equation.” It is this latter point that is the focus for most
security practitioners. “From a practitioner’s standpoint,” says Rob T Lee,
chief of research at SANS Institute and founder at Harbingers, “we’ve seen time
and again that once a vulnerability exists, it doesn’t stay in the hands of the
‘good guys’ for long. It becomes a target. And once it’s exploited, the damage
isn’t theoretical. It affects real people, real businesses, and critical
infrastructure.”

Kumaraswamy is always thinking about talent and technology in cybersecurity.
Talent is a perennial concern in the industry, and Visa is looking to grow its
own. The Visa Payments Learning Program, launched in 2023, aims to help close
the skills gap in cyber through training and certification. “We are offering
this to all of the employees. We’re offering it to our partners, like the banks,
our customers,” says Kumaraswamy. Right now, Visa leverages approximately 115
different technologies in cyber, and Kumaraswamy is constantly evaluating where
to go next. “How do I [get to] the 116th, 117th, 118th?” he asks. “That needs to
be added because every layer counts.” Of course, GenAI is a part of that
equation. Thus far, Kumaraswamy and his team are exploring more than 80
different GenAI initiatives within cyber. “We’ve already taken about three to
four of those initiatives … to the entire company. That includes what we call a
‘shift left’ process within Visa. It is now enabled with agentic AI. It’s
reducing the time to find bugs in the code. It is also helping reduce the time
to investigate incidents,” he shares. Visa is also taking its best practices in
cybersecurity and sharing them with its customers. “We can think of this as
value-added services to the mid-size banks, the credit unions, who don’t have
the scale of Visa,” says Kumaraswamy.
To function effectively, digital agents need memory. This is where memory
modules come into play. These components store key facts about ongoing
interactions, such as the customer’s vehicle preferences, budget, and previous
questions. For instance, if a returning visitor had previously shown interest in
SUVs under a specific price range, the memory module allows the AI to recall
that detail. Instead of restarting the conversation, the agent can pick up where
it left off, offering an experience that feels personalised and informed. Memory
modules are critical for maintaining consistency across long or repeated
interactions. Without them, agentic AI would struggle to replicate the attentive
service provided by a human salesperson who remembers returning customers. ...
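The memory-module idea described above can be sketched as a small per-visitor key-value store. This is a minimal illustration; the names (`CustomerMemory`, `remember`, `recall`) are assumptions, not any particular framework's API:

```python
class CustomerMemory:
    """Toy memory module: a per-visitor store of key facts."""

    def __init__(self):
        self._store = {}  # visitor_id -> dict of remembered facts

    def remember(self, visitor_id, key, value):
        """Record one fact about a visitor."""
        self._store.setdefault(visitor_id, {})[key] = value

    def recall(self, visitor_id):
        """Return everything known about a visitor (empty dict if new)."""
        return self._store.get(visitor_id, {})


memory = CustomerMemory()
memory.remember("visitor-42", "body_style", "SUV")
memory.remember("visitor-42", "max_budget", 35000)

# On a return visit, the agent picks up where it left off:
facts = memory.recall("visitor-42")
print(facts)  # {'body_style': 'SUV', 'max_budget': 35000}
```

A production system would persist this store and apply retention and privacy rules, but the retrieval pattern is the same.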
Despite the intelligence of agentic AI, there are scenarios where human
involvement is still needed. Whether due to complex financing questions or
emotional decision-making, some buyers prefer speaking to a person before
finalizing their decision. A well-designed agentic system should recognize when
it has reached the limits of its capabilities. In such moments, it should
facilitate a handover to a human representative. This includes summarizing the
conversation so far, alerting the sales team in real-time, and scheduling a
follow-up if required.
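The handover behavior described above might be outlined as follows. The escalation topics, threshold, and helper names are invented for illustration, not taken from any real system:

```python
# Illustrative escalation triggers; a real system would tune these.
ESCALATION_TOPICS = {"financing", "trade-in dispute", "complaint"}
MAX_FAILED_ANSWERS = 2

def should_hand_over(topic, failed_answers):
    """Has the agent reached the limits of its capabilities?"""
    return topic in ESCALATION_TOPICS or failed_answers >= MAX_FAILED_ANSWERS

def hand_over(conversation, sales_queue):
    """Summarize the conversation so far and alert the sales team."""
    summary = " | ".join(turn["text"] for turn in conversation)
    sales_queue.append({"summary": summary, "schedule_follow_up": True})

conversation = [
    {"text": "Looking at SUVs under 35k"},
    {"text": "What financing rate can I get?"},
]
sales_queue = []
if should_hand_over("financing", failed_answers=0):
    hand_over(conversation, sales_queue)
print(sales_queue[0]["summary"])
```

The key design point is that escalation carries context with it: the human picks up a summary, not a cold start.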

If your cloud provider were to suffer a massive and prolonged outage, that would
have major repercussions on your business. While that’s pretty unlikely if you
go with one of the hyperscalers, it’s possible with a more specialized vendor.
And even with the big players, you may discover annoyances, performance
problems, unanticipated charges, or other issues that might cause you to rethink
your relationship. Using services from multiple vendors makes it easier to end a
relationship that feels like it’s gone stale without you having to retool your
entire infrastructure. It can be a great means to determine which cloud
providers are best for which workloads. And it can’t hurt as a negotiating
tactic when contracts expire or when you’re considering adding new cloud
services. ... If you add more cloud resources by adding services from a
different vendor, you’ll need to put in extra effort to get the two clouds to
play nicely together, a process that can range from “annoying” to “impossible.”
Even after bridging the divide, there’s administrative overhead involved—it’ll
be harder to keep tabs on data protection and privacy, for instance, and you’ll
need to track cloud usage and the associated costs for multiple vendors. Then
there’s network bandwidth: many vendors make it cheap and easy to move data to
and within their cloud, but may charge a premium to export it.
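The egress-premium point is easy to quantify. A minimal sketch, using placeholder per-GB rates that do not reflect any real vendor's pricing:

```python
def transfer_costs(gb_moved, ingress_per_gb=0.00, egress_per_gb=0.09):
    """Return (cost to move data in, cost to move it out).

    The per-GB rates are illustrative placeholders only.
    """
    return gb_moved * ingress_per_gb, gb_moved * egress_per_gb

in_cost, out_cost = transfer_costs(5000)  # moving 5 TB each way
print(f"ingress: ${in_cost:.2f}, egress: ${out_cost:.2f}")
```

Even at modest rates, repeatedly shuttling data between clouds can dominate the bill, which is why egress fees matter in multicloud planning.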
Decentralized architecture isn’t just a matter of system design - it’s a
question of how decisions get made, by whom, and under what conditions. In
theory, decentralization empowers teams. In practice, it often exposes a hidden
weakness: decision-making doesn’t scale easily. We started to feel the cracks as
our teams expanded quickly and our organizational landscape became more complex.
As teams multiplied, architectural alignment started to suffer - not because
people didn’t care, but because they didn’t know how or when to engage in
architectural decision-making. ... The shift from control to trust requires more
than mindset - it needs practice. We leaned into a lightweight but powerful
toolset to make decentralized decision-making work in real teams. Chief among
them is the Architectural Decision Record (ADR). ADRs are often misunderstood as
documentation artifacts. But in practice, they are confidence-building tools.
They bring visibility to architectural thinking, reinforce accountability, and
help teams make informed, trusted decisions - without relying on central
authority. ... Decentralized architecture works best when decisions don’t happen
in isolation. Even with good individual practices - like ADRs and advice-seeking
- teams still need shared spaces to build trust and context across the
organization. That’s where Architecture Advice Forums come in.
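An ADR can be very short. The example below, in the widely used Nygard format (context, decision, consequences), is entirely invented to show the shape, with an extra line recording the advice sought:

```
# ADR-012: Adopt event sourcing for the order service

Status: Accepted (after advice process)
Context: Order state changes must be auditable across a growing set of teams.
Decision: Persist every order mutation as an append-only event log; derive
current state by replay.
Consequences: Full audit trail and easier debugging; teams must learn event
versioning and accept higher storage costs.
Advice sought: platform team (storage), payments team (consistency guarantees).
```

The "Advice sought" line is what turns the record from documentation into a trust-building artifact: it shows who was consulted before the decision was made.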

In their study, Aral and Ju found that human-AI pairs excelled at some tasks
and underperformed human-human pairs on others. Humans paired with AI were
better at creating text but worse at creating images, though campaigns from
both groups performed equally well when deployed in real ads on social media
site X. Looking beyond performance, the researchers found that the actual
process of how people worked changed when they were paired with AI.
Communication (as measured by messages sent between partners) increased for
human-AI pairs, with less time spent on editing text and more time spent on
generating text and visuals. Human-AI pairs sent far fewer social messages,
such as those typically intended to build rapport. “The human-AI teams focused
more on the task at hand and, understandably, spent less time socializing,
talking about emotions, and so on,” Ju said. “You don’t have to do that with
agents, which leads directly to performance and productivity improvements.” As
a final part of the study, the researchers varied the assigned personality of
the AI agents using the Big Five personality traits: openness,
conscientiousness, extraversion, agreeableness, and neuroticism. The AI
personality pairing experiments revealed that programming AI personalities to
complement human personalities greatly enhanced collaboration.

Depending on the industry, you may need to comply with different security
protocols, acts, certifications, and standards. If your company operates in a
highly regulated industry, like healthcare, technology, financial services,
pharmaceuticals, manufacturing, or energy, those security and compliance
regulations and protocols can be even stricter. Thus, to meet these stringent
compliance and security requirements, your organization needs to implement
security measures like role-based access controls, encryption, and ransomware
protection, along with defined RTOs and RPOs, risk-assessment plans, and other
compliance best practices. And, of course, a backup and disaster recovery plan
is one of them, too. It ensures that the company will be able to restore its
critical data fast, guaranteeing the availability, accessibility, security,
and confidentiality of that data. ... Another issue
that is closely related to compliance is data retention. Some compliance
regulations require organizations to keep their data for a long time. As an
example, we can mention NIST’s requirements from its Security and Privacy
Controls for Information Systems and Organizations: “… Storing audit records
on separate systems or components applies to initial generation as well as
backup or long-term storage of audit records…”
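Whether a backup schedule satisfies a recovery point objective (RPO, the maximum tolerable data loss) can be checked mechanically: on failure, data loss equals the age of the newest backup. A minimal sketch with an example threshold:

```python
from datetime import datetime, timedelta

# Example threshold; the real value comes from your compliance requirements.
RPO = timedelta(hours=4)  # maximum tolerable data loss

def rpo_met(last_backup, now):
    """On failure, data loss equals the age of the newest backup."""
    return now - last_backup <= RPO

now = datetime(2025, 6, 1, 12, 0)
print(rpo_met(datetime(2025, 6, 1, 9, 30), now))  # True: backup is 2.5 h old
print(rpo_met(datetime(2025, 6, 1, 6, 0), now))   # False: backup is 6 h old
```

The recovery time objective (RTO) is verified the same way, but against measured restore duration rather than backup age.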

Activity is not the same as progress. What good is work if it's just busy work
and not tackling the right tasks or goals? Here, Microsoft advises adopting
the Pareto Principle, which postulates that 20% of the work should deliver 80%
of the outcomes. And how does this involve AI? Use AI agents to handle
low-value tasks, such as status meetings, routine reports, and administrative
churn. That frees up employees to focus on deeper tasks that require the human
touch. For this, Microsoft suggested watching the leadership keynote from the
Microsoft 365 Community Conference on Building the Future Firm. ... Instead of
using an org chart to delineate roles and responsibilities, turn to a work
chart. A work chart is driven more by outcomes, with teams organized around a
specific goal. Here, you can use AI to fill in some of the gaps,
again freeing up employees for more in-depth work. ... Finally, Microsoft
pointed to a new breed of professionals known as agent bosses. They handle the
infinite workday not by putting in more hours but by working smarter. One
example cited in the report is Alex Farach, a researcher at Microsoft. Instead
of getting swamped in manual work, Farach uses a trio of AI agents to act as
his assistants. One collects daily research. The second runs statistical
analysis. And the third drafts briefs to tie all the data together.
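That trio pattern, each agent feeding the next, can be sketched as a simple pipeline. The functions below are stand-ins with stubbed data, not Farach's actual agents:

```python
def collect_research(topic):
    """Agent 1: gather daily items (stubbed with static data here)."""
    return [{"topic": topic, "citations": n} for n in (3, 5, 8)]

def run_statistics(items):
    """Agent 2: run summary statistics over the collected data."""
    counts = [item["citations"] for item in items]
    return {"n": len(counts), "mean": sum(counts) / len(counts)}

def draft_brief(topic, stats):
    """Agent 3: tie the data together into a short brief."""
    return f"{topic}: {stats['n']} items, mean citations {stats['mean']:.1f}"

items = collect_research("agentic AI adoption")
print(draft_brief("agentic AI adoption", run_statistics(items)))
```

Each stage has a narrow contract, which is what makes the delegation manageable: the "agent boss" reviews outputs at the seams rather than doing the work.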

AIG and DG share common responsibilities in guiding data as a product that
AI systems create and consume, despite their differences. Both governance
programs evaluate data integration, quality, security, privacy, and
accessibility. For instance, both governance frameworks need to ensure
quality information meets business needs. If a major retailer discovered
their AI-powered product recommendation engine was suggesting irrelevant
items to customers, then DG and AIG would both want the issue resolved.
However, either approach, or a combination of the two, could be best suited to
solving the problem.
Determining the right governance response requires analyzing the root issue.
... DG and AIG provide different approaches; which works best depends on the
problem. Take the example, above, of inaccurate pricing information delivered
to a customer in response to a query. The data governance team audits the product
data pipeline and finds inconsistent data standards and missing attributes
feeding into the AI model. However, the AI governance team also identifies
opportunities to enhance the recommendation algorithm’s logic for weighting
customer preferences. By taking a collaborative approach that draws on both
data governance and AI governance perspectives, the retailer could resolve the
data quality issues through DG while AIG improves the AI model’s mechanics.

Surviving and mitigating such an attack requires moving beyond purely
technological solutions. While AI detection tools can help, the first and most
critical line of defense lies in empowering the human factor. A resilient
organization builds its bulwarks on human risk management and security awareness
training, specifically tailored to counter the mental manipulation inherent in
deepfake attacks. Rapidly deploy trained ambassadors. These are not IT security
personnel, but respected peers from diverse departments trained to lead
workshops. ... Leadership must address employees first, acknowledge the
incident, express understanding of the distress caused, and unequivocally state
the deepfake is under investigation. Silence breeds speculation and distrust.
There should be channels for employees to voice concerns, ask questions, and
access support without fear of retribution. This helps to mitigate panic and
rebuild a sense of community. Ensure a unified public response, coordinating
Comms, Legal, and HR. ... The antidote to synthetic mistrust is authentic trust,
built through consistent leadership, transparent communication, and demonstrable
commitment to shared values. The goal is to create an environment where
verification habits are second nature. It’s about discerning malicious
fabrication from human error or disagreement.