Quote for the day:
“The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things.” -- Ronald Reagan
Understanding Problems in the Data Supply Chain: A Q&A with R Systems’ AI Director Samiksha Mishra
Think of data as moving through a supply chain: it’s sourced, labeled,
cleaned, transformed, and then fed into models. If bias enters early – through
underrepresentation in data collection, skewed labeling, or feature
engineering – it doesn’t just persist but multiplies as the data moves
downstream. By the time the model is trained, bias is deeply entrenched, and
fixes can only patch symptoms, not address the root cause. Just like supply
chains for physical goods need quality checks at every stage, AI systems need
fairness validation points throughout the pipeline to prevent bias from
becoming systemic. ... The key issue is that a small representational bias can
be significantly amplified across the AI data supply chain due to reusability
and interdependencies. When a biased dataset is reused, its initial flaw is
propagated to multiple models and contexts. This is further magnified during
preprocessing, as methods like feature scaling and augmentation can encode a
biased feature into multiple new variables, effectively multiplying its
weight. ... One effective way to integrate validation layers and bias filters
into AI systems without sacrificing speed is to design them as lightweight
checkpoints throughout the pipeline rather than heavy post-hoc add-ons. At the
data stage, simple distributional checks such as χ² tests or KL-divergence can
flag demographic imbalances at low computational cost.
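To make that concrete, here is a minimal sketch of what such a data-stage checkpoint could look like in Python. It is illustrative only: the group counts, reference shares, and significance threshold are invented for the example, and a real pipeline would pull these from its own metadata. Because the test runs on aggregate counts rather than raw records, it adds negligible overhead.

```python
# Illustrative sketch of a lightweight fairness checkpoint: compare the
# demographic mix of a sampled batch against expected population shares
# using a chi-squared goodness-of-fit test and KL divergence.
# Group counts and reference proportions below are made up for the example.
import numpy as np
from scipy.stats import chisquare
from scipy.special import rel_entr

def demographic_imbalance_check(observed_counts, expected_props, alpha=0.05):
    """Flag a batch whose group distribution drifts from the reference."""
    observed = np.asarray(observed_counts, dtype=float)
    expected = np.asarray(expected_props, dtype=float) * observed.sum()

    chi2_stat, p_value = chisquare(f_obs=observed, f_exp=expected)

    # KL divergence between observed and reference group distributions.
    obs_p = observed / observed.sum()
    exp_p = expected / expected.sum()
    kl = rel_entr(obs_p, exp_p).sum()

    return {"chi2": chi2_stat, "p_value": p_value,
            "kl_divergence": kl, "flagged": p_value < alpha}

# Example: counts per demographic group in a batch vs. expected shares.
report = demographic_imbalance_check([480, 310, 90, 20],
                                     [0.45, 0.35, 0.12, 0.08])
print(report)
```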
Hackers Manipulate Claude AI Chatbot as Part of at Least 17 Cyber Attacks
While AI’s use in hacking has thus far been more hype than actual threat, this
new development is a concrete indicator that it is, at a minimum, now
substantially lowering the threshold for non-technical actors to execute
viable cyber attacks. It is also clearly capable of speeding up and automating
certain common aspects of attacks for the more polished professional hackers,
increasing their output capability during windows in which they have the
element of surprise and novelty. While the GTG-2002 activity is the most
complex thus far, the threat report notes the Claude AI chatbot has also been
successfully used for more individualized components of various cyber attacks.
This includes use by suspected North Korean state-sponsored hackers as part of
their remote IT worker scams, including not just crafting detailed personas
but also taking employment tests and doing day-to-day work once hired. Another
highly active party in the UK has been using Claude to develop individual
ransomware tools with sophisticated capabilities and sell them on underground
forums, at a price of $400 to $1,200 each. ... Anthropic says that it has
responded to the cyber attacks by adding a tailored classifier specifically
for the observed activity and a new detection method to ensure similar
activity is captured by the standard security pipeline.
Agentic AI: Storage and ‘the biggest tech refresh in IT history’
The interesting thing about agentic infrastructure is that agents can
ultimately work across a number of different datasets, and even in different
domains. You have essentially two types of agents: workers, and supervisory
agents. So, maybe I want to do something
simple like develop a sales forecast for my product while reviewing all the
customer conversations and the different databases or datasets that could
inform my forecast. Well, that would take me to having agents that work on and
process a number of different independent datasets that may not even be in my
datacentre. ... So, anything that requires analytics requires a data
warehouse. Anything that requires an understanding of unstructured data not
only requires a file system or an object storage system, but it also requires
a vector database to help AI agents understand what’s in those file systems
through a process called retrieval-augmented generation (RAG). The first thing
that needs to be wrestled down is a reconciliation of this idea that there’s
all sorts of different data sources, and all of them need to be modernised or
ready for the AI computing that is about to hit these data sources. ... The
first thing I would say is that there are best practices out in the market
that should definitely be adhered to.
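As a rough illustration of the retrieval step behind RAG described above, the sketch below embeds chunks from a document store into a toy in-memory vector index and pulls the closest matches into a prompt. Everything here is a stand-in: embed() fakes an embedding model with seeded random vectors, and VectorIndex stands in for a real vector database.

```python
# Minimal sketch of the retrieval step behind RAG: chunks from a file or
# object store are embedded into vectors, indexed, and the nearest chunks
# are prepended to the model prompt as context.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: deterministic within a run; a real system
    would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

class VectorIndex:
    """Toy in-memory index standing in for a real vector database."""
    def __init__(self):
        self.vectors, self.chunks = [], []

    def add(self, chunk: str):
        self.vectors.append(embed(chunk))
        self.chunks.append(chunk)

    def search(self, query: str, k: int = 3):
        # Cosine similarity (vectors are unit-normalised).
        sims = np.stack(self.vectors) @ embed(query)
        return [self.chunks[i] for i in np.argsort(sims)[::-1][:k]]

index = VectorIndex()
for doc in ["Q3 sales dipped in EMEA.",
            "Support tickets spiked after launch.",
            "Renewal rate held at 92%."]:
    index.add(doc)

query = "What happened to sales last quarter?"
context = "\n".join(index.search(query, k=2))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```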
Tech leaders: Are you balancing AI transformation with employee needs?
On the surface, it might seem naïve for companies to talk about AI building
people up and improving jobs when there’s so much negative news about its
potential impact on employment. For example, Ford CEO Jim Farley recently
predicted that AI will replace half of all white-collar workers in the US. Also,
Fiverr CEO Micha Kaufman sent a memo to his team in which he said, “AI is coming
for your job. Heck, it’s coming for my job, too. This is a wake-up call. It
doesn’t matter if you’re a programmer, designer, product manager, data
scientist, lawyer, customer support rep, salesperson, or a finance person. AI is
coming for you.” Several tech companies like Google, Microsoft, Amazon, and
Salesforce have also been talking about how much of their work is already being
done by AI. Of course, tech executives could just be hyping the technology they
sell. But not all AI-related layoffs may actually be due to AI. ... AI,
especially agentic AI, is changing the nature of work, and how companies will
need to be organized, says Mary Alice Vuicic, chief people officer at Thomson
Reuters. “Many companies ripped up their AI plans as agentic AI came to the
forefront,” she says, as it’s moved on from being an assistant to being a team
that works together to accomplish delegated tasks. This has the potential for
unprecedented productivity improvements, but also unprecedented opportunities
for augmentation, expansion, and growth.
When rivals come fishing: What keeps talent from taking the bait
Organisations can and do protect themselves with contracts—non-compete
agreements, non-solicitation rules, confidentiality policies. They matter
because they protect sensitive knowledge and prevent rivals from taking
shortcuts. But they are not the same as retention. An employee with ambition, if
disengaged, will eventually walk. ... If money were the sole reason employees
left, the problem would be simpler. Counter-offers would solve it, at least
temporarily. But every HR leader knows the story: a high performer accepts a
lucrative counter-offer, only to resign again six months later. The issue lies
elsewhere—career stagnation, lack of recognition, weak culture, or a disconnect
with leadership. ... What works instead is open dialogue, competitive but fair
rewards, and most importantly, visible career pathways. Employees, she stresses,
need to feel that their organisation is invested in their long-term development,
not just scrambling to keep them for another year. Tiwari also highlights
something companies often neglect: succession planning. By identifying and
nurturing future leaders early, organisations create continuity and reduce the
shock when someone does leave. Alongside this, clear policies and awareness
about confidentiality ensure that intellectual property remains protected even
in times of churn. The recent frenzy of AI talent raids among global tech giants
is an extreme example of this battle.
Agentic AI: A CISO’s security nightmare in the making?
CISOs don’t like operating in the dark, and this is one of the risks agentic AI
brings. It can be deployed autonomously by teams or even individual users
through a variety of applications without proper oversight from security and IT
departments. This creates “shadow AI agents” that can operate without controls
such as authentication, which makes it difficult to track their actions and
behavior. This in turn can pose significant security risks, because unseen
agents can introduce vulnerabilities. ... Agentic AI introduces the ability to
make independent decisions and act without human oversight. This capability
presents its own cybersecurity risk by potentially leaving organizations
vulnerable. “Agentic AI systems are goal-driven and capable of making decisions
without direct human approval,” Joyce says. “When objectives are poorly scoped
or ambiguous, agents may act in ways that are misaligned with enterprise
security or ethical standards.” ... Agents often collaborate with other agents
to complete tasks, resulting in complex chains of communication and
decision-making, PwC’s Joyce says. “These interactions can propagate sensitive
data in unintended ways, creating compliance and security risks,” he says. ...
Many early-stage agents rely on brittle or undocumented APIs or browser
automation, Mayham says. “We’ve seen cases where agents leak tokens via poorly
scoped integrations, or exfiltrate data through unexpected plugin chains. The
more fragmented the vendor stack, the bigger the surface area for something like
this to happen,” he says.
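One common mitigation for both shadow agents and poorly scoped integrations is to force every tool call through a gateway that authenticates the agent and checks an explicit scope allowlist. The sketch below is illustrative, not any vendor's API: the agent IDs, scopes, and tools are invented for the example.

```python
# Illustrative control for "shadow" agents: every agent has a registered
# identity, and each tool call is checked against an explicit scope
# allowlist and logged. Agent names, scopes, and tools are invented.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

# Central registry: which scopes each known agent has been granted.
AGENT_SCOPES = {
    "sales-forecast-agent": {"crm:read", "warehouse:read"},
    "support-triage-agent": {"tickets:read", "tickets:write"},
}

def gated_tool_call(agent_id: str, scope: str, tool, *args, **kwargs):
    """Refuse calls from unregistered agents or outside granted scopes."""
    if agent_id not in AGENT_SCOPES:
        log.warning("blocked unregistered agent %s", agent_id)
        raise PermissionError(f"unknown agent: {agent_id}")
    if scope not in AGENT_SCOPES[agent_id]:
        log.warning("blocked %s: scope %s not granted", agent_id, scope)
        raise PermissionError(f"scope not granted: {scope}")
    log.info("agent=%s scope=%s tool=%s", agent_id, scope, tool.__name__)
    return tool(*args, **kwargs)

def read_crm(account: str) -> str:
    return f"records for {account}"

# Allowed call succeeds; an out-of-scope call raises and is logged.
print(gated_tool_call("sales-forecast-agent", "crm:read", read_crm, "Acme"))
```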
How To Get The Best Out Of People Without Causing Burnout At Work
Comfort zones feel safe, but they also limit growth. Employees who stick with
what they know may appear steady, but eventually they stagnate. Leaders who let
people stay in their comfort zones for too long risk creating teams that lack
adaptability. At the same time, pushing too aggressively can backfire. People
who are stretched too far too quickly often feel stress, and that drains
motivation. This is when burnout at work begins. The real challenge is knowing
how to respect comfort zones while creating enough stretch to build confidence.
... Gallup’s research shows that employees who use their strengths daily are six
times more likely to be engaged. Tom Rath, co-author of StrengthsFinder, told me
that leaning into natural talents is often the fastest path to confidence and
performance gains. At the same time, he cautioned me against the idea that we
should only focus on strengths. He said it is just as reckless to ignore
weaknesses as it is to ignore strengths. His point was that leaders need
balance. Too much time spent on weaknesses drains confidence, but avoiding them
altogether prevents people from growing. ... It is not always easy to tell if
resistance is fear or indifference. Fear usually comes with visible anxiety. The
employee avoids the task but also worries about it. Laziness looks more like
indifference with no visible discomfort. Leaders can uncover the difference by
asking questions. If it is fear, support and small steps can help. If it is
indifference, accountability and clear expectations may be the solution.
IT Leadership Takes on AGI
“We think about AGI in terms of stepwise progress toward machines that can go
beyond visual perception and question answering to goal-based decision-making,”
says Brian Weiss, chief technology officer at hyperautomation and enterprise AI
infrastructure provider Hyperscience, in an email interview. “The real shift
comes when systems don’t just read, classify and summarize human-generated
document content, but when we entrust them with the ultimate business
decisions.” ... OpenAI’s newly released GPT-5 isn’t AGI, though it can
purportedly deliver more useful responses across different domains. Tal Lev-Ami,
CTO and co-founder of media optimization and visual experience platform provider
Cloudinary, says “reliable” is the operative word when it comes to AGI. ... “We
may see impressive demonstrations sooner, but building systems that people can
depend on for critical decisions requires extensive testing, safety measures,
and regulatory frameworks that don't exist yet,” says Bosquez in an email
interview. ... Artificial narrow intelligence or ANI (what we’ve been using)
still isn’t perfect. Data is often to blame, which is why there’s a huge push
toward AI-ready data. Yet, despite the plethora of tools available to manage
data and data quality, some enterprises are still struggling. Without AI-ready
data, enterprises invite reliability issues with any form of AI. “Today’s
systems can hallucinate or take rogue actions, and we’ve all seen the
examples.”