Quote for the day:
"Listening to the inner voice &
trusting the inner voice is one of the most important lessons of leadership."
-- Warren Bennis

There's no denying that AI dramatically changes the way coders work. Generative
AI tools can substantially speed up the process of writing code. Agentic AI can
help automate aspects of the SDLC, like integrating and deploying code. ... Even
when AI generates and manages code, an understanding of concepts like the
differences between programming languages or how to mitigate software security
risks is likely to spell the difference between the ability to create apps that
actually work well and those that are disasters from a performance, security,
and maintainability standpoint. ... NoOps — short for "no IT operations" —
theoretically heralded a world in which IT automation solutions were becoming so
advanced that there would soon no longer be a need for traditional IT operations
at all. Incidentally, NoOps, like AppGen, was first promoted by a Forrester
analyst. He predicted that, "using cloud infrastructure-as-a-service and
platform-as-a-service to get the resources they need when they need them,"
developers would be able to automate infrastructure provisioning and management
so completely that traditional IT operations would disappear. That never
happened, of course. Automation technology has certainly streamlined IT
operations and infrastructure management in many ways. But it has hardly
rendered IT operations teams unnecessary.
One of the most common pain points? Mismatched expectations. “Gen Z wants
transparency—they want to know the 'why' behind decisions,” Kaushal explains.
That means decisions around promotions, performance feedback, or even task
allocation need to come with context. At the same time, Gen Z thrives on
real-time feedback. What might seem like an eager question to them can feel like
pushback to a manager conditioned by hierarchies. Add in Gen Z’s openness about
mental health and wellbeing, and many managers find themselves ill-equipped for
conversations they’ve never been trained to have. ... There is a growing
cultural narrative that managers must be mentors, coaches, culture carriers, and
counsellors—all while delivering on business targets. Kaushal doesn’t buy it.
“We’re burning people out by expecting them to be everything to everyone,” he
says. Instead, he proposes a model of shared leadership, where different aspects
of people development are distributed across roles. “Your direct manager might
help you with your day-to-day work, while a mentor supports your career
development. HR might handle cultural integration,” Kaushal explains. ... When
asked whether companies should focus on redesigning manager roles or reshaping
Gen Z onboarding, Kaushal is clear: “Redesign manager roles.”

Unlike LLMs, which can require billions of parameters and heavy computational
power, White-Basilisk is compact, with just 200 million parameters. Yet it
outperforms models more than 30 times its size on multiple public benchmarks for
vulnerability detection. This challenges the idea that bigger models are always
better, at least for specialized security tasks. White-Basilisk’s design focuses
on long-range code analysis. Real-world vulnerabilities often span multiple
files or functions. Many existing models struggle with this because they are
limited by how much context they can process at once. In contrast,
White-Basilisk can analyze sequences up to 128,000 tokens long. That is enough
to assess entire codebases in a single pass. ... White-Basilisk is also
energy-efficient. Because of its small size and streamlined design, it can be
trained and run using far less energy than larger models. The research team
estimates that training produced just 85.5 kilograms of CO₂. That is roughly the
same as driving a gas-powered car a few hundred miles. Some large models emit
several tons of CO₂ during training. This efficiency also applies at runtime.
White-Basilisk can analyze full-length codebases on a single high-end GPU
without needing distributed infrastructure. That could make it more practical
for small security teams, researchers, and companies without large cloud
budgets.

The core advantage of adaptive modular infrastructure lies in its ability to
deliver unprecedented speed-to-market. By manufacturing repeatable, standardized
modules at dedicated fabrication facilities, construction teams can bypass many
of the delays associated with traditional onsite assembly. Modules are produced
concurrently with the construction of the base building. Once the base reaches a
sufficient stage of completion, these prefabricated modules are quickly
integrated to create a fully operational, rack-ready data center environment.
This “plug-and-play” model eliminates many of the uncertainties in traditional
construction, significantly reducing project timelines and enabling customers to
rapidly scale their computing resources. Flexibility is another defining
characteristic of adaptive modular infrastructure. The modular design approach
is inherently versatile, allowing for design customization or standardization
across multiple buildings or campuses. It also offers a scalable and adaptable
foundation for any deployment scenario – from scaling existing cloud
environments and integrating GPU/AI generation and reasoning systems to
implementing geographically diverse and business-adjacent agentic AI – ensuring
customers achieve maximum return on their capital investment.

Distillation is a common technique in AI application development. It involves
training a smaller “student” model to mimic the outputs of a larger, more
capable “teacher” model. This process is often used to create specialized
models that are smaller, cheaper and faster for specific applications.
However, the Anthropic study reveals a surprising property of this process.
The researchers found that teacher models can transmit behavioral traits to
the students, even when the generated data is completely unrelated to those
traits. ... Subliminal learning occurred when the student model acquired the
teacher’s trait, despite the training data being semantically unrelated to it.
The effect was consistent across different traits, including benign animal
preferences and dangerous misalignment. It also held true for various data
types, including numbers, code and CoT reasoning, which are more realistic
data formats for enterprise applications. Remarkably, the trait transmission
persisted even with rigorous filtering designed to remove any trace of it from
the training data. In one experiment, they prompted a model that “loves owls”
to generate a dataset consisting only of number sequences. When a new student
model was trained on this numerical data, it also developed a preference for
owls.
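The distillation setup described above can be sketched with the classic soft-target objective: the student is trained to match the teacher's temperature-softened output distribution. This is a minimal NumPy illustration (the function names are mine, and note that the Anthropic study fine-tunes students on teacher-generated data rather than matching logits directly):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; a higher temperature flattens the
    distribution so the student sees more of the teacher's near-miss signal."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions -- the quantity a
    distillation training loop would minimize for each example."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature ** 2)
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge; a real training loop would backpropagate it through the student only.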

Data scientists and analysts often focus on building the most advanced models.
However, they often overlook the importance of positioning their work to enable
executive decisions. As a result, executives frequently find it challenging to
gain useful insights from the overwhelming volume of data and metrics. Despite
the technical depth of modern analytics, decision paralysis persists, and
insights often fall short of translating into tangible actions. At its core,
this challenge reflects an insight-to-impact disconnect in today’s business
analytics environment. Many teams mistakenly assume that model complexity and
output sophistication will inherently lead to business impact. ... Many models
are built to optimize a singular objective, such as maximizing revenue or
minimizing cost, while overlooking constraints that are difficult to quantify
but critical to decision-making. ... Executive confidence in analytics is
heavily influenced by the ability to understand, or at least contextualize,
model outputs. Where possible, break down models into clear, explainable steps
that trace the journey from input data to recommendation. In cases where
black-box AI models are used, such as random forests or neural networks, support
recommendations with backup hypotheses, sensitivity analyses, or secondary
datasets to triangulate your findings and reinforce credibility.
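One concrete way to back a black-box recommendation, as the passage suggests, is a one-at-a-time sensitivity analysis: perturb each input slightly and report how much the output moves. A minimal sketch, assuming the model is any callable mapping a feature vector to a score (the function name and perturbation scheme are illustrative, not a standard API):

```python
import numpy as np

def sensitivity_report(model, x, eps=0.05):
    """Perturb each feature of x by +/- eps (relative) and estimate the
    model's local response per feature -- a central-difference slope that
    reads as 'what happens if this input shifts by 5%'."""
    x = np.asarray(x, dtype=float)
    baseline = model(x)
    slopes = {}
    for i in range(len(x)):
        hi, lo = x.copy(), x.copy()
        hi[i] *= 1 + eps
        lo[i] *= 1 - eps
        slopes[i] = (model(hi) - model(lo)) / (2 * eps)
    return baseline, slopes
```

Running this against, say, a fitted random forest's prediction function yields a per-feature ranking that can anchor the "backup hypotheses" discussion without exposing the model's internals.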

In the years since GDPR’s implementation, the shift from reactive compliance to
proactive data governance has been noticeable. Data protection has evolved from
a legal formality into a strategic imperative — a topic discussed not just in
legal departments but in boardrooms. High-profile fines against tech giants have
reinforced the idea that data privacy isn’t optional, and compliance isn’t just
a checkbox. That progress should be acknowledged — and even celebrated — but we
also need to be honest about where gaps remain. Too often GDPR is still treated
as a one-off exercise or a hurdle to clear, rather than a continuous, embedded
business process. This short-sighted view not only exposes organisations to
compliance risks but causes them to miss the real opportunity: regulation as an
enabler. ... As organisations embed AI deeper into their operations, it’s time
to ask the tough questions around what kind of data we’re feeding into AI, who
has access to AI outputs, and if there’s a breach – what processes we have in
place to respond quickly and meet GDPR’s reporting timelines. Despite the
urgency, there is still a glaring gap: many organisations don’t have a formal AI
policy in place, which exposes them to privacy and compliance risks that could
have serious consequences, especially when data loss prevention is a top
priority for businesses.

CISOs overestimate alignment on core responsibilities like budgeting and
strategic cybersecurity goals, while boards demand clearer ties to business
outcomes. Another area of tension is around compliance and risk. Boards tend to
view regulatory compliance as a critical metric for CISO performance, whereas
most security leaders view it as low impact compared to security posture and
risk mitigation. ... security is increasingly viewed as a driver of digital
trust, operational resilience, and shareholder value. Boards are expecting CISOs
to play a key role in revenue protection and risk-informed innovation,
especially in sectors like financial services, where cyber risk directly impacts
customer confidence and market reputation. In India’s fast-growing digital
economy, this shift empowers security leaders to influence not just
infrastructure decisions, but the strategic direction of how businesses build,
scale, and protect their digital assets. Direct CEO engagement is making
cybersecurity more central to business strategy, investment, and growth. ...
When it comes to these complex cybersecurity subjects, the alignment between
CXOs and CISOs is uneven and still maturing. Our findings show that while 53 per
cent of CISOs believe AI gives attackers an advantage (down from 70 per cent in
2023), boards are yet to fully grasp the urgency.

It turns out, however, that chaos is not ultimately and entirely unpredictable
because of a property known as synchronization. Synchronization in chaos is
complex, but ultimately it means that, despite their inherent unpredictability,
two outcomes can become coordinated under certain conditions. In effect, chaos
outcomes are unpredictable but bounded by the rules of synchronization. Chaos
synchronization has conceptual overlaps with Carl Jung’s work, Synchronicity: An
Acausal Connecting Principle. Jung applied this principle to ‘coincidences’,
suggesting some force transcends chance under certain conditions. In chaos
theory, synchronization aligns outcomes under certain conditions. ... There are
three important effects: data goes in and random chaotic noise comes out; the
feed is direct RTL; there is no separate encryption key required. The
unpredictable (and therefore effectively, if not scientifically, unbreakable)
chaotic noise is transmitted over the public network to its destination. All of
this is done at the hardware level – so, without physical access to the device,
there is no opportunity for adversarial interference. Decryption
involves a destination receiver running the encrypted message through the same
parameters and initial conditions, and using the chaos synchronization property
to extract the original message.
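The scheme described above (shared parameters and initial conditions, no separate key, noise-like ciphertext) can be illustrated in software with a toy chaotic map. This is purely a sketch for intuition: it substitutes a logistic map for the article's hardware RTL implementation, the function names are mine, and XOR-masking with a deterministic chaotic stream is not a secure cipher:

```python
def logistic_stream(x0, r=3.99, n=16):
    """Iterate the logistic map x -> r*x*(1-x) in its chaotic regime and
    quantize each state to a byte; without x0 and r the stream looks like
    random noise."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def chaos_mask(message, x0):
    """XOR the message with the chaotic stream. The same call unmasks,
    because a receiver configured with identical parameters and initial
    conditions regenerates an identical stream -- the role that chaos
    synchronization plays in the hardware scheme."""
    stream = logistic_stream(x0, n=len(message))
    return bytes(m ^ s for m, s in zip(message, stream))
```

A sender transmits `chaos_mask(plaintext, x0)`; the receiver applies `chaos_mask` again with the same `x0` to recover the plaintext. Even a tiny error in `x0` diverges exponentially under iteration, which is why the shared initial conditions effectively act as the key.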

Chris Kronenthal, president and CTO at FreedomPay, said giving credit to the
right people means business leaders must create an environment where they can
judge employee contributions qualitatively and quantitatively. "We'll have high
performers and people who aren't doing so well," he said. "It's important to
force your managers to review everyone objectively. And if they can't, you're
doing the entire team a disservice because people won't understand what
constitutes success." ... "Anyone shying away from measurement is not set up for
success," he said. "A good performer should want to be measured because they're
comfortable with how hard they're working." He said quantitative measures can be
used to prompt qualitative debates about whether, for example, underperformers
need more training. ... Stephen Mason, advanced digital technologies manager for
global industrial operations at Jaguar Land Rover, said he relies on his
talented IT professionals to support the business strategy he puts in place. "I
understand the vision that the technology can help deliver," he said. "So there
isn't any focus on 'I' or 'me.' Every session is focused on getting the team
together and giving the right people the platform to talk effectively." Mason
told ZDNET that successful managers lean on experts and allow them to excel.