Quote for the day:
"Listening to the inner voice and trusting the inner voice is one of the most important lessons of leadership." -- Warren Bennis
How CISOs can talk cybersecurity so it makes sense to executives

“With complex technical topics and evolving threats to cover, the typical
brief time slot often proves inadequate for meaningful dialogue. Security
leaders can address this by preparing concise, business-focused briefing
materials in advance and prioritizing the most critical issues for discussion.
When time constraints persist, they should advocate for dedicated sessions to
ensure proper oversight of cybersecurity matters,” said Ross Young ...
When communicating with the board of directors, Turgal advises mapping
cybersecurity initiatives to shareholder value. “If the business goal is to
protect shareholder value, there is a direct connection to business continuity
and increased operational uptime.” To support that, security leaders might
increase cyber resilience through containerized immutable backups, disaster
recovery and incident response plans—tools that can mitigate brand-damaging
attacks and prevent stock price volatility. ... Some of the most
productive conversations don’t happen in meetings. They happen over coffee, or
on calls with individual board members. If possible, schedule one-on-ones
with directors to walk them through key risks. Ask what they want to know more
about. Find out how they prefer to receive information. By building rapport
outside the meeting, you’ll face fewer surprises inside it. Your strongest
allies in the boardroom are often the CFO and legal chief.
The great cognitive migration: How AI is reshaping human purpose, work and meaning

Human purpose and meaning are likely to undergo significant upheaval. For
centuries, we have defined ourselves by our ability to think, reason and
create. Now, as machines take on more of those functions, the questions of our
place and value become unavoidable. If AI-driven job losses occur on a large
scale without a commensurate ability for people to find new forms of
meaningful work, the psychological and social consequences could be profound.
It is possible that some cognitive migrants could slip into despair. AI
scientist Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his
groundbreaking work on deep learning neural networks that underpin LLMs, has
warned in recent years about the potential harm that could come from AI. In an
interview with CBS, he was asked if he despairs about the future. He said he
did not because, ironically, he found it very hard to take [AI] seriously. He
said: “It’s very hard to get your head around the point that we are at this
very special point in history where in a relatively short time, everything
might totally change. A change on a scale we’ve never seen before. It’s hard
to absorb that emotionally.” There will be paths forward. Some researchers and
economists, including MIT economist David Autor, have begun to explore how AI
could eventually help rebuild middle-class jobs, not by replacing human
workers, but by expanding what humans can do.
CISO vs CFO: why are the conversations difficult?

The disconnect between CISOs and CFOs remains a challenge in many organizations.
While cybersecurity threats escalate in scale and complexity, senior leadership
often fails to fully grasp the magnitude of the risk. This gap is visible in
EY’s 2025 Cybersecurity study, which shows that 68% of CISOs worry that senior
leaders underestimate the risks. Progress in bridging this divide happens when
CISOs and CFOs are willing to meet halfway, aligning technical priorities with
financial realities. Argyle realized that to move the conversation forward, he
had to change his approach: he stopped defending the technology and started
showing the impact. ... Redesigning the relationship between a CISO and a CFO
isn’t something that’s fixed over a single meeting or a strong cup of coffee. It
takes time, mutual understanding, and open conversations. As Argyle points out,
these discussions shouldn’t be limited to budget season, when both sides are
already in negotiation mode. To truly build trust and alignment, CISOs and CFOs
need to keep the dialogue alive year-round and make efforts to understand each
other’s work, long before money is involved. “Ideally, I’d bring the CFO into
tabletop cyber crisis simulations and scenario planning,” he adds. “Let them see
the domino effect of a breach — not just read about it in a report. That
firsthand exposure builds understanding faster than any PowerPoint.”
How to Build a Team That Thinks and Executes Like a Founder

If your team has a deep understanding of what you are trying to accomplish, you
can ensure that everyone is rowing in the same direction. It isn't enough to
simply share your vision and goals. To really get the team engaged, it's
critical that they understand the underlying "why" behind your goals and
decisions. One of the best ways to do this is by being as transparent as
possible, such as sharing financial data and other key business metrics. This
information can help the team understand the bigger picture and connect how
their individual roles contribute to the overall success of the company. ...
First, stop assigning tasks to your team. Instead, give team members ownership
over entire end-to-end processes. This allows them to take full responsibility
for the success of that process and helps you hold the team accountable for
executing it successfully. The best way to do this is by focusing on
outcome-based delegation. This provides flexibility and autonomy for the team to
figure out the best way to achieve the goal. As a business owner, you don't want
the team coming to you for every little decision. ... In many cases, a bad
deliverable is a result of miscommunication, unclear direction or not having
access to the right resources. The challenge is that many business owners give
up when delegation doesn't work the way they hoped the first time.
Quiet hiring: How HR can turn this trend into a winning strategy

At its heart, quiet hiring is about strategic talent management. It’s a way for
organisations to fill skill gaps and meet changing business needs without
expanding their workforce in the traditional sense. Instead of hiring full-time
employees, businesses tap into existing employees, freelancers, or contractors
to temporarily shift roles or tackle specific projects. It’s about working
smarter with the talent you already have, and supplementing that with external
experts when needed. ... Instead of looking outside the organisation to fill a
gap, businesses can move current employees into new roles or give them
additional responsibilities. For instance, if a marketing expert has experience
with analytics, they might temporarily shift to the data analytics team to
support a busy period. Not only does this save the company time and money in
recruitment, but it also develops your current team, gives employees fresh
opportunities, and fosters an agile workforce. It’s a win-win—employees gain new
skills, and organisations can fill critical gaps without the lengthy hiring
process. ... The business world is unpredictable, and the ability to adapt
quickly is more important than ever. Quiet hiring offers companies the
flexibility they need to respond to sudden changes. For example, if demand for a
product surges unexpectedly, internal employees can be quickly moved to meet the
increased workload, while contractors can be brought in to handle the temporary
increase in tasks.
Attack of the AI crawlers

To be fair, it’s not entirely clear that robots.txt directives are legally
enforceable, according to Susskind and other attorneys who focus on technology
issues. Therefore, if the model makers were arguing that they have the right to
violate those requests, that might be a legitimate argument. But that is not
what they are arguing. They are saying they abide by those rules, but then many
send out undeclared crawlers that ignore them anyway. The real problem is that
they are inflicting financial damage on site owners by forcing them to pay far
more for bandwidth. And it is solely the model makers that benefit, not the site
owners. What is IT to do, Susskind asked, when an undeclared genAI crawler “hits
my site a million times a day”? Indeed, Susskind’s team has seen “a single bot
hitting a site millions of times per hour. That is several orders of magnitude
more burdensome than normal SEO crawling.” ... The problem, according to
attorneys in this space, is not with establishing monetary damages but with
attribution: how to determine who’s responsible for the surging traffic. In such
a hypothetical court case, the lawyers for the deep-pocketed genAI model makers
would likely argue that plaintiffs’ sites are visited by millions of users and
bots from multiple sources. Without proof tying traffic to a specific crawler or
tying a crawler to a specific model maker, the model maker can’t be held
accountable for plaintiffs’ financial damages.
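To make the attribution problem concrete, here is a minimal sketch (not from the
article) that tallies requests per user agent and per client IP from a standard
combined-format web server access log; the log path is a hypothetical
placeholder. Counts like these are a first step toward showing how much of a
site's bandwidth a single crawler accounts for.

```python
# Minimal sketch: quantify crawler load from a combined-format access log.
# The path /var/log/nginx/access.log is a hypothetical placeholder.
import re
from collections import Counter

# Combined log format: IP ident user [time] "request" status bytes "referer" "user-agent"
LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

by_agent, by_ip = Counter(), Counter()
with open("/var/log/nginx/access.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        m = LINE.match(line)
        if not m:
            continue
        by_agent[m.group("ua")] += 1
        by_ip[m.group("ip")] += 1

print("Top user agents by request count:")
for ua, n in by_agent.most_common(10):
    print(f"{n:>10}  {ua[:80]}")

print("\nTop client IPs by request count:")
for ip, n in by_ip.most_common(10):
    print(f"{n:>10}  {ip}")
```

A declared crawler announces itself in the user-agent string; an undeclared one
shows up only as anomalous volume from particular IP ranges, which is exactly
why tying that traffic to a specific model maker is the hard part.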
A Farewell to APMs — The Future of Observability is MCP tools

Initially introduced by Anthropic, the Model Context Protocol (MCP) represents a
communication tier between AI agents and other applications, allowing agents to
access additional data sources and perform actions as they see fit. More
importantly, MCPs open up new horizons for the agent to intelligently choose to
act beyond its immediate scope and thereby broaden the range of use cases it can
address. The technology is not new, but the ecosystem is. In my mind, it is the
equivalent of evolving from custom mobile application development to having an
app store. ... With the advent of MCPs, software developers now have the choice
of adopting a different model for developing software. Instead of focusing on a
specific use case and trying to nail the right UI elements for hard-coded usage
patterns, they can turn their applications into a resource for AI-driven processes.
This describes a shift from supporting a handful of predefined interactions to
supporting numerous emergent use cases. ... Making observability useful to
the agent, however, is a little more involved than slapping an MCP adapter onto
an APM. Indeed, many current-generation tools, in rushing to support the new
technology, took that very route without taking into consideration that AI
agents also have their limitations.
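As a rough illustration of what "a resource for AI-driven processes" can look
like, the sketch below exposes one observability query as an MCP tool. It
assumes the official Python MCP SDK's FastMCP helper; the server name, the tool,
and the latency figures it returns are hypothetical placeholders standing in for
a real metrics backend.

```python
# Minimal sketch of an MCP tool server, assuming the Python MCP SDK's FastMCP
# helper. The data returned is a hypothetical placeholder; a real server would
# query an APM or metrics store instead.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("observability-tools")

@mcp.tool()
def slowest_endpoints(minutes: int = 15, limit: int = 5) -> list[dict]:
    """Return the slowest HTTP endpoints over the last `minutes` minutes."""
    # Placeholder results standing in for a real latency query.
    sample = [
        {"endpoint": "/api/checkout", "p95_ms": 1840},
        {"endpoint": "/api/search", "p95_ms": 920},
    ]
    return sample[:limit]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so an agent can launch it locally
```

Because the docstring and type hints are published to the agent as the tool's
schema, the agent, rather than a hard-coded UI, decides when and how to call it.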
Knowing when to use AI coding assistants

AI performs exceptionally well with common coding patterns. Its sweet spot is
generating new code with low complexity when your objectives are well-specified
and you’re using popular libraries, says Swiber. “Web development, mobile
development, and relatively boring back-end development are usually fairly
straightforward,” adds Charity Majors, co-founder and CTO of Honeycomb. The more
common the code and the more online examples, the better AI models perform. ...
While AI accelerates development, it creates a new burden to review and validate
the resulting code. “In a worst-case scenario, the time and effort required to
debug and fix subtle issues in AI-generated code could even eclipse the time it
would require to write the code from scratch,” says Sonar’s Wang. Quality and
security can suffer from vague prompts or poor contextual understanding,
especially in large, complex code bases. Transformer-based models also face
limitations with token windows, making it harder to grasp projects with many
parts or domain-specific constraints. “We’ve seen cases where AI outputs are
syntactically correct but contain logical errors or subtle bugs,” Wang notes.
These mistakes originate from a “black box” process, he says, making AI risky
for mission-critical enterprise applications that require strict governance.
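Wang's point about subtle bugs is easy to illustrate with a hypothetical example
(not from the article): the snippet below is syntactically valid and looks
plausible at a glance, but its chunking logic is wrong, and only a careful
review or a test would catch it.

```python
# Hypothetical illustration of AI output that parses fine but is logically wrong.
def chunk(items, size):
    """Split items into consecutive chunks of at most `size` elements."""
    # Bug: the step should be `size`, not `size - 1`; this overlaps chunks
    # and raises ValueError when size == 1.
    return [items[i:i + size] for i in range(0, len(items), size - 1)]

# chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [2, 3], [3, 4], [4, 5], [5]]
# Expected:                     [[1, 2], [3, 4], [5]]
```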
CISOs Take Note: Is Needless Cybersecurity Strangling Your Business?

"For IT and security teams, redundant and obsolete security tools or measures
increase workflows, hurt efficiency, and extend incident response and patch
time," he explains via email. "When there's excessive or ineffective tools in
the security stack, teams waste valuable time sifting through redundant and
low-value alerts, hampering them from focusing on real threats." ...
Additionally, excessive security controls, such as overly intrusive multi-factor
authentication, can create employee friction, slowing down and challenging
collaboration with partners, vendors, and customers, Shilts says. "This often
results in employees finding workarounds, such as using their personal emails,
which introduces security risks that are difficult to track and manage." ... In
general, an organizational security posture, including tools and procedures,
should be assessed annually or even earlier if a major change is implemented,
Biswas says. Ideally, to prevent conflicts of interest, such assessments should
be performed by independent, expert third parties. "After all, it’s difficult
for an implementor or operator to be a truly impartial assessor of their own
work," he explains. "While some organizations may be able to do so via internal
audit, for most it makes sense to hire an outsider to play devil’s advocate."
Machines Cannot Feel or Think, but Humans Can, and Ought To

In a philosophical debate, the question, as it is applied to AI, is: How do we
know that AI does not have an experience of the world? The same question could
be asked of flowers, animals, stones, and automobiles. In this sense, the
question of “other intelligences” is often quite valuable and holds tremendous
potential for escaping the capital-focused development of information processing
machines. In its most useful form, this approach to “post-humanism” refers to
the evolved understanding that humans are not the center of the universe, but
exist within a dense network of relationships. This definition of the post-human
may pave the way to decentering definitions of “human” that privilege human
needs over those of the environment, or even people whom we consider less-than.
It may cultivate a deeper appreciation for the complexity of animals and their
ecosystems, and, through careful design, might lead to an approach to
technological development that considers the interdependencies within systems as
connected, not isolated. Have we even started to build a capacity to understand
those worlds, to empathize with trees and rivers and elk, to the extent to which
we can now fully shift our attention to the potential emotional experiences of a
hypothetical Microsoft product?