Quote for the day:
"Stop judging people and start understanding people; everyone's got a story." -- @PilotSpeaker
Stop calling it 'The AI bubble': It's actually multiple bubbles, each with a different expiration date
The AI ecosystem is actually three distinct layers, each with different
economics, defensibility and risk profiles. Understanding these layers is
critical, because they won't all pop at once. ... The most vulnerable segment
isn't building AI — it's repackaging it. These are the companies that take
OpenAI's API, add a slick interface and some prompt engineering, then charge
$49/month for what amounts to a glorified ChatGPT wrapper. Some have achieved
rapid initial success, like Jasper.ai, which reached approximately $42 million
in annual recurring revenue (ARR) in its first year by wrapping GPT models in a
user-friendly interface for marketers. But the cracks are already showing. ...
Economic researcher Richard Bernstein points to OpenAI as an example of the
bubble dynamic, noting that the company has made around $1 trillion in AI deals,
including a $500 billion data center buildout project, despite being set to
generate only $13 billion in revenue. The divergence between investment and
plausible earnings "certainly looks bubbly," Bernstein notes. ... But
infrastructure has a critical characteristic: It retains value regardless of
which specific applications succeed. The fiber optic cables laid during the
dot-com bubble weren’t wasted — they enabled YouTube, Netflix and cloud
computing. Twenty-five years ago, the original dot-com bubble burst after debt
financing built out fiber-optic cables for a future that had not yet arrived,
but that future eventually did arrive, and the infrastructure was there
waiting.
Modernizing Network Defense: From Firewalls to Microsegmentation
For many years, network security has been based on the concept of a perimeter
defense, likened to a fortified boundary. The network perimeter functioned as
a protective barrier, with a firewall serving as the main point of access
control. Individuals and devices within this secured perimeter were considered
trustworthy, while those outside were viewed as potential threats. The
"perimeter-centric" approach was highly effective when data, applications, and
employees were all located within the physical boundaries of corporate
headquarters. In the current environment, however, this model is not merely
obsolete; it poses significant risks. ... Microsegmentation
significantly mitigates the impact of cyberattacks by transitioning from
traditional perimeter-based security to detailed, policy-driven isolation at
the level of individual workloads, applications, or containers. By
establishing secure enclaves for each asset, it ensures that if a device is
compromised, attackers are unable to traverse laterally to other systems. ...
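The workload-level, default-deny model described above can be sketched in a few lines. The workload names, ports, and rules below are purely illustrative and not drawn from any particular product:

```python
# Minimal sketch of default-deny, workload-level segmentation policy.
# Workload labels and ports are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: str   # source workload label
    dst: str   # destination workload label
    port: int  # allowed destination port

# Explicit allowlist: any flow not listed here is denied (default deny).
POLICY = {
    Rule("web-frontend", "order-api", 8443),
    Rule("order-api", "orders-db", 5432),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny check: traffic passes only if a rule matches exactly."""
    return Rule(src, dst, port) in POLICY

# A compromised frontend cannot move laterally to the database:
assert is_allowed("web-frontend", "order-api", 8443)
assert not is_allowed("web-frontend", "orders-db", 5432)
```

The point of the sketch is the default: lateral movement is blocked not because it was anticipated, but because it was never explicitly permitted.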
Microsegmentation solutions offer detailed insights into application
dependencies and inter-server traffic flows, uncovering long-standing
technical debt such as unplanned connections, outdated protocols, and
potentially risky activities that may not be visible to perimeter-based
defenses. ... One significant factor deterring organizations from implementing
microsegmentation is the concern regarding increased complexity.
Human-in-the-loop has hit the wall. It’s time for AI to oversee AI
This is not a hypothetical future problem. Human-centric oversight is already
failing in production. When automated systems malfunction — flash crashes in
financial markets, runaway digital advertising spend, automated account
lockouts or viral content — failure cascades before humans even realize
something went wrong. In many cases, humans were “in the loop,” but the
loop was too slow, too fragmented or too late. The uncomfortable reality is
that human review does not stop machine-speed failures. At best, it explains
them after the damage is done. Agentic systems raise the stakes dramatically.
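To make the machine-speed argument concrete, here is a toy sketch of an automated circuit breaker that halts a runaway spend loop before a human could plausibly intervene. The class, limits, and escalation hook are all hypothetical:

```python
# Hypothetical sketch: an automated monitor that stops a runaway spend
# loop at machine speed, escalating to humans only after halting it.
class SpendMonitor:
    def __init__(self, max_spend_per_window: float):
        self.max_spend = max_spend_per_window
        self.spent = 0.0
        self.halted = False

    def record(self, amount: float) -> bool:
        """Return True if the action may proceed; halt once the cap is hit."""
        if self.halted:
            return False
        if self.spent + amount > self.max_spend:
            self.halted = True      # stop first ...
            self.escalate(amount)   # ... explain to humans second
            return False
        self.spent += amount
        return True

    def escalate(self, amount: float):
        # Stand-in for paging a human: by this point the loop is stopped.
        print(f"halted: attempted {amount}, already spent {self.spent}")

monitor = SpendMonitor(max_spend_per_window=1000.0)
results = [monitor.record(400.0) for _ in range(5)]
# The first two spends pass; the cap halts everything after that.
```

The design choice mirrors the article's argument: the automated layer stops the failure, and the human review happens after the blast radius is already contained.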
Visualizing a multistep agent workflow with tens or hundreds of nodes often
results in dense, miles-long action traces that humans cannot realistically
interpret. As a result, manually identifying risks, behavior drift or
unintended consequences becomes functionally impossible. ... Delegating
monitoring tasks to AI does not eliminate human accountability. It
redistributes it. This is where trust often breaks down. Critics worry that AI
governing AI is like trusting the police to govern themselves. That analogy
only holds if oversight is self-referential and opaque. The model that works
is layered, with a clear separation of powers. ... Humans shift from reviewing
outputs to designing systems. They focus on setting operating standards and
policies, defining objectives and constraints, designing escalation paths and
failure modes, and owning outcomes when systems fail.
Building leaders in the age of AI
The leaders who end up thriving in the AI era will be those who blend human depth with digital fluency. They will use AI to think with them, not for them. And they will treat this AI moment not as a threat to their leadership but as an opportunity to focus on those elements of their portfolios that only humans can excel at. ... Leaders will need to give teams a set of guardrails (clear values and decision rights) and establish new definitions of quality while fostering a sense of trust and collaboration as new challenges emerge and business conditions evolve. ... Aspiration, judgment, and creativity are “only human” leadership traits—and the characteristics that can provide an irreplaceable competitive edge, especially when amplified using AI. It’s therefore incumbent upon organizations to actively identify and develop the individuals who demonstrate critical intrinsics like resilience, eagerness to learn from mistakes, and the ability to work in teams that will increasingly include both humans and AI agents. ... Organizations must actively cultivate core leadership qualities such as wisdom, empathy, and trust—and they must give the development of these attributes the same attention they do to the development of new IT systems or operating models. That will mean providing time for leaders to do the inner work required to lead others effectively—that is, reflecting, sharing insights with other C-suite leaders, and otherwise considering what success will mean for themselves and the organization.
The Rising Phoenix of Software Engineering
Software is undergoing a tectonic transformation. Modern applications are no longer hand-crafted from scratch. They are assembled from third-party components, APIs, open-source packages, machine-learning models, and now AI-generated snippets. Artificial intelligence, low-code tools, Open-Source Software (OSS), and reusable libraries have made the act of writing new code less central to building software than ever before. ... In this new era, the primary challenge is not how to build software faster, cheaper, or more feature-rich. It is how to engineer software safely and predictably in a hostile ecosystem. ... Software engineering, as a discipline, must rise again — not as a metaphor for resilience, but as a mandate for survival. ... The future does not eliminate developers or coders. Assembling, customizing, and scripting third-party components will remain critical. But the accountability layer must shift upward, to professionals trained to reason about system safety, dependencies, and security by design. In other words, software engineers must reemerge as true engineers responsible for understanding not only how their code works, but how and where it runs… and most critically how to secure it. ... To engineer software responsibly, practitioners must model threats, evaluate anti-tamper capabilities, and verify that each dependency meets a baseline of assurance. These tasks were historically reserved for penetration testers or quality assurance (QA) teams.
The concerning cyber-physical security disconnect
The background of many physical security professionals is in military and law
enforcement, fields that change much more slowly but are known for extensive
training.
The nature of the threats they need to defend against is evolving at a slower
pace, and destructive, kinetic threats remain a primary concern. ... The focus
of cybersecurity is much more on the insides of an organization. Detection is
supposed to catch attackers lurking on compromised devices. Response
activities have to consider the entire infrastructure rather than individual
hosts. Security measures are spread out across the network, taking a
defense-in-depth approach. Physical security is much more outward looking,
trying to prevent threats from entering. Detection systems exist within
premises, but focus on the outer layers. Response activities are focused on
evicting individual threats or denying their access. The majority of security
efforts focuses on the perimeter. ... Companies often handle both topics in
different teams. Conferences and publications may feature both topics, but
often focus on one and rarely address their interdependence. Security
assessments like pentests and red team exercises sometimes include a physical
component that tends to focus on social engineering without involving deep
physical security expertise. ... Threats, especially human threat actors, will
always take the path of least resistance. If one flank is left unprotected,
they will attack physical assets via their digital components and vice versa.
Architecting Agility: Decoupled Banking Systems Will Be the Key to Success in 2026
The banking industry is undergoing an evolutionary and market-driven shift.
Digital banking systems, once rigid and monolithic, are being reimagined through
decoupled architecture, AI-driven intelligence, programmatic technology
consumption, and fintech innovation and partnerships. ... Delay is no longer an
option — the future of banking is already being built today. To capitalize on
these innovations, tech leaders must prioritize digital core banking agility,
ensuring integration with new innovations and adapting to evolving market
demands. ... Identify suspicious patterns in real time. As illustrated in the
figure, a decoupled risk analytics gateway and prompt engine streamline
regulatory reporting and ensures adherence to evolving rules (regtech). Whitney
Morgan, vice president at Skaleet, a fintech provider, states that generative AI
takes this a notch further by automating regulatory reporting and accelerating
product development. ... AI-enabled risk management empowers banks to detect
anomalies across large transaction datasets with the speed and accuracy that
manual processes can’t match. Risk modeling and stress testing will enhance
credit risk scoring, market risk simulations, and scenario analysis that drive
preemptive risk decisions and new revenue options. ... The banking and financial services
innovation race, with challenges in adoption and capturing market advantages,
beckons leaders to be nimble and, at the same time, stay focused on the
fundamentals. CIOs, CTOs, and other tech leaders can take proactive steps to
strike the right balance.
Key Management Testing: The Most Overlooked Pillar of Crypto Security
The majority of security testing in crypto projects focuses on code correctness or operational attacks. Key management, however, is mostly treated as a procedural issue rather than a technical problem. This is a dangerous misconception. Entropy sources, hardware integrity, and cryptographic soundness are central to key generation. Weak randomness, broken device software, or a corrupted environment can produce keys that look valid but are alarmingly easy to attack. When an exchange generates millions of new wallet addresses for users, the mechanisms that create them must be watertight. Key storage must be tested as well. ... The recovery process is one of the most vulnerable areas of key management, yet it is discussed least. Backup and restoration are prone to human error, improperly configured storage, or unsafe transmission. The unfortunate fact about crypto is that recovery mechanisms can be either a saviour or a disaster. Recovery phrases, encrypted backups, and distributed shares need to be repeatedly tested under real-world, adversarial conditions. ... End-to-end lifecycle testing, automatic verification of key states, automated attack simulations, and self-healing recovery protocols will be the order of the day. The industry has already reached the point where key management is no longer a concealed or merely supporting part of its security strategies.
Inside the Chip: How Hardware Root of Trust Shifts the Odds Back to Cyber Defenders
Defenders often lack direct control or visibility into the hardware layer where
workloads actually execute. This abstraction can obscure low-level threats,
allowing attackers to manipulate telemetry, disable software protections, or
persist beyond reboots. Crucially, modern attacks are not brute force attempts
to break encryption or overwhelm defences. They exploit the assumptions built
into how systems start, update, and prove what’s genuine. ... At the centre of
this shift is Hardware Root of Trust (HRoT): a security architecture that embeds
trust directly into the hardware layer of a device. US National Institute of
Standards and Technology (NIST) defines it as “an inherently trusted combination
of hardware and firmware that maintains the integrity of information.” In
practice, HRoT serves as the anchor for system trust from the moment power is
applied. ... For CISOs, HRoT represents an opportunity to strengthen resilience,
meet regulatory demands, and finally realise true zero trust. From a resilience
standpoint, it changes the balance between prevention and response. By
validating integrity from power-on and continuously during operation, it reduces
reliance on post-incident investigation and recovery. Compromised devices and
systems are stopped early, limiting blast radius and disruption. Regulators are
already reinforcing this direction. Frameworks such as the US Department of
Defense’s CMMC explicitly highlight HRoT as a stronger foundation for
assurance.
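The measured-boot idea behind HRoT can be sketched as a chain of integrity checks anchored in known-good measurements. The stage names and "golden" hashes below are illustrative assumptions, not a real firmware interface:

```python
# Sketch of measured boot anchored in a hardware root of trust: each
# stage's hash is checked against a known-good value before it may run.
# Stage names and "golden" reference images are hypothetical.
import hashlib

def measure(blob: bytes) -> str:
    """Measure a boot component, as firmware would before executing it."""
    return hashlib.sha256(blob).hexdigest()

# Known-good measurements, assumed to be anchored in immutable hardware.
GOLDEN = {
    "bootloader": measure(b"bootloader v1.0"),
    "kernel": measure(b"kernel v6.1"),
}

def verify_chain(images: dict[str, bytes]) -> bool:
    """Refuse to continue boot if any stage deviates from its measurement."""
    for stage, expected in GOLDEN.items():
        if measure(images.get(stage, b"")) != expected:
            return False  # integrity failure: halt before the stage executes
    return True

assert verify_chain({"bootloader": b"bootloader v1.0", "kernel": b"kernel v6.1"})
assert not verify_chain({"bootloader": b"bootloader v1.0", "kernel": b"tampered"})
```

This captures why the article says compromised devices are "stopped early": a tampered stage fails its measurement before it ever executes, rather than being discovered in post-incident forensics.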
What AI skills job seekers need to develop in 2026
One of the earliest AI skills involved prompt engineering — being able to get to
the necessary AI-generated results by using the right questions. But that
baseline skill is being pushed aside by “context engineering.” Think of context
engineering as prompt engineering on steroids; it involves developing prompts
that can deliver consistent and predictable answers. Ideally, “every time you ask
the same question, you always get the same answer,” said Bekir Atahan, vice
president at Experis Services, a division of Manpower Group. That skill is
critical because AI models are changing quickly, and the answers they spout out
can differ from day to day. Context engineering is aimed at ensuring consistent
outputs despite a rapidly evolving AI ecosystem. ... “Beyond algorithms and
coding, the next wave of AI talent must bridge technology, governance and
organizational change. The most valuable AI skill in 2026 isn’t coding, it’s
building trust,” Seth said. Along those lines, he recommended that job seekers
immerse themselves in the technology beyond simply taking a class. “Instead of a
course, go to any conference,” Seth said. ... In hiring, genuine AI capability
shows up through curiosity and real experience, Blackford said. “Strong
candidates can talk honestly about something they tried, what did not work, and
what they learned,” he said ... “Things are evolving at such a fast pace
that there will be no perfect set of skills,” said Seth. “I would say more than
skills, attitudes are more important — that adaptability to change, how quick
you are to learn things.”
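Returning to the context-engineering point above, the consistency goal amounts to pinning every input that could vary between calls. A rough sketch follows; the parameter names mirror common LLM APIs but are assumptions, not any specific vendor's interface:

```python
# Illustrative sketch of "context engineering": fix the model's context,
# sampling parameters, and output schema so the same question yields the
# same answer. Field names are generic assumptions, not a vendor API.
SYSTEM_CONTEXT = (
    "You are a support assistant. Answer only from the provided policy text. "
    'Reply as JSON: {"answer": str, "source": str}.'
)

def build_request(question: str, policy_text: str) -> dict:
    """Assemble a fully pinned request: fixed context, zero temperature."""
    return {
        "system": SYSTEM_CONTEXT,              # same instructions every call
        "context": policy_text,                # same grounding every call
        "question": question.strip().lower(),  # normalize phrasing
        "temperature": 0,                      # remove sampling randomness
        "seed": 42,                            # pin sampling where supported
    }

r1 = build_request("What is the refund window?", "Refunds within 30 days.")
r2 = build_request("  What is the refund window?  ", "Refunds within 30 days.")
assert r1 == r2  # equivalent questions produce identical requests
```

Identical requests do not guarantee identical model outputs, but pinning context, temperature, and schema removes the variation the engineer controls, which is the practical core of the skill.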