Quote for the day:
“The only true wisdom is knowing that you know nothing.” -- Socrates
How CISOs can prepare for the new era of short-lived TLS certificates
“Shorter certificate lifespans are a gift,” says Justin Shattuck, CSO at
Resilience. “They push people toward better automation and certificate
management practices, which will later be vital to post-quantum defense.” But
this gift, intended to strengthen security, could turn into a curse if
organizations are unprepared. Many still rely on manual tracking and renewal
processes, using spreadsheets, calendar reminders, or system admins who “just
know” when certificates are due to expire. ... “We’re investing in a living
cryptographic inventory that doesn’t just track SSL/TLS certificates, but also
keys, algorithms, identities, and their business, risk, and regulatory context
within our organization and ties all of that to risk,” he says. “Every cert is
tied to an owner, an expiration date, and a system dependency, and supported
with continuous lifecycle-based communication with those owners. That
inventory drives automated notifications, so no expiration sneaks up on us.”
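As a concrete illustration of what "every cert is tied to an owner, an expiration date, and a system dependency" could look like in practice, here is a minimal Python sketch; the record fields, the 30-day window, and the notification hook are illustrative assumptions, not details from the article:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class CertRecord:
    common_name: str      # e.g. "api.example.com"
    owner_email: str      # every cert is tied to an owner
    system: str           # the dependent system or service
    not_after: datetime   # expiration date (UTC, tz-aware)

def expiring_soon(inventory: list[CertRecord], days: int = 30) -> list[CertRecord]:
    """Return certs whose expiry falls within the notification window."""
    cutoff = datetime.now(timezone.utc) + timedelta(days=days)
    return [c for c in inventory if c.not_after <= cutoff]

# A scheduler (cron, a CI job, etc.) would run this daily and notify owners,
# so no expiration "sneaks up" on the team.
for cert in expiring_soon(inventory=[], days=30):
    print(f"Notify {cert.owner_email}: {cert.common_name} on {cert.system} "
          f"expires {cert.not_after:%Y-%m-%d}")
```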
... While automation is important as certificates expire more quickly, how it
is implemented matters. Renewing a certificate a fixed number of days before
expiration can become unreliable as lifespans change. The alternative is
renewing based on a percentage of the certificate’s lifetime, and this method
has an advantage: the timing adjusts automatically when the lifespan shortens.
“Hard-coded renewal periods are likely to be too long at some point, whereas
percentage renewal periods should be fine,” says Josh Aas.
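A minimal sketch, with illustrative numbers, of the two policies Aas contrasts: with a fixed 30-day lead time, a 398-day certificate renews at 92% of its life but a 47-day certificate at only 36%, whereas a two-thirds rule lands at the same relative point regardless of lifespan. The two-thirds fraction and the sample lifespans here are assumptions for illustration:

```python
from datetime import datetime, timedelta

def renewal_time_fixed(not_before: datetime, not_after: datetime,
                       lead_days: int = 30) -> datetime:
    """Renew a fixed number of days before expiry; drifts as lifespans shrink."""
    return not_after - timedelta(days=lead_days)

def renewal_time_percentage(not_before: datetime, not_after: datetime,
                            fraction: float = 2 / 3) -> datetime:
    """Renew after a fixed fraction of the cert's lifetime; scales automatically."""
    return not_before + (not_after - not_before) * fraction

issued = datetime(2026, 3, 1)
for lifetime_days in (398, 90, 47):  # illustrative lifespans as maximums shorten
    expires = issued + timedelta(days=lifetime_days)
    print(lifetime_days,
          renewal_time_fixed(issued, expires).date(),
          renewal_time_percentage(issued, expires).date())
```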
How Enterprises Can Navigate Privacy With Clarity
There's an interesting pattern across organizations of all sizes. When we started discussing DPDPA compliance a year ago, companies fell into two buckets: those already building toward compliance and those saying they'd wait for the final rules. That "wait and see" period taught us a lot. It showed that most enterprises genuinely want to do the right thing, but they often don't know where to start. In practice, mature data protection starts with a few simple questions that most enterprises haven't asked themselves: What personal data do we have coming in? Which of it is truly personal data? What are we doing with it? ... The first gap is how enterprises understand personal data itself. I tell clients not to view personal data as a single item but as part of an interconnected web. Once one data point links to another, information that didn't seem personal becomes personal because it's stored together or can be easily connected. ... The second gap is organizational visibility. Some teams process personal data in ways others don't know about. When we speak with multiple teams, there's often a light bulb moment where everyone realizes that data processing is happening in places they never expected. The third gap is third-party management. Some teams may share data under basic commercial arrangements or collect it through processes that seem routine. An IT team might sign up for a new hosting service without realizing it will store customer personal data.
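The "interconnected web" point is easy to demonstrate in a few lines. In this illustrative sketch (all names and values are made up), two datasets that look harmless in isolation become identifying the moment they share a join key:

```python
# Neither dataset looks sensitive on its own, but once the records can be
# joined on a shared key, the combination identifies a person.
shipping = {"order-1041": {"postcode": "560001", "birth_year": 1987}}
crm      = {"order-1041": {"name": "A. Sharma", "email": "a.sharma@example.com"}}

linked = {
    order: {**shipping[order], **crm[order]}
    for order in shipping.keys() & crm.keys()
}
print(linked)
# {'order-1041': {'postcode': '560001', 'birth_year': 1987,
#                 'name': 'A. Sharma', 'email': 'a.sharma@example.com'}}
```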
How to succeed as an independent software developer
Income for freelance developers varies depending on factors such as location,
experience, skills, and project type. Average pay for a contractor is about
$111,800 annually, according to ZipRecruiter, with top earners potentially making more than $151,000. ... “One of the most important ways to succeed
as an independent developer is to treat yourself like a business,” says Darian
Shimy, CEO of FutureFund, a fundraising platform built for K-12 schools, and a
software engineer by trade. “That means setting up an LLC or sole
proprietorship, separating your personal and business finances, and using
invoicing and tax tools that make it easier to stay compliant,” Shimy says.
... “It was a full-circle moment, recognition not just for coding expertise,
but for shaping how developers learn emerging technologies,” Kapoor says.
“Specialization builds identity. Once your expertise becomes synonymous with
progress in a field, opportunities—whether projects, media, or
publishing—start coming to you.” ... Freelancers in any field need to know how
to communicate well, whether it’s through the written word or conversations
with clients and colleagues. If a developer communicates poorly, even great talent may not be enough to land gigs. ... A portfolio of work
tells the story of what you bring to the table. It’s the main way to showcase
your software development skills and experience, and is a key tool in
attracting clients and projects.
AI in 5 years: Preparing for intelligent, automated cyber attacks
Cybercriminals are increasingly experimenting with autonomous AI-driven
attacks, where machine agents independently plan, coordinate, and execute
multi-stage campaigns. These AI systems share intelligence, adapt in real time
to defensive measures, and collaborate across thousands of endpoints —
functioning like self-learning botnets without human oversight. ... Recent
“vibe hacking” cases showed how threat actors embedded social-engineering
goals directly into AI configurations, allowing bots to negotiate, deceive,
and persist autonomously. As AI voice cloning becomes indistinguishable from
the real thing, verifying identity will shift from who is speaking to how
behaviourally consistent their actions are, a fundamental change in digital
trust models. ... Unlike traditional threats, machine-made attacks learn and
adapt continuously. Every failed exploit becomes training data, creating a
self-improving threat ecosystem that evolves faster than conventional
defences. Check Point Research notes that AI-driven tools like the Hexstrike-AI framework, originally built for red-team testing, were weaponised within hours to exploit Citrix NetScaler zero-days. These attacks also operate with
unprecedented precision. ... Make DevSecOps a standard part of your AI
strategy. Automate security checks across your CI/CD pipeline to detect
insecure code, exposed secrets, and misconfigurations before they reach
production.
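A minimal sketch of one such pipeline gate, assuming a Python step that fails the build when likely secrets appear in the tree; the patterns are illustrative, and a real pipeline would use a dedicated scanner such as gitleaks or trufflehog alongside SAST for insecure code:

```python
import re
import sys
from pathlib import Path

# Illustrative patterns; real scanners ship far larger, tuned rule sets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic token":  re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
}

def scan(root: str) -> int:
    findings = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {label}")
                findings += 1
    return findings

if __name__ == "__main__":
    # Exit non-zero so the CI job fails before secrets reach production.
    sys.exit(1 if scan(".") else 0)
```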
Threat intelligence programs are broken, here is how to fix them
“An effective threat intelligence program is the cornerstone of a
cybersecurity governance program. To put this in place, companies must
implement controls to proactively detect emerging threats, as well as have an
incident handling process that prioritizes incidents automatically based on
feeds from different sources. This needs to be able to correlate a massive
amount of data and provide automatic responses to enhance proactive actions,”
says Carlos Portuguez ... Product teams, fraud teams, governance and
compliance groups, and legal counsel often make decisions that introduce new
risk. If they do not share those plans with threat intelligence leaders, PIRs
become outdated. Security teams need lines of communication that help them
track major business initiatives. If a company enters a new region, adopts a
new cloud platform, or deploys an AI capability, the threat model shifts. PIRs
should reflect that shift. ... Manual analysis cannot keep pace with the
volume of stolen credentials, stealer logs, forum posts, and malware data
circulating in criminal markets. Security engineering teams need automation to
extract value from this material. ... Measuring threat intelligence remains a
challenge for organizations. The report recommends linking metrics directly to
PIRs. This prevents metrics that reward volume instead of impact. ... Threat
intelligence should help guide enterprise risk decisions. It should influence
control design, identity practices, incident response planning, and long term
investment.
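One way to read that recommendation: both triage and metrics should key off PIR relevance rather than raw feed volume. A hedged sketch of what that scoring might look like, with hypothetical PIR tags and weights (none of these names come from the report):

```python
from dataclasses import dataclass, field

@dataclass
class IntelItem:
    source: str                 # e.g. stealer-log feed, forum monitor
    summary: str
    tags: set[str] = field(default_factory=set)

# Hypothetical priority intelligence requirements (PIRs), weighted by the
# business initiatives they track (new region, new cloud platform, AI rollout).
PIR_WEIGHTS = {"new-region": 3, "cloud-migration": 2, "ai-deployment": 3}

def score(item: IntelItem) -> int:
    """Score an item by PIR relevance, not by feed volume."""
    return sum(PIR_WEIGHTS.get(tag, 0) for tag in item.tags)

items = [
    IntelItem("stealer-logs", "creds for SaaS admin portal", {"cloud-migration"}),
    IntelItem("forum-monitor", "chatter about LLM jailbreaks", {"ai-deployment"}),
]
for item in sorted(items, key=score, reverse=True):
    print(score(item), item.source, item.summary)
```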
Europe’s Digital Sovereignty Hinges on Smarter Regulation for Data Access
Europe must seek to better understand, and play into, the reality of market
competition in the AI sector. Among the factors impacting AI innovation,
access to computing power and data are widely recognized as most crucial.
While some proposals have been made to address the former, such as making the
continent’s supercomputers available to AI start-ups, little has been proposed
with regard to addressing the data access challenge. ... By applying the
requirement to AI developers independently of their provenance, the framework
ensures EU competitiveness is not adversely impacted. On the contrary, the
approach would enable EU-based AI companies to innovate with legal certainty, avoiding the cost and potential chilling effect of the lengthy lawsuits their US competitors face. Additionally, by putting the onus on copyright owners
to make their content accessible, the framework reduces the burden for AI
companies to find (or digitize) training material, which affects small
companies most. ... Beyond addressing a core challenge in the AI market, the
example of the European Data Commons highlights how government action is not
just a zero-sum game between fostering innovation and setting regulatory
standards. By scrapping its digital regulation in the rush to boost the economy and gain digital sovereignty, the EU would be surrendering its longtime ambition and ability to shape global technology in its image.
New training method boosts AI multimodal reasoning with smaller, smarter datasets
Recent advances in reinforcement learning with verifiable rewards (RLVR) have
significantly improved the reasoning abilities of large language models
(LLMs). RLVR trains LLMs to generate chain-of-thought (CoT) tokens (which
mimic the reasoning processes humans use) before generating the final answer.
This improves the model’s capability to solve complex reasoning tasks such as
math and coding. Motivated by this success, researchers have applied similar
RL-based methods to large multimodal models (LMMs), showing that the benefits
can extend beyond text to improve visual understanding and problem-solving
across different modalities. ... According to Zhang, the step-by-step process
fundamentally changes the reliability of the model's outputs. "Traditional
models often 'jump' directly to an answer, which means they explore only a
narrow portion of the reasoning space," he said. "In contrast, a
reasoning-first approach forces the model to explicitly examine multiple
intermediate steps... [allowing it] to traverse much deeper paths and arrive
at answers with far more internal consistency." ... The researchers also found
that token efficiency is crucial. While allowing a model to generate longer
reasoning steps can improve performance, excessive tokens reduce efficiency.
Their results show that setting a smaller "reasoning budget" can achieve
comparable or even better accuracy, an important consideration for deploying
cost-effective enterprise applications.
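A hedged sketch of how such a reasoning budget might be enforced at inference time with Hugging Face-style generation; the model checkpoint and budget values are illustrative assumptions, not the researchers' setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any chat or reasoning model would do.
name = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Think step by step: what is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt")

# Cap chain-of-thought length: a smaller budget can match or beat an
# unbounded one while costing far fewer tokens per request.
for budget in (128, 512, 2048):
    out = model.generate(**inputs, max_new_tokens=budget)
    print(budget, tokenizer.decode(out[0], skip_special_tokens=True)[:120])
```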
Why Firms Can’t Ignore Agentic AI
The danger posed by agentic AI stems from its ability to carry out specific tasks with limited oversight. “When you give autonomy to a machine to operate within certain bounds, you need to be confident of two things: that it has been provided with excellent context so it knows how to make the right decisions, and that it is only completing the task asked of it, without using the information it’s been trusted with for any other purpose,” says James Flint, AI practice lead at Securys. Mike Wilkes, enterprise CISO at Aikido Security, describes agentic AI as “giving a black box agent the ability to plan, act, and adapt on its own.” “In most companies that now means a new kind of digital insider risk with highly privileged access to code, infrastructure, and data,” he warns. When employees start to use the technology without guardrails, shadow agentic AI introduces a number of risks. ... Adding to the risk, agentic AI is becoming easier to build and deploy. This will allow more employees to experiment with AI agents, often outside IT oversight, creating new governance and security challenges, says Mistry. Agentic AI can be coupled with the recently open-sourced Model Context Protocol (MCP), released by Anthropic, which provides an open standard for orchestrating connections between AI assistants and data sources. By streamlining the work of development and security teams, this can “turbocharge productivity,” but it comes with caveats, says Pieter Danhieux, co-founder and CEO of Secure Code Warrior.
Why supply chains are the weakest link in today’s cyber defenses
One of the key reasons is that attackers want to make the best return on their
efforts, and have learned that one of the easiest ways into a well-defended
enterprise is through a partner. No thief would attempt to smash down the front
door of a well-protected building if they could steal a key and slip in through
the back. There’s also the advantage of scale: one company providing IT, HR, accounting or sales services to multiple customers may have fewer resources to protect itself, making it a natural point of attack. ... When the nature of cyber
risks changes so quickly, yearly audits of suppliers can’t provide the most
accurate evidence of their security posture. The result is an ecosystem built on
trust, where compliance often becomes more of a comfort blanket than a genuine safeguard. Meanwhile,
attackers are taking advantage of the lag between each audit cycle, moving far
faster than the verification processes designed to stop them. Unless
verification evolves into a continuous process, we’ll keep trusting paperwork
while breaches continue to spread through the supply chain. ... Technology alone
won’t fix the supply chain problem, and a change in mindset is also needed. Too
many boards are still distracted by the next big security trend, while
overlooking the basics that actually reduce breaches. Breach prevention needs to
be measured, reported and prioritized just like any other business KPI.