Daily Tech Digest - January 19, 2026


Quote for the day:

"Stop Judging people and start understanding people everyone's got a story" -- @PilotSpeaker



Stop calling it 'The AI bubble': It's actually multiple bubbles, each with a different expiration date

The AI ecosystem is actually three distinct layers, each with different economics, defensibility and risk profiles. Understanding these layers is critical, because they won't all pop at once. ... The most vulnerable segment isn't building AI — it's repackaging it. These are the companies that take OpenAI's API, add a slick interface and some prompt engineering, then charge $49/month for what amounts to a glorified ChatGPT wrapper. Some have achieved rapid initial success, like Jasper.ai, which reached approximately $42 million in annual recurring revenue (ARR) in its first year by wrapping GPT models in a user-friendly interface for marketers. But the cracks are already showing. ... Economic researcher Richard Bernstein points to OpenAI as an example of the bubble dynamic, noting that the company has made around $1 trillion in AI deals, including a $500 billion data center buildout project, despite being set to generate only $13 billion in revenue. The divergence between investment and plausible earnings "certainly looks bubbly," Bernstein notes. ... But infrastructure has a critical characteristic: It retains value regardless of which specific applications succeed. The fiber optic cables laid during the dot-com bubble weren’t wasted — they enabled YouTube, Netflix and cloud computing. Twenty-five years ago, the original dot-com bubble burst after debt financing built out fiber-optic cables for a future that had not yet arrived, but that future eventually did arrive, and the infrastructure was there waiting.


Modernizing Network Defense: From Firewalls to Microsegmentation

For many years, network security has been based on the concept of a perimeter defense, likened to a fortified boundary. The network perimeter functioned as a protective barrier, with a firewall serving as the main point of access control. Individuals and devices within this secured perimeter were considered trustworthy, while those outside were viewed as potential threats. The "perimeter-centric" approach was highly effective when data, applications, and employees were all located within the physical boundaries of corporate headquarters. In the current environment, however, this model is not only obsolete but also a source of significant risk. ... Microsegmentation significantly mitigates the impact of cyberattacks by transitioning from traditional perimeter-based security to detailed, policy-driven isolation at the level of individual workloads, applications, or containers. By establishing secure enclaves for each asset, it ensures that if a device is compromised, attackers are unable to traverse laterally to other systems. ... Microsegmentation solutions offer detailed insights into application dependencies and inter-server traffic flows, uncovering long-standing technical debt such as unplanned connections, outdated protocols, and potentially risky activities that may not be visible to perimeter-based defenses. ... One significant factor deterring organizations from implementing microsegmentation is the concern regarding increased complexity.
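The core mechanism described above — policy-driven isolation with no implicit trust between workloads — can be sketched in a few lines. This is an illustrative model only, not any vendor's implementation; the workload names, ports, and policy entries are hypothetical.

```python
# Minimal sketch of microsegmentation's default-deny logic: every
# workload-to-workload flow is denied unless a policy explicitly allows it,
# so a compromised host cannot traverse laterally by default.

ALLOWED_FLOWS = {
    # (source workload, destination workload, port) — hypothetical policy entries
    ("web-frontend", "order-api", 443),
    ("order-api", "orders-db", 5432),
}

def is_flow_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow is permitted only if a policy explicitly allows it."""
    return (src, dst, port) in ALLOWED_FLOWS

# A compromised web-frontend cannot reach the database directly:
print(is_flow_allowed("web-frontend", "orders-db", 5432))  # False
print(is_flow_allowed("web-frontend", "order-api", 443))   # True
```

In a real deployment this evaluation happens in the network fabric or host agents (for example, Kubernetes NetworkPolicy objects express the same default-deny idea declaratively), but the trust model is the same.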


Human-in-the-loop has hit the wall. It’s time for AI to oversee AI

This is not a hypothetical future problem. Human-centric oversight is already failing in production. When automated systems malfunction — flash crashes in financial markets, runaway digital advertising spend, automated account lockouts or viral content — failure cascades before humans even realize something went wrong. In many cases, humans were “in the loop,” but the loop was too slow, too fragmented or too late. The uncomfortable reality is that human review does not stop machine-speed failures. At best, it explains them after the damage is done. Agentic systems raise the stakes dramatically. Visualizing a multistep agent workflow with tens or hundreds of nodes often results in dense, miles-long action traces that humans cannot realistically interpret. As a result, manually identifying risks, behavior drift or unintended consequences becomes functionally impossible. ... Delegating monitoring tasks to AI does not eliminate human accountability. It redistributes it. This is where trust often breaks down. Critics worry that AI governing AI is like trusting the police to govern themselves. That analogy only holds if oversight is self-referential and opaque. The model that works is layered, with a clear separation of powers. ... Humans shift from reviewing outputs to designing systems. They focus on setting operating standards and policies, defining objectives and constraints, designing escalation paths and failure modes, and owning outcomes when systems fail.
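The "layered, with a clear separation of powers" model can be sketched as a monitor that is distinct from the agent it oversees: humans set the constraints, an automated reviewer applies them at machine speed, and violations escalate rather than silently pass. The policy fields and thresholds below are illustrative assumptions, not a real governance framework.

```python
# Hedged sketch of layered AI oversight: a separate monitor (not the acting
# agent) checks each proposed action against human-authored constraints and
# escalates violations instead of approving them.

POLICY = {  # human-defined operating standards (hypothetical values)
    "max_spend_usd": 1000,
    "blocked_actions": {"delete_account"},
}

def review_action(action: str, spend_usd: float) -> str:
    """Return 'approve', or 'escalate' when a human-defined constraint trips."""
    if action in POLICY["blocked_actions"]:
        return "escalate"
    if spend_usd > POLICY["max_spend_usd"]:
        return "escalate"
    return "approve"

decisions = [
    review_action("buy_ad_slot", 250.0),     # within limits
    review_action("buy_ad_slot", 50_000.0),  # runaway spend caught in-loop
    review_action("delete_account", 0.0),    # categorically blocked
]
print(decisions)  # ['approve', 'escalate', 'escalate']
```

The human role here matches the article's framing: people design the policy and the escalation path; the machine applies it at the speed the failures occur.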


Building leaders in the age of AI

The leaders who end up thriving in the AI era will be those who blend human depth with digital fluency. They will use AI to think with them, not for them. And they will treat this AI moment not as a threat to their leadership but as an opportunity to focus on those elements of their portfolios that only humans can excel at. ... Leaders will need to give teams a set of guardrails (clear values and decision rights) and establish new definitions of quality while fostering a sense of trust and collaboration as new challenges emerge and business conditions evolve. ... Aspiration, judgment, and creativity are “only human” leadership traits—and the characteristics that can provide an irreplaceable competitive edge, especially when amplified using AI. It’s therefore incumbent upon organizations to actively identify and develop the individuals who demonstrate critical intrinsics like resilience, eagerness to learn from mistakes, and the ability to work in teams that will increasingly include both humans and AI agents. ... Organizations must actively cultivate core leadership qualities such as wisdom, empathy, and trust—and they must give the development of these attributes the same attention they do to the development of new IT systems or operating models. That will mean providing time for leaders to do the inner work required to lead others effectively—that is, reflecting, sharing insights with other C-suite leaders, and otherwise considering what success will mean for themselves and the organization.


The Rising Phoenix of Software Engineering

Software is undergoing a tectonic transformation. Modern applications are no longer hand-crafted from scratch. They are assembled from third-party components, APIs, open-source packages, machine-learning models, and now AI-generated snippets. Artificial intelligence, low-code tools, Open-Source Software (OSS), and reusable libraries have made the act of writing new code less central to building software than ever before. ... In this new era, the primary challenge is not building software faster, cheaper, or more feature-rich. It is how to engineer software safely and predictably in a hostile ecosystem. ... Software engineering, as a discipline, must rise again — not as a metaphor for resilience, but as a mandate for survival. ... The future does not eliminate developers or coders. Assembling, customizing, and scripting third-party components will remain critical. But the accountability layer must shift upward, to professionals trained to reason about system safety, dependencies, and security by design. In other words, software engineers must reemerge as true engineers responsible for understanding not only how their code works, but how and where it runs… and most critically how to secure it. ... To engineer software responsibly, practitioners must model threats, evaluate anti-tamper capabilities, and verify that each dependency meets a baseline of assurance. These tasks were historically reserved for penetration testers or quality assurance (QA) teams.
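The "verify that each dependency meets a baseline of assurance" step above can be made concrete with a digest check against a vetted allowlist. This is a minimal sketch under stated assumptions: the package name and "vetted artifact" bytes are hypothetical, and a real project would compare lockfile digests against a curated, signed baseline.

```python
# Illustrative dependency-baseline check: an artifact is accepted only if
# its SHA-256 digest matches the vetted entry for that package name.

import hashlib

APPROVED = {  # package name -> SHA-256 of the vetted artifact (hypothetical)
    "left-pad": hashlib.sha256(b"left-pad-1.0 vetted artifact").hexdigest(),
}

def dependency_meets_baseline(name: str, artifact: bytes) -> bool:
    """A dependency passes only if its digest matches the approved baseline."""
    expected = APPROVED.get(name)
    return expected is not None and hashlib.sha256(artifact).hexdigest() == expected

print(dependency_meets_baseline("left-pad", b"left-pad-1.0 vetted artifact"))  # True
print(dependency_meets_baseline("left-pad", b"tampered artifact"))             # False
print(dependency_meets_baseline("unknown-pkg", b"anything"))                   # False
```

Package managers already record such digests (e.g., hashes in lockfiles); the engineering shift the article argues for is treating a mismatch — or an unvetted package — as a build-blocking event rather than a warning.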


The concerning cyber-physical security disconnect

The background of many physical security professionals is in military and law enforcement, fields that change much more slowly but are known for extensive training. The nature of the threats they need to defend against is evolving at a slower pace, and destructive, kinetic threats remain a primary concern. ... The focus of cybersecurity is much more on the insides of an organization. Detection is supposed to catch attackers lurking on compromised devices. Response activities have to consider the entire infrastructure rather than individual hosts. Security measures are spread out across the network, taking a defense-in-depth approach. Physical security is much more outward looking, trying to prevent threats from entering. Detection systems exist within premises, but focus on the outer layers. Response activities are focused on evicting individual threats or denying their access. The majority of security efforts focuses on the perimeter. ... Companies often handle both topics in different teams. Conferences and publications may feature both topics, but often focus on one and rarely address their interdependence. Security assessments like pentests and red team exercises sometimes include a physical component that tends to focus on social engineering without involving deep physical security expertise. ... Risks, especially in the form of human threat actors, will always look for the easiest way to materialize. Therefore, they will attack physical assets via their digital components and vice versa, if these flanks are not protected.


Architecting Agility: Decoupled Banking Systems Will Be the Key to Success in 2026

The banking industry is undergoing an evolutionary and market-driven shift. Digital banking systems, once rigid and monolithic, are being reimagined through decoupled architecture, AI-driven intelligence, programmatic technology consumption, and fintech innovation and partnerships. ... Delay is no longer an option — the future of banking is already being built today. To capitalize on these innovations, tech leaders must prioritize digital core banking agility, ensuring integration with new innovations and adapting to evolving market demands. ... Identify suspicious patterns in real time. As illustrated in the figure, a decoupled risk analytics gateway and prompt engine streamline regulatory reporting and ensure adherence to evolving rules (regtech). Whitney Morgan, vice president at Skaleet, a fintech provider, states that generative AI takes this a notch further by automating regulatory reporting and accelerating product development. ... AI-enabled risk management empowers banks to detect anomalies across large transaction datasets with the speed and accuracy that manual processes can’t match. Risk modeling and stress testing will enhance credit risk scoring, market risk simulations, and scenario analysis that drive preemptive risk decisions and new revenue opportunities. ... The banking and financial services innovation race, with challenges in adoption and capturing market advantages, beckons leaders to be nimble and, at the same time, stay focused on the fundamentals. CIOs, CTOs, and other tech leaders can take proactive steps to strike the right balance.


Key Management Testing: The Most Overlooked Pillar of Crypto Security

The majority of security testing in crypto projects focuses on code correctness or operational attacks. Key management, however, is mainly treated as a procedural issue rather than a technical problem. This is a dangerous misconception. Entropy sources, hardware integrity, and cryptographic soundness are central to key generation. Ineffective randomness, broken device software, or a corrupted environment may lead to keys that seem valid but are alarmingly weak against attack. The mechanisms used to create new wallet addresses for users must be watertight when an exchange generates millions of new addresses. Key storage should be tested as well. ... The recovery process is one of the most vulnerable areas of key management, yet it is discussed least. Backup and restoration are prone to human error, improperly configured storage, or unsafe transmission. The unfortunate fact about crypto is that recovery mechanisms can be either a saviour or a disaster. Recovery phrases, encrypted backups, and distributed shares need to be repeatedly tested in a real-world, adversarial environment. ... End-to-end lifecycle testing, automatic verification of key states, automated attack simulations, and automated recovery protocols that self-heal will be the order of the day. The industry has already reached the point where key management is no longer a concealed or merely supporting part of the security strategy.
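One of the watertight checks argued for above — that mass key generation must never repeat and must draw on a strong entropy source — can be sketched as an automated test. This is a minimal illustration using Python's OS-backed CSPRNG (`secrets`); real test suites would add statistical randomness batteries and hardware attestation, and the counts here are kept small.

```python
# Sketch of a key-generation test: mint many keys from the OS CSPRNG and
# assert they are unique. A duplicate at this scale would indicate a broken
# or weak entropy source.

import secrets

def generate_keys(n: int, nbytes: int = 32) -> list[bytes]:
    """Generate n keys from the OS cryptographically secure RNG."""
    return [secrets.token_bytes(nbytes) for _ in range(n)]

keys = generate_keys(10_000)
assert len(set(keys)) == len(keys), "duplicate keys: weak or broken entropy source"
# Crude sanity check on byte diversity within one key (not a full entropy test):
assert len(set(keys[0])) > 10, "suspiciously low byte diversity"
print("all", len(keys), "keys unique")
```

The point is not that this test proves randomness — no finite test can — but that generation paths get exercised continuously and adversarially rather than trusted as a procedural given.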


Inside the Chip: How Hardware Root of Trust Shifts the Odds Back to Cyber Defenders

Defenders often lack direct control or visibility into the hardware layer where workloads actually execute. This abstraction can obscure low-level threats, allowing attackers to manipulate telemetry, disable software protections, or persist beyond reboots. Crucially, modern attacks are not brute force attempts to break encryption or overwhelm defences. They exploit the assumptions built into how systems start, update, and prove what’s genuine. ... At the centre of this shift is Hardware Root of Trust (HRoT): a security architecture that embeds trust directly into the hardware layer of a device. US National Institute of Standards and Technology (NIST) defines it as “an inherently trusted combination of hardware and firmware that maintains the integrity of information.” In practice, HRoT serves as the anchor for system trust from the moment power is applied. ... For CISOs, HRoT represents an opportunity to strengthen resilience, meet regulatory demands, and finally realise true zero trust. From a resilience standpoint, it changes the balance between prevention and response. By validating integrity from power-on and continuously during operation, it reduces reliance on post-incident investigation and recovery. Compromised devices and systems are stopped early, limiting blast radius and disruption. Regulators are already reinforcing this direction. Frameworks such as the US Department of Defense’s CMMC explicitly highlight HRoT as a stronger foundation for assurance. 
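The "anchor for system trust from the moment power is applied" works as a measurement chain: each boot stage hashes the next before handing off, so tampering anywhere changes the final value. Real HRoT does this in silicon and firmware (e.g., TPM-style measured boot); the Python below only models the hash-chain idea, and the stage names are hypothetical.

```python
# Conceptual model of a hardware root of trust's measurement chain:
# digest_i = H(digest_{i-1} || stage_i), seeded by a fixed hardware value.
# Any modification to any stage changes the final measurement.

import hashlib

def measure(previous_digest: bytes, stage_image: bytes) -> bytes:
    """Extend the measurement chain: digest = H(previous || stage)."""
    return hashlib.sha256(previous_digest + stage_image).digest()

def boot_chain_digest(stages: list[bytes]) -> bytes:
    d = b"\x00" * 32  # the root's initial value, fixed in hardware
    for image in stages:
        d = measure(d, image)
    return d

golden = boot_chain_digest([b"bootloader v1", b"kernel v1", b"os v1"])
tampered = boot_chain_digest([b"bootloader v1", b"kernel v1 (implant)", b"os v1"])
print(golden != tampered)  # True: any modification changes the final measurement
```

Comparing the computed chain against a known-good ("golden") value is what lets a compromised device be stopped at power-on, before software defenses — which the attacker may control — ever run.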


What AI skills job seekers need to develop in 2026

One of the earliest AI skills involved prompt engineering — getting the AI-generated results you need by asking the right questions. But that baseline skill is being pushed aside by “context engineering.” Think of context engineering as prompt engineering on steroids; it involves developing prompts that can deliver consistent and predictable answers. Ideally, “every time you ask the same question, you always get the same answer,” said Bekir Atahan, vice president at Experis Services, a division of Manpower Group. That skill is critical because AI models are changing quickly, and the answers they produce can differ from day to day. Context engineering is aimed at ensuring consistent outputs despite a rapidly evolving AI ecosystem. ... “Beyond algorithms and coding, the next wave of AI talent must bridge technology, governance and organizational change. The most valuable AI skill in 2026 isn’t coding, it’s building trust,” Seth said. Along those lines, he recommended that job seekers immerse themselves in the technology beyond simply taking a class. “Instead of a course, go to any conference,” Seth said. ... In hiring, genuine AI capability shows up through curiosity and real experience, Blackford said. “Strong candidates can talk honestly about something they tried, what did not work, and what they learned,” he said ... “Things are evolving at such a fast pace that there will be no perfect set of skills,” said Seth. “I would say more than skills, attitudes are more important — that adaptability to change, how quick you are to learn things.”
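One way to read "same question, same answer" is that the model's context should be assembled deterministically — fixed policy, sorted facts, and an explicit output schema — rather than typed free-form each time. The sketch below illustrates that assembly step only (it stops short of calling any model); the field names are illustrative assumptions, not a standard.

```python
# Hedged sketch of context engineering as deterministic context assembly:
# identical question + facts always produce byte-identical context, removing
# one source of run-to-run drift.

import json

def build_context(question: str, facts: list[str]) -> str:
    """Deterministic context: same inputs always yield the same string."""
    payload = {
        "system_policy": "Answer only from the provided facts. If unknown, say so.",
        "facts": sorted(facts),  # fixed ordering removes incidental variation
        "output_schema": {"answer": "string", "sources": "list[int]"},
        "question": question,
    }
    return json.dumps(payload, sort_keys=True)

a = build_context("What is our refund window?", ["Refunds: 30 days", "Ships in 2 days"])
b = build_context("What is our refund window?", ["Ships in 2 days", "Refunds: 30 days"])
print(a == b)  # True: identical question + facts -> identical context
```

Deterministic context does not by itself make a model deterministic, but it isolates the variation to the model, which is the part context engineering then constrains with schemas and policies.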

Daily Tech Digest - January 18, 2026


Quote for the day:

"Surround yourself with great people; delegate authority; get out of the way" -- Ronald Reagan



Data sovereignty: an existential issue for nations and enterprises

Law-making bodies have in recent years sought to regulate data flows to strengthen their citizens’ rights – for example, the EU bolstering individual citizens’ privacy through the General Data Protection Regulation (GDPR). This kind of legislation has redefined companies’ scope for storing and processing personal data. By raising the compliance bar, such measures are already reshaping C-level investment decisions around cloud strategy, AI adoption and third-party access to their corporate data. ... Faced with dynamic data sovereignty risks, enterprises have three main approaches ahead of them: First, they can take an intentional risk assessment approach. They can define a data strategy addressing urgent priorities, determining what data should go where and how it should be managed - based on key metrics such as data sensitivity, the nature of personal data, downstream impacts, and the potential for identification. Such a forward-looking approach will, however, require a clear vision and detailed planning. Alternatively, the enterprise could be more reactive and detach entirely from its non-domestic public cloud service providers. This is riskier, given the likely loss of access to innovation and, worse, the financial fallout that could undermine their pursuit of key business objectives. Lastly, leaders may choose to do nothing and hope that none of these risks directly affects them. This is the highest-risk option, leaving no protection from potentially devastating financial and reputational consequences of an ineffective data sovereignty strategy.


Verification Debt: When Generative AI Speeds Change Faster Than Proof

Software delivery has always lived with an imbalance. It is easier to change a system than to demonstrate that the change is safe under real workloads, real dependencies, and real failure modes. ... The risk is not that teams become careless. The risk is that what looks correct on the surface becomes abundant while evidence remains scarce. ... A useful name for what accumulates in the mismatch is verification debt. It is the gap between what you released and what you have demonstrated to be safe and resilient, with evidence gathered under conditions that resemble production. Technical debt is a bet about future cost of change. Verification debt is unknown risk you are running right now. Here, verification does not mean theorem proving. It means evidence from tests, staged rollouts, security checks, and live production signals that is strong enough to block a release or trigger a rollback. It is uncertainty about runtime behavior under realistic conditions, not code cleanliness, not maintainability, and not simply missing unit tests. If you want to spot verification debt without inventing new dashboards, look at proxies you may already track. ... AI can help with parts of verification. It can suggest tests, propose edge cases, and summarize logs. It can raise verification capacity. But it cannot conjure missing intent, and it cannot replace the need to exercise the system and treat the resulting evidence as strong enough to change the release decision. Review is helpful. Review is evidence of readability and intent.
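The "proxies you may already track" idea can be sketched as a simple ratio: of the changes you released, how many shipped without evidence strong enough to have blocked them? The data model below is a hypothetical illustration, not the article's metric.

```python
# Sketch of a verification-debt proxy: the fraction of released changes
# lacking release-blocking evidence (staged rollout, production-like test,
# or live signal that could have triggered a rollback).

def verification_debt_ratio(changes: list[dict]) -> float:
    """Fraction of released changes that shipped without blocking evidence."""
    released = [c for c in changes if c["released"]]
    if not released:
        return 0.0
    unverified = [c for c in released if not c["blocking_evidence"]]
    return len(unverified) / len(released)

changes = [
    {"id": 1, "released": True,  "blocking_evidence": True},   # gated by staged rollout
    {"id": 2, "released": True,  "blocking_evidence": False},  # only code review
    {"id": 3, "released": True,  "blocking_evidence": False},  # AI-generated, unit tests only
    {"id": 4, "released": False, "blocking_evidence": False},  # not yet released
]
print(verification_debt_ratio(changes))  # 0.6666666666666666
```

Note that change 2 counts as debt even though it was reviewed: in the article's terms, review is evidence of readability and intent, not of runtime behavior.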


Executive-level CISO titles surge amid rising scope strain

Executive-level CISOs were more likely to report outside IT than peers with VP or director titles, according to the findings. The report frames this as part of a broader shift in how organisations place accountability for cyber risk and oversight. The findings arrive as boards and senior executives assess cyber exposure alongside other enterprise risks. The report links these expectations to the need for security leaders to engage across legal, risk, operations and other functions. ... Smaller organisations and industries with leaner security teams showed the highest levels of strain, the report says. It adds that CISOs warn these imbalances can delay strategic initiatives and push teams towards reactive security operations. The report positions this issue as a management challenge as well as a governance question. It links scope creep with wider accountability and higher expectations on security leaders, even where budgets and staffing remain constrained. ... Recruiters and employers have watched turnover trends closely as demand for senior security leadership has remained high across many sectors. The report suggests that title, scope and reporting structure form part of how CISOs evaluate roles. ... "The demand for experienced CISOs remains strong as the role continues to become more complex and more 'executive'," said Martano. "Understanding how organizations define scope, reporting structure, and leadership access and visibility is critical for CISOs planning their next move and for companies looking to hire or retain security leaders."


What’s in, and what’s out: Data management in 2026 has a new attitude

Data governance is no longer a bolt-on exercise. Platforms like Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance into the foundation itself. This shift is driven by the realization that external governance layers add friction and rarely deliver reliable end-to-end coverage. The new pattern is native automation. Data quality checks, anomaly alerts and usage monitoring run continuously in the background. ... Companies want pipelines that maintain themselves. They want fewer moving parts and fewer late-night failures caused by an overlooked script. Some organizations are even bypassing pipes altogether. Zero ETL patterns replicate data from operational systems to analytical environments instantly, eliminating the fragility that comes with nightly batch jobs. ... Traditional enterprise warehouses cannot handle unstructured data at scale and cannot deliver the real-time capabilities needed for AI. Yet the opposite extreme has failed too. The highly fragmented Modern Data Stack scattered responsibilities across too many small tools. It created governance chaos and slowed down AI readiness. Even the rigid interpretation of Data Mesh has faded. ... The idea of humans reviewing data manually is no longer realistic. Reactive cleanup costs too much and delivers too little. Passive catalogs that serve as wikis are declining. Active metadata systems that monitor data continuously are now essential.
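The "anomaly alerts ... run continuously in the background" pattern reduces to scoring each new batch against recent history instead of waiting for a human to eyeball the data. The sketch below uses a simple z-score on batch row counts; the metric, window, and threshold are illustrative assumptions — native platform checks are richer but follow the same shape.

```python
# Sketch of a continuous data-quality check: flag a batch metric (here, a
# row count) that deviates more than z_threshold standard deviations from
# recent history.

from statistics import mean, stdev

def is_anomalous(history: list[float], new_value: float, z_threshold: float = 3.0) -> bool:
    """True if new_value is a statistical outlier relative to history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_threshold

row_counts = [1000, 1020, 995, 1010, 1005, 990, 1015]  # recent nightly loads
print(is_anomalous(row_counts, 1008))  # False: within normal variation
print(is_anomalous(row_counts, 120))   # True: likely a broken upstream feed
```

Running such a check on every load — and alerting rather than silently continuing — is what turns a passive catalog into the active metadata system the article describes.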


How Algorithmic Systems Automate Inequality

The deployment of predictive analytics in public administration is usually justified by the twin pillars of austerity and accuracy. Governments and private entities argue that automated decision-making systems reduce administrative bloat while eliminating the subjectivity of human caseworkers. ... This dynamic is clearest in the digitization of the welfare state. When agencies turn to machine learning to detect fraud, they rarely begin with a blank slate, training their models on historical enforcement data. Because low-income and minority populations have historically been subject to higher rates of surveillance and policing, these datasets are saturated with selection bias. The algorithm, lacking sociopolitical context, interprets this over-representation as an objective indicator of risk, identifying correlation and deploying it as causality. ... Algorithmic discrimination, however, is diffuse and difficult to contest. A rejected job applicant or a flagged welfare recipient rarely has access to the proprietary score that disqualified them, let alone the training data or the weighting variable—they face a black box that offers a decision without a rationale. This opacity makes it nearly impossible for an individual to challenge the outcome, effectively insulating the deploying organisation from accountability. ... Algorithmic systems do not observe the world directly; they inherit their view of reality from datasets shaped by prior policy choices and enforcement practices. To assess such systems responsibly requires scrutiny of the provenance of the data on which decisions are built and the assumptions encoded in the variables selected.
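The selection-bias mechanism described above can be demonstrated in a few lines: give two groups the same true fraud rate but audit one far more heavily, and the "detected fraud" counts a model would train on diverge sharply. All numbers here are illustrative, chosen only to make the effect visible.

```python
# Simulation of selection bias in enforcement data: identical underlying
# behavior, unequal surveillance, unequal "detected" outcomes.

import random

random.seed(0)
TRUE_FRAUD_RATE = 0.02                            # identical for both groups
AUDIT_RATE = {"group_a": 0.05, "group_b": 0.50}   # group_b heavily surveilled

detected = {"group_a": 0, "group_b": 0}
for group in detected:
    for _ in range(100_000):
        commits_fraud = random.random() < TRUE_FRAUD_RATE
        audited = random.random() < AUDIT_RATE[group]
        if commits_fraud and audited:
            detected[group] += 1

# Detection counts differ roughly tenfold despite identical behavior:
print(detected["group_b"] > 5 * detected["group_a"])  # True
```

A model trained on `detected` without correcting for audit rates would score group_b as roughly ten times riskier — exactly the "correlation deployed as causality" the article warns about.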


DevSecOps for MLOps: Securing the Full Machine Learning Lifecycle

The term "MLSecOps" sounds like consultant-speak. I was skeptical too. But after auditing ML pipelines at eleven companies over the past eighteen months, I've concluded we need the term because we need the concept — extending DevSecOps practices across the full machine learning lifecycle in ways that account for ML-specific threats. The Cloud Security Alliance's framework is useful here. Securing ML systems means protecting "the confidentiality, integrity, availability, and traceability of data, software, and models." That last word — traceability — is where most teams fail catastrophically. In traditional software, you can trace a deployed binary back to source code, commit hash, build pipeline, and even the engineer who approved the merge. ... Securing ML data pipelines requires adopting practices that feel tedious until the day they save you. I'm talking about data validation frameworks, dataset versioning, anomaly detection at ingestion, and schema enforcement like your business depends on it — because it does. Last September, I worked with an e-commerce company deploying a recommendation model. Their data pipeline pulled from fifteen different sources — user behavior logs, inventory databases, third-party demographic data. Zero validation beyond basic type checking. We implemented Great Expectations — an open-source data validation framework — as a mandatory CI check. 


Autonomous Supply Chains: Catalyst for Building Cyber-Resilience

Autonomous supply chains are becoming essential for building resilience amid rising global disruptions. Enabled by a strong digital core, agentic architecture, AI and advanced data-driven intelligence, together with IoT and robotics, they facilitate operations that continuously learn, adapt and optimize across the value chain. ... Conventional thinking suggests that greater autonomy widens the attack surface and diminishes human oversight, turning it into a security liability. However, if designed with cyber resilience at its core, an autonomous supply chain can act like a “digital immune system,” becoming one of the most powerful enablers of security. ... As AI operations and autonomous supply chains scale, traditional perimeter defenses simply won’t work. Organizations must adopt a Zero Trust security model to eliminate implicit trust at every access point. A Zero Trust model, centered on AI-driven identity and access management, ensures continuous authentication, network micro-segmentation and controlled access across users, devices and partners. By enforcing “never trust, always verify,” organizations can minimize breach impact and prevent attackers from moving freely across systems, maintaining control even in highly automated environments. ... Autonomy in the supply chain thrives on data sharing and connectivity across suppliers, carriers, manufacturers, warehouses and retailers, making end-to-end visibility and governance vital for both efficiency and security.


When enterprise edge cases become core architecture

What matters most is not the presence of any single technology, but the requirements that come with it. Data that once lived in separate systems now must be consistent and trusted. Mobile devices are no longer occasional access points but everyday gateways. Hiring workflows introduce identity and access considerations sooner than many teams planned for. As those realities stack up, decisions that once arrived late in projects are moving closer to the start. Architecture and governance stop being cleanup work and start becoming prerequisites. ... AI is no longer layered onto finished systems. Mobile is no longer treated as an edge. Hiring is no longer insulated from broader governance and security models. Each of these shifts forces organizations to think earlier about data, access, ownership and interoperability than they are used to doing. What has changed is not just ambition, but feasibility. AI can now work across dozens of disparate systems in ways that were previously unrealistic. Long-standing integration challenges are no longer theoretical problems. They are increasingly actionable -- and increasingly unavoidable. ... As a result, integration, identity and governance can no longer sit quietly in the background. These decisions shape whether AI initiatives move beyond experimentation, whether access paths remain defensible and whether risk stays contained or spreads. Organizations that already have a clear view of their data, workflows and access models will find it easier to adapt. 


Why New Enterprise Architecture Must Be Built From Steel, Not Straw

Architecture must reflect future ambition. Ideally, architects build systems with a clear view of where the product and business are heading. When a system architecture is built only for the present situation, it’s likely lacking in flexibility and scalability. That said, sound strategic decisions should be informed by well-attested or well-reasoned trends, not just present needs and aspirations. ... Tech leaders should avoid overcommitting to unproven ideas—i.e., not get "caught up" in the hype. Safe experimentation frameworks (from hypothesis to conclusion) reduce risk by carefully applying best practices to testing out approaches. In a business context, with something as important as the technology foundation the organization runs on, do not let anyone mischaracterize this as timidity. Critical failure is a career-limiting move, and potentially an organizational catastrophe. ... The art lies in designing systems that can absorb future shifts without constant rework. That comes from aligning technical decisions not only with what the company is today, but also with what it intends to become. Future-ready architecture isn’t the comparatively steady and predictable discipline it was before AI-enabled software features. As a consequence, there’s wisdom in staying directional, rather than architecting for the next five years. Align technical decisions with long-term vision, but build in optionality wherever possible.


Why Engineering Culture Is Everything: Building Teams That Actually Work

Culture is a fact, and it's also something intrinsic to human beings. We're people, we have a background. We were raised in one part of the world versus another. We have the way that we talk and things that we care about. All those things influence your team indirectly and directly. It's really important for you, as a leader, to be aware of that. As an engineer, I use a lot of metaphors from monitoring and observability. We always talk about known knowns, known unknowns, and unknown unknowns. Those are really important to understand on a systems level, period, because your sociotechnical system is also a system. The people that you work with, the way you work, your organization, it's a system. And if you're not aware of what are the metrics you need to track, what are the things that are threats to it, the good old strengths, weaknesses, opportunities, and threats. ... What we can learn from other industries is their lessons. Again, we are now on yet another industrial revolution. This time it's more of a knowledge revolution. We can learn from civil engineering: when the brick was invented, that was a revolution. What did people do in order to make sure that bricks were up to standard? That's a fascinating and very curious story about the Freemasons. People forget the Freemasons were a culture about making sure that these construction techniques, even more than the technologies, the techniques, were up to standards.

Daily Tech Digest - January 17, 2026


Quote for the day:

"Success does not consist in never making mistakes but in never making the same one a second time." -- George Bernard Shaw



Expectations from AI ramp up as investors eye returns in 2026

Billions in investment and a concerted focus on the tech over the past few years have led to artificial intelligence (AI) completely transforming how major global industries work. Now, investors are finally expecting to see some returns. ... Investors will no longer be satisfied with AI’s potential future capabilities – they want measurable returns on investment (ROI), says Jiahao Sun, the CEO of Flock.ie, a platform that allows users to build, train and deploy AI models in a decentralised manner. AI investment is entering its “show me the money era”, he says. This isn’t to say that investment in AI will pause, but that investors will begin prioritising critical areas that give guaranteed returns. These could include agentic AI platforms that enable multi-agent orchestration; AI-native infrastructures built for scale, security and interoperability; data modernisation tools that unlock the full potential of unstructured data; and AI observability and safety tools that monitor, govern and refine agent behaviour in real time, explains Neeraj Abhyankar, the VP of Data and AI at R Systems. ... “Single-purpose tools will be absorbed into unified AI platforms. The era of juggling 10 different AI products is ending and the race to offer a complete, integrated experience will intensify,” he adds. Meanwhile, some experts say that the EU’s AI Act will – for better or for worse – prohibit European firms from experimenting with high-risk use cases for AI.


The Next S-Curve of Cybersecurity: Governing Trust in a New Converging Intelligence Economy

Cybersecurity has crossed a threshold where it no longer merely protects technology: it governs trust itself. In an era defined by AI-driven decision-making, decentralized financial systems, cloud-to-edge computing, and the approaching reality of quantum disruption, cyber risk is no longer episodic or containable. It is continuous, compounding, and enterprise-defining. What changed in 2025 wasn’t just the threat landscape. It was the architecture of risk. Identity replaced networks as the dominant attack surface. Software supply chains emerged as systemic liabilities. Machine intelligence, on both sides of the attack, began evolving faster than the controls designed to govern it. For boards, investors, and executives, this marked the end of cybersecurity as a control function and the beginning of cybersecurity as a strategic mandate. ... The next S-curve of cybersecurity is not driven by better tooling. It is driven by a shift in how trust is architected and governed across a converging ecosystem. This new curve is defined by: identity-centric security rather than network-centric defense; data-aware protection instead of application-bound controls; continuous assurance rather than point-in-time audits; and integration with enterprise risk, governance, and capital strategy. Cybersecurity evolves from a defensive posture into a trust architecture discipline, one that governs how intelligence, identity, data, and decisions interact at scale.


Why Mental Fitness Is Leadership's Next Frontier

The distinction Craze draws between mental health and mental fitness is crucial. Mental health, he explains, is ultimately about functioning—being sufficiently free from psychological injury or mental illness to show up and perform one's job. "Your mental health or illness is a private matter between yourself, and perhaps your family or physician, and is a matter of respecting your individual rights," he says. Mental fitness, by contrast, is about capacity. "Assuming you are mentally healthy enough to show up and perform your job, then mental fitness is all about how well your mind performs under load, over time, and in conditions of uncertainty," Craze explains. "Being mentally healthy is a baseline. Being mentally fit is what allows leaders to think clearly at hour ten, stay composed in conflict, and recover quickly after setbacks rather than slowly eroding away," he says. Here, the comparison to elite athletics is instructive. In professional sports, no one confuses being injury-free with being competition-ready. Leadership has been slower to make that distinction, even as today’s executives face sustained cognitive and emotional demands that would have been unthinkable a generation ago. ... One of the most persistent myths in leadership development, according to Craze, is the idea that thinking happens in some abstract cognitive space, detached from the body. "In reality, every act of judgment, attention and self-control has an underlying physiological component and cost," he says. 


Taking the Technical Leadership Path

Without technical alignment, individuals constantly touch the same codebase, adding their feature in the simplest way (for them), but often without ensuring the codebase is kept consistent. Over time, accidental complexity grows: five different libraries that do the same job, or seven different implementations of how an email or push notification is sent. When someone wants to make a future change to that area, their work is now much harder. ... There are plenty of resources available to develop leadership skills. Kua advised breaking broader leadership skills into specific ones, such as coaching, mentoring, communicating, mediating, influencing, etc. Even when someone is not a formal leader, there are daily opportunities to practice these skills in the workplace, he said. ... Formal technical leaders are accountable for ensuring teams have enough technical leadership. One way of doing this is to cultivate an environment where everyone is comfortable stepping up and demonstrating technical leadership. When you do this well, everyone can demonstrate informal technical leadership. Formal leaders exist because not all teams are automatically healthy or high-performing. I’m sure every technical person can remember a team they’ve been on with two engineers constantly debating which approach to take, and wished someone had stepped in to help the team reach a decision. In an ideal world, a formal leader wouldn’t be necessary, but it’s rare that teams live in a perfect world.


From model collapse to citation collapse: risks of over-reliance on AI in the academy

Model collapse is the slow erosion of a generative AI system’s grounding in reality as it learns more and more from machine-generated data rather than from human-generated content. As a result of model collapse, the AI model loses diversity in its outputs, reinforces its misconceptions, increases its confidence in its hallucinations and amplifies its biases. ... Among all the writing tasks involved in research, GenAI appears to be disproportionately good at writing literature reviews. ChatGPT and Google Gemini both have deep research features that try to take a deep dive into the literature on a topic, returning heavily sourced and relatively accurate syntheses of the related research, while typically avoiding the well-documented tendency to hallucinate sources altogether. In some ways, it should not be too surprising that these technologies thrive in this area because literature reviews are exactly the sort of thing GenAI should be good at: textual summaries that stay pretty close to the source material. But here is my major concern: while nothing is fundamentally wrong with the way GenAI surfaces sources for literature reviews, it risks exacerbating the citation Matthew effect that tools like Google Scholar have caused. Modern AI models largely thrive on a snapshot of the internet circa 2022. In fact, I suspect that verifiably pre-2022 datasets will become prized sources for future models, largely untainted by AI-generated content, in much the same way that pre-World War II steel is prized for its lack of radioactive contamination from nuclear testing. 
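The diversity-loss dynamic can be illustrated with a toy simulation (an illustrative sketch, not a real training loop): each "generation" trains only on samples drawn from the previous generation's output, so rare items that fail to be re-sampled vanish forever and the output vocabulary can only shrink.

```python
import random

def next_generation(corpus, size, rng):
    """Sample a new training corpus from the current one, with replacement."""
    return [rng.choice(corpus) for _ in range(size)]

def diversity(corpus):
    """Number of distinct items the 'model' can still produce."""
    return len(set(corpus))

rng = random.Random(42)
corpus = list(range(100))          # generation 0: 100 distinct "ideas"
history = [diversity(corpus)]
for _ in range(20):                # 20 generations of self-training
    corpus = next_generation(corpus, size=100, rng=rng)
    history.append(diversity(corpus))

# Each generation's vocabulary is a subset of the previous one's,
# so diversity is monotonically non-increasing.
print(history[0], "->", history[-1])
```

Because sampling with replacement can never reintroduce a lost item, the collapse is one-way; real models have the same ratchet when machine-generated text crowds human data out of the training mix.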


Why is Debugging Hard? How to Develop an Effective Debugging Mindset

Here’s how most developers debug code: Something is broken; Let me change the line; Let’s refresh (wishing the error would go away); Hmm… still broken!; Now, let me add a console.log(); Let me refresh again (Ah, this time it may…); Ok, looks like this time it worked! This is reaction-based debugging. It’s like throwing a stone in the dark or searching for a needle in a haystack. It feels busy, it sounds productive, but it’s mostly guessing. And guessing doesn’t scale in programming. This approach and the guessing mindset make debugging hard for developers. The lack of a methodology and a solid approach leaves many devs feeling helpless and frustrated, which makes the process feel much more difficult than coding. This is why we need a different mental model, a defined skillset to master the art of debugging. ... Good debuggers don’t fight bugs. They investigate them. They don’t start with the mindset of “How do I fix this?”. They start with, “Why must this bug exist?” This one question changes everything. When you ask about the existence of a bug, you go back through the history to collect information about the code, its changes, and its flow. Then, you feed this information through a “mental model” to make decisions that lead you to the fix. ... Once the facts are clear and assumptions are visible, the debugging makes its way forward. Now you’ll need to form a hypothesis. A hypothesis is a simple cause-and-effect statement: if this assumption is wrong, then the observed behaviour makes sense. Test it; if it holds, you’ve found the cause and can apply a fix. If not, form the next hypothesis.
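The facts-then-hypothesis loop can be sketched in miniature. The function, input, and hypothesis below are hypothetical, invented for illustration; the point is that the hypothesis is stated as a testable claim rather than a guessed fix.

```python
# Symptom (hypothetical): cart_total() returns the wrong total for some carts.
def cart_total(prices):
    return sum(prices)

# Step 1: collect facts instead of guessing, e.g. a captured failing input.
bad_input = [19.99, "5.00", 3.50]

# Step 2: state a hypothesis as a cause-and-effect claim:
# "IF some prices arrive as strings, THEN summation misbehaves."
def hypothesis_holds(prices):
    return any(isinstance(p, str) for p in prices)

# Step 3: test the hypothesis against the facts, then fix the cause
# (coerce at the boundary), not the symptom.
if hypothesis_holds(bad_input):
    total = cart_total([float(p) for p in bad_input])
else:
    total = cart_total(bad_input)

print(round(total, 2))
```

If the hypothesis had not held, the same loop would continue with the next candidate cause, each test narrowing the search rather than poking at random lines.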


Promptware Kill Chain – Five-Step Kill Chain Model for Analyzing Cyberthreats

While the security industry has focused narrowly on prompt injection as a catch-all term, the reality is far more complex. Attacks now follow systematic, sequential patterns: initial access through malicious prompts, privilege escalation by bypassing safety constraints, establishing persistence in system memory, moving laterally across connected services, and finally executing their objectives. This mirrors how traditional malware campaigns unfold, suggesting that conventional cybersecurity knowledge can inform AI security strategies. ... The promptware kill chain begins with Initial Access, where attackers insert malicious instructions through prompt injection—either directly from users or indirectly through poisoned documents retrieved by the system. The second phase, Privilege Escalation, involves jailbreaking techniques that bypass safety training designed to refuse harmful requests. ... Traditional malware achieves persistence through registry modifications or scheduled tasks. Promptware exploits the data stores that LLM applications depend on. Retrieval-dependent persistence embeds payloads in data repositories like email systems or knowledge bases, reactivating when the system retrieves similar content. Even more potent is retrieval-independent persistence, which targets the agent’s memory directly, ensuring the malicious instructions execute on every interaction regardless of user input.
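The five phases above can be modeled directly, which is how a triage tool might map observed indicators onto the kill chain. The phase names follow the article; the indicator strings and mapping are hypothetical examples, not a published taxonomy.

```python
from enum import Enum

class PromptwarePhase(Enum):
    INITIAL_ACCESS = 1        # malicious prompt, direct or via poisoned docs
    PRIVILEGE_ESCALATION = 2  # jailbreak bypassing safety training
    PERSISTENCE = 3           # payloads planted in data stores or agent memory
    LATERAL_MOVEMENT = 4      # spreading across connected services
    ACTIONS_ON_OBJECTIVE = 5  # executing the attacker's end goal

# Hypothetical mapping from observed indicators to the phase they suggest.
INDICATOR_PHASE = {
    "instruction_in_retrieved_doc": PromptwarePhase.INITIAL_ACCESS,
    "safety_refusal_bypassed": PromptwarePhase.PRIVILEGE_ESCALATION,
    "payload_written_to_memory": PromptwarePhase.PERSISTENCE,
    "unexpected_cross_service_call": PromptwarePhase.LATERAL_MOVEMENT,
    "bulk_data_exfil_attempt": PromptwarePhase.ACTIONS_ON_OBJECTIVE,
}

def triage(indicators):
    """Return the deepest kill-chain phase suggested by the indicators."""
    phases = [INDICATOR_PHASE[i] for i in indicators if i in INDICATOR_PHASE]
    return max(phases, key=lambda p: p.value) if phases else None

alert = triage(["instruction_in_retrieved_doc", "payload_written_to_memory"])
print(alert.name)
```

Reporting the deepest phase reached mirrors how traditional kill-chain analysis prioritizes incidents: an attack already at persistence demands a different response than one caught at initial access.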


AI SOC Agents Are Only as Good as the Data They Are Fed

If your telemetry is fragmented, your schemas are inconsistent, or your context is missing, you won’t get faster responses from AI SOC agents. You’ll just get faster mistakes. These agents are being built to excel at cybersecurity analysis and decision support. They are not constructed to wrangle data collection, cleansing, normalization, and governance across dozens of sources. ... Modern SOCs integrate telemetry from EDRs, cloud providers, identity, networks, SaaS apps, data lakes, and more. Normalizing all that into a common schema eliminates the constant “translation tax.” An agent that can analyze standardized fields once, and doesn’t have to re-learn CrowdStrike vs. Splunk Search Processing Language vs. vendor-specific JavaScript Object Notation, will make faster, more reliable decisions. ... If the agent must “crawl back” into five source systems to enrich an alert on its own, latency spikes and success rates drop. The right move is to centralize, normalize, and clean security data into an accessible store, like a data lake, for your AI SOC agents and continue streaming a distilled, security-relevant subset to the Security Information and Event Management (SIEM) platform for detections and cybersecurity analysts. Let the SIEM be the place where detections originate; let the lake be the place your agents do their deep thinking. The problem is that the industry’s largest SIEM, Endpoint Detection and Response (EDR), and Security Orchestration, Automation, and Response (SOAR) platforms are consolidating into vertically integrated ecosystems. ...
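The "translation tax" point can be sketched concretely: pay the mapping cost once at ingest so the agent only ever sees one schema. The field names below are illustrative stand-ins, not real vendor schemas.

```python
# Normalize two vendor-specific event shapes into one common schema
# so downstream analysis works on standardized fields only.
COMMON_FIELDS = ("timestamp", "host", "user", "action")

def from_vendor_a(event):
    # Hypothetical vendor A nests actor details under "metadata".
    return {
        "timestamp": event["ts"],
        "host": event["metadata"]["hostname"],
        "user": event["metadata"]["actor"],
        "action": event["event_type"],
    }

def from_vendor_b(event):
    # Hypothetical vendor B uses flat, differently named keys.
    return {
        "timestamp": event["_time"],
        "host": event["dest_host"],
        "user": event["user_name"],
        "action": event["signature"],
    }

NORMALIZERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalize(source, event):
    """Pay the translation tax once, at ingest, not per analysis."""
    record = NORMALIZERS[source](event)
    assert set(record) == set(COMMON_FIELDS)
    return record

a = normalize("vendor_a", {"ts": "2026-01-17T10:00:00Z",
                           "metadata": {"hostname": "web-01", "actor": "svc"},
                           "event_type": "process_start"})
b = normalize("vendor_b", {"_time": "2026-01-17T10:00:05Z",
                           "dest_host": "web-01", "user_name": "svc",
                           "signature": "process_start"})
print(a["action"], b["action"])
```

Once every source lands in the common shape, an agent's enrichment query is one lookup against the lake instead of five crawls back into source systems.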


IT portfolio management: Optimizing IT assets for business value

The enterprise’s most critical systems for conducting day-to-day business are a category unto themselves. These systems may be readily apparent, or hidden deep in a technical stack. So all assets should be evaluated as to how mission-critical they are. ... The goal of an IT portfolio is to contain assets that are presently relevant and will continue to be relevant well into the future. Consequently, asset risk should be evaluated for each IT resource. Is the resource at risk for vendor sunsetting or obsolescence? Is the vendor itself unstable? Does IT have the on-staff resources to continue running a given system, no matter how good it is (a custom legacy system written in COBOL and Assembler, for example)? Is a particular system or piece of hardware becoming too expensive to run? Do existing IT resources have a clear path to integration with the new technologies that will populate IT in the future? ... Is every IT asset pulling its weight? Like monetary and stock investments, technologies under management must show they are continuing to produce measurable and sustainable value. The primary indicators of asset value that IT uses are total cost of ownership (TCO) and return on investment (ROI). TCO is what gauges the value of an asset over time. For instance, investments in new servers for the data center might have paid off four years ago, but now the data center has an aging bay of servers with obsolete technology and it is cheaper to relocate compute to the cloud.
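The server-vs-cloud judgment in that last example is a simple cost calculation. A minimal sketch, with all figures hypothetical: on-prem maintenance grows as hardware ages, cloud is a flat annual fee, and the portfolio question is in which year the aging asset stops pulling its weight.

```python
def onprem_annual(year, base_maint=10_000, growth=0.25):
    """Annual maintenance cost for aging servers; grows ~25% per year
    as hardware leaves warranty and parts get scarce (assumed figures)."""
    return base_maint * (1 + growth) ** year

CLOUD_ANNUAL = 35_000  # hypothetical flat cloud fee for equivalent compute

# First year in which keeping the old servers costs more than cloud.
switch_year = next(y for y in range(1, 20)
                   if onprem_annual(y) > CLOUD_ANNUAL)
print(switch_year)
```

The same structure extends to full TCO (add the purchase price to the cumulative on-prem side) or ROI (divide value delivered by cumulative cost); the point is that "pulling its weight" should be a number recomputed each review cycle, not a feeling.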


Ransomware activity never dies, it multiplies

One of the most significant findings in the study involves extortion campaigns that do not rely on encryption. These attacks focus on stealing data and threatening to publish it, skipping the deployment of ransomware entirely. Encryption-based attacks remained just above 4,700 incidents annually. When data theft extortion is included, total extortion incidents reached 6,182 in 2025. That represents a 23% increase compared with 2024. Snakefly, which runs the Cl0p ransomware operation, played a major role in this shift. These actors exploited vulnerabilities in widely used enterprise software to extract data at scale. Victims included large organizations in government and industry, with some campaigns affecting hundreds of companies through a single flaw. ... A newer ransomware strain tracked as Warlock drew attention due to its tooling and infrastructure. First observed in mid-2025, Warlock attacks exploited a zero-day vulnerability in Microsoft SharePoint and used DLL sideloading for payload delivery. Analysis linked Warlock to tooling previously associated with Chinese espionage activity, including signed drivers and custom command frameworks. Some ransomware payloads appeared to be modified versions of leaked LockBit code, combined with older malware components. The study notes overlaps between ransomware activity and long-running espionage campaigns, where ransomware deployment may serve operational or financial goals within broader intrusion efforts.

Daily Tech Digest - January 16, 2026


Quote for the day:

"Common sense is something that everyone needs, few have, and none think they lack" -- Benjamin Franklin



If you think agentic AI is a challenge, you’re not ready for what’s coming

The convergence of technology is happening all at once. You’ve got new processes being put in place while simultaneously replacing legacy infrastructure. You’ve got new technology, new talent being rolled into this convergence. Meanwhile, physical AI and quantum are coming quickly on top of agentic. Adaptability is the new job security. The ability to adapt is the most important skill for employees and the most important organizational differentiator. Organizations that can adapt quickly to new technology, redefining processes and training — that’s how they’ll differentiate. The ones that can’t will fall behind. ... It’s becoming not a technology issue as much as a business and process issue. The technology — whether AI, agentic AI, physical AI, or quantum — mostly exists to solve today’s problems. The issue is training, people, and adoption. ... Some industries, like financial services and healthcare [and] precision medicine — financial services has over-invested for decades in data and data quality for compliance reasons. They can use it for AI and quantum. Precision medicine is another category with high data quality. But without the right data, infrastructure, and sandbox, you’ll spread yourself too thin. You may try things, but it doesn’t get you value. Without a defined use case and focus area, you create innovation theater. Companies are getting focused on that first step: What use case am I trying to solve? 


AI Is Compressing the Coding Layer: Here's What Developers Do Next

One of the most encouraging developments in 2025 has been AI's ability to accelerate developer progression and skill growth. In our Q4 survey, 74% of developers said AI strengthened their technical skills. As lower-level execution becomes increasingly automated, developers who can work across systems, evaluate tradeoffs, and guide AI-driven workflows are progressing faster than in previous cycles. ... More than half (55%) also expect AI proficiency to accelerate progression and compensation. This reflects a rising demand for talent that can pair technical depth with architectural and systems thinking. ... Engineering teams are beginning to resemble higher-skill strategic units with stronger cross-functional alignment and architectural leadership. 58% of developers expect teams to become smaller and leaner next year as entry-level coding tasks are increasingly automated. Similarly, more than half (58%) of project managers report that 10-30% of project tasks could be handled by AI-driven workflows in 2026, including documentation generation, automated testing, code completion/refactoring, and requirements/user story drafting. These aren't the most visible tasks, but they've historically consumed a disproportionate share of time. ... To thrive in 2026 and beyond, developers should build competency in orchestrating AI workflows, invest in architectural and systems design literacy, and strengthen their fluency in data engineering, security, and cloud foundations.


Insider risk in an age of workforce volatility

Economic pressures, AI-driven job displacement, and relentless organizational churn are driving insider risk to its highest level in years. Workforce instability erodes loyalty and heightens grievances. The accelerating deployment of powerful new tools, such as AI agents, amplifies the threats from within, both human and machine. ... This surge, up significantly from prior years, creates fertile ground for disgruntlement: financial stress, resentment over automation, and opportunistic behavior, from negligence and careless data handling to deliberate malevolent actions like data exfiltration and credential monetization. ... They are becoming exploitable vectors for silent data exfiltration, disruption, or unintended catastrophe. This is particularly concerning when volatility reduces human oversight and rushes deployment without commensurate controls. Palo Alto Networks’ 2026 cybersecurity predictions emphasize that these agents introduce vulnerabilities such as goal hijacking, tool misuse, prompt injection, and shadow deployment, often amplified by the very churn that drives their adoption across multinational organizations. Security leaders are taking note. ... There is no doubt that such anxiety from ongoing layoffs and role uncertainty can lead to nervous mistakes, privilege hoarding, or rushed workarounds that expose data without intent to harm. Yet harm is actualized. The result is a heightened insider risk landscape that is amplified when the interplay between human churn and machine proliferation is overlooked.


Creating Trust Through Data Is a Long Game — Advantage Solutions CDO

“Trust starts with the rapport with individuals. It starts with listening. It doesn’t start with building solutions.” She highlights that facts alone don’t solve decision-making challenges. Business intuition still matters — but it must be balanced with truth derived from data. “Sometimes the facts alone aren’t enough. There’s a balance between data and the business-led gut experience. All of it is important.” Trust requires time, consistency, and transparency. ... O’Hazo frames AI not as a disruption, but as a spotlight. “AI is almost spotlighting the need for foundational data.” The reason: modern organizations need to answer multidimensional questions, not isolated ones. “It’s no longer a singular flat question. It’s ‘How is X related to Y, and what are the factors that drive growth?’ To answer that, you need data from so many different functions organized and architected the right way.” This interconnection does more than support analytics; it transforms relationships across the business. “When you start to interconnect the data, you naturally and organically have meaningful conversations across functions.” ... Turajski raises the common phrase “source of truth,” asking whether AI has changed how organizations think about it. O’Hazo’s response is clear: AI doesn’t rewrite the rules; it reveals the gaps. “AI is spotlighting, sometimes unfavorably, where the pre-work on the data foundation hasn’t accelerated enough.” This wake-up call has elevated data readiness to board-level priority.


The workforce shift — why CIOs and people leaders must partner harder than ever

For the last decade or so, digital transformation has been framed as a technology challenge. New platforms. Cloud migrations. Data lakes. APIs. Automation. Security layered on top. It was complex, often messy and rarely finished — but the underlying assumption stayed the same: Humans remained at the center of work, with technology enabling them. ... AI is just technology. But it feels human because it has been designed to interact with us in human ways. Large language models combined with domain data create the illusion that AI can do anything. Maybe one day it will. Right now, what it can do is expose how unprepared most organizations are for the scale and pace of change it brings. We are all chasing competitive advantages — revenue growth, margin improvement, improving resilience — and AI is being positioned as the shortcut. But unlike previous waves of automation, this one does not sit neatly inside a single function. ... Perception becomes reality very quickly inside organizations. If people believe AI is a colleague, what does that mean for accountability, trust and decision-making? Who owns outcomes when work is split between humans and machines? These are not abstract questions — they show up in performance, morale and risk. ... For years, organizations have layered technology on top of broken processes. Sometimes that was a conscious trade-off to move faster. Sometimes it was avoidance. Either way, humans could usually compensate.


CIO Playbook for Post-Quantum Security

While the scope of migration to post-quantum cryptography can be daunting, CIOs can follow several practical steps to make the project more manageable, said Sandy Carielli, vice president and principal analyst at Forrester. "There's a process here that's going to need to be addressed in order to get to where the organization needs to be," she said. "Discover, prioritize, remediate and add cryptographic agility." One of the biggest misconceptions she sees from CIOs is on what being ready for quantum-resistant security means. "Sometimes people have the misconception that you need a quantum computer for quantum security," Carielli said. "You don't need quantum computers. And, in fact, you're not going to. You're doing this to be protected." ... Designing for crypto agility is the final step in the process, and organizations should strive to create systems so that algorithm changes necessitate configuration changes, not re-architecting. "Good for crypto agility means that the next time an algorithm is broken, we are able to adapt to that by changing a configuration. We're able to adapt in a matter of weeks, rather than a matter of years," Carielli said. The regulatory impact should make quantum migration an easier sell than it would have been even a few years ago, as deadlines loom in the United States, Australia, EU and Asia countries. "Regardless of when a quantum computer is going to be able to break today's cryptography, we are being asked to migrate by the organizations and the countries that we want to do business with," Carielli said.
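Carielli's crypto-agility goal, where swapping a broken algorithm is a configuration change rather than a re-architecture, amounts to a simple design pattern: route every cryptographic call through a named registry. A minimal sketch, using two HMAC variants as stand-ins for the classical and post-quantum schemes a real registry would hold:

```python
import hashlib
import hmac

# Algorithm chosen by configuration, never hard-coded at call sites.
MAC_REGISTRY = {
    "hmac-sha256":
        lambda key, msg: hmac.new(key, msg, hashlib.sha256).hexdigest(),
    "hmac-sha3-256":
        lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).hexdigest(),
}

CONFIG = {"mac_algorithm": "hmac-sha256"}   # the one line migration touches

def protect(key: bytes, msg: bytes) -> str:
    """Application code asks the registry; it never names an algorithm."""
    return MAC_REGISTRY[CONFIG["mac_algorithm"]](key, msg)

tag_old = protect(b"k", b"payload")
CONFIG["mac_algorithm"] = "hmac-sha3-256"   # "broken" algorithm swapped out
tag_new = protect(b"k", b"payload")
print(tag_old != tag_new)
```

Because no application code names an algorithm directly, the hypothetical migration above takes a config edit and a redeploy, weeks rather than years, which is exactly the property the discover-prioritize-remediate process is meant to end with.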


When your platform team can’t say yes: How away-teaming unlocks stuck roadmaps

Away teaming inverts the traditional model. Instead of platform engineers embedding with product teams to provide expertise, product engineers temporarily join platform teams to build required capabilities under platform guidance. ... Product teams have already secured funding for their initiatives. Away teaming redirects that investment from building a product-specific solution into creating a reusable platform capability. For platform teams, this expands effective capacity without headcount growth. Platform engineers provide design review, answer questions and conduct code review. ... Product engineers need to view away teaming as a growth opportunity, not a sacrifice. Frame it explicitly as platform engineering experience that builds broader systems thinking skills and deepens architectural understanding. ... Away teaming works best for capabilities in the middle ground: too product-specific for immediate platform prioritization, yet general enough that future products will benefit from reuse. Away teaming also has scale limits. A platform team might effectively support two concurrent away team engagements. Beyond that, guidance capacity becomes strained. ... Product engineers who complete away team assignments become platform advocates. They understand the architectural tradeoffs and can credibly explain platform limitations, reducing tension and frustration between teams.


Forget Predictions: True 2026 Cybersecurity Priorities From Leaders

Most organizations, large and small, are inundated with manual tasks, which makes many of our processes very expensive. This is compounded by economic forces that many organizations face today, which limits their ability to hire additional staff. For years, the industry has been working to solve these problems with SOAR, RPA Bots, or other programmatic solutions to do this bulk work. I think the use of AI extends the work we have already done in that space, but in a broader application. ... The promise of SOAR is centralized orchestration. The reality is months of costly, brittle integration work that breaks with every vendor update. We spend more time maintaining the automation pipeline than the pipeline saves us. We don’t have enough people who can build, train, and maintain sophisticated AI/ML models while understanding threat hunting. The technology requires a new, hyper-specialized skill set, defeating the goal of efficiency. The single most impactful shift for efficiency in 2026 will be the Process and People shift toward Radical Simplification and Security Accountability Diffusion. ... “The shift I’m pushing for is toward collaborative intelligence that actually tells us which threats matter for our specific environment. Context is king here, and I’m encouraged by the emergence of solutions that analyze signals across multiple organizations to provide internet-wide defense. But this only works if we’re all willing to put in what we want to get out of it, meaning reliably sharing intelligence with peers and industry groups, not just consuming it.”


DCI launches digital identity interoperability standards for social protection

Authorities are increasingly leveraging digital identification systems to achieve this goal and ensure their social protection (SP) programs are inclusive. ... These open standards provide a trusted mechanism for social protection systems to authenticate individuals and request verified identity data, such as demographic attributes or authentication tokens, in a privacy-preserving way. The standards are not about building ID systems themselves or about integrating with health or education platforms, DCI emphasized. Rather, they’re focused squarely on enabling interoperability between ID and social protection systems. This includes supporting social registries, integrated beneficiary registries and other SP platforms “to connect meaningfully and securely with ID systems.” DCI said the release culminates months of research, peer review and collaboration by a standards committee comprising experts from 20 organizations. By establishing a common technical language, the initiative aims to strengthen digital public infrastructure and foster greater trust in the delivery of social protection programs. ... “Digital transformation of social protection is not an end in itself and it’s not only about cutting costs,” said ILO director Shahra Razavi. “It is about making sure everyone has access to benefits and services, particularly those most at risk of vulnerability and exclusion.”


Data Governance in the AI Era: Are We Solving the Wrong Problem?

The foundation of any effective AI governance model starts with visibility and control. Create a living list of sanctioned AI tools tied to enterprise accounts, as distinct from personal accounts and shadow IT. Once you have that visibility, require that all AI usage go through company-issued credentials, ensuring every login is accountable and logged. Users authenticate through your identity provider, and audit trails capture usage patterns. When you can trace who accessed which tool and when, you create records that support both compliance requirements and incident investigation. ... One of the biggest mistakes organizations make is treating all data the same way, imposing blanket bans that create friction without proportional security benefit. A more effective approach classifies data by sensitivity level and creates rules aligned with that classification. ... If your policy today looks like a wall of “no,” you’re probably protecting yourself from the wrong consequence. The real risk isn’t that AI will suddenly go rogue; it’s more likely that your people will use it without guidance, visibility, or control. Unmanaged adoption creates the very data leakage you’re trying to prevent. Managed adoption, through clear policy and good governance, creates visibility, accountability, and the ability to detect and respond to actual incidents. Data professionals occupy a critical position in this conversation: they own the data architecture, the classification systems, and the audit trails that make AI governance possible. 
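Classification-aligned rules can be expressed as a small policy table rather than a blanket ban. The labels, tiers, and rules below are illustrative assumptions, not a standard; a real policy would come from your data governance program.

```python
# Hypothetical sensitivity tiers mapped to AI-usage rules.
POLICY = {
    "public":       {"ai_allowed": True,  "condition": None},
    "internal":     {"ai_allowed": True,  "condition": None},
    "confidential": {"ai_allowed": True,  "condition": "sanctioned tools only"},
    "restricted":   {"ai_allowed": False, "condition": "CISO exception required"},
}

def ai_usage_rule(classification: str) -> str:
    """Return the AI-usage decision for a given data classification."""
    rule = POLICY[classification.lower()]
    if not rule["ai_allowed"]:
        return f"blocked ({rule['condition']})"
    if rule["condition"]:
        return f"allowed, {rule['condition']}"
    return "allowed"

print(ai_usage_rule("confidential"))
```

Encoding the policy as data rather than scattered if-statements keeps it auditable and lets the same table drive DLP rules, identity-provider app assignments, and user-facing guidance from one source.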

Daily Tech Digest - January 15, 2026


Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang


AI agents can talk — orchestration is what makes them work together

“Agent-to-agent communications is emerging as a really big deal,” G2’s chief innovation officer Tim Sanders told VentureBeat. “Because if you don't orchestrate it, you get misunderstandings, like people speaking foreign languages to each other. Those misunderstandings reduce the quality of actions and raise the specter of hallucinations, which could be security incidents or data leakage.” ... In another critical evolution in the agentic era, human evaluators will become designers, moving from human-in-the-loop to human-on-the-loop, according to Sanders. That is: They will begin designing agents to automate workflows. Agent builder platforms continue to innovate their no-code solutions, Sanders said, meaning nearly anyone can now stand up an agent using natural language. “This will democratize agentic AI, and the super skill will be the ability to express a goal, provide context and envision pitfalls, very similar to a good people manager today.” ... Organizations should begin “expeditious programs” to infuse agents across workflows, especially with highly repetitive work that poses bottlenecks. Likely at first, there will be a strong human-in-the-loop element to ensure quality and promote change management. “Serving as an evaluator will strengthen the understanding of how these systems work,” Sanders said, “and eventually enable all of us to operate upstream in agentic workflows instead of downstream.”


Integrating AI-Enhanced Microservices in SAFe 5.0 Framework

AI-driven microservices can be a game-changer for Lean Portfolio Management within SAFe. By optimizing decision analytics and enhancing value stream performance, AI simplifies, rather than complicates. I know what you’re thinking: AI tools can add complexity. One client put this to the test, and we found AI helped reduce the noise. It sliced through the data smog to identify hidden value streams and automate mundane tasks like financial forecasting and risk management. ... Integrating decentralized AI models into SAFe’s ARTs can significantly enhance their autonomy. During a high-stakes project, we shifted from a centralized to a decentralized model, which allowed ARTs to self-optimize and adapt to shifting priorities seamlessly. It was like giving ARTs a brain of their own. Decentralized AI models reduce the bottlenecks you'd typically encounter in centralized systems. Think of the ARTs as small startups within the larger enterprise ecosystem, each capable of making swift, informed decisions. ... This isn’t just a tech enthusiast's dream—it's an emerging reality. The maturity of AI technologies spells a future where enterprises aren’t just keeping up; they’re setting the pace. So, if there’s a single, actionable insight to glean from my journey, it’s this: enterprises need to actively pursue cross-industry collaborations, invest in AI-powered microservices, and hone their Agile professionals’ skill sets.


Incorporating Geopolitical Risk Into Your IT Strategy

IT organizations know how to plan for unexpected outages, but even the most rigorously designed strategy is vulnerable to the shifting winds of geopolitics. CIOs and technology leaders need to know how their organizations will respond to geopolitical disruptions, and scenario planning needs to be a priority. ... "The IT department can treat geopolitical disruption as an expected operational variable rather than an unforeseen catastrophe. Good and tested enterprise risk management frameworks, investment in government affairs partnerships and ongoing board engagement should start to manage and prepare for this," Dixon said. CIOs need to do scenario modeling around the risks facing their enterprise, and evaluate how IT is teaming with business units, security teams and the CISO on a cohesive tech strategy that builds security, including artificial intelligence security, in from the ground up, said Sean Joyce ... "You're as strong as your weakest link," Joyce said. "As geopolitical risk becomes more prominent, you're going to see tools like cyber being leveraged by countries, particularly those that don't have stronger military or other capabilities. For some, it may be the only tool they can leverage." Physical infrastructure, geography and power supplies are also now areas of risk CIOs need to consider, and infrastructure strategy must align with sustainability, energy realities and geopolitical stability. 


Six Architecture Challenges for Startups

The risk is not that the first version is imperfect; that is inevitable. The risk is that the team keeps layering new functionality on top of an accidental architecture. At some point, the cost of change becomes so high that every small modification feels dangerous. The architectural challenge is to intentionally decide where to accept debt and where to invest in structure. Startups need a minimal set of principles – for example, clear domain boundaries, basic API hygiene, and a simple deployment model – that allow speed without locking the product into a dead end. ... If the product team is still validating pricing models, redefining the customer journey, or experimenting with different verticals, any rigid decomposition can turn into friction. Yet avoiding boundaries altogether leads to a “big ball of mud” that is equally hard to evolve. A practical approach is to use provisional boundaries based on current value streams – onboarding, transaction processing, analytics, etc. – and treat them as hypotheses. The challenge is not to find the perfect structure from day one, but to keep those boundaries explicit and adjustable as the business model evolves. ... Startups must make conscious decisions about where they are comfortable being tightly coupled to a provider and where they need portability. That requires viewing cloud services through a business lens: What is strategic IP, what is replaceable, and what is pure commodity? Aligning these categories with architectural choices is a non-trivial design challenge, not just a procurement decision. 
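One way to keep provisional boundaries "explicit and adjustable", as the excerpt suggests, is to let value streams depend only on small interfaces rather than on each other's internals. A minimal sketch, with hypothetical module and method names:

```python
# Sketch of a provisional boundary treated as a hypothesis: onboarding
# talks to transaction processing only through an explicit interface,
# so the boundary can later move (e.g. into its own service) without
# untangling internals. All names here are illustrative assumptions.
from typing import Protocol

class TransactionProcessing(Protocol):
    def charge(self, customer_id: str, amount_cents: int) -> str: ...

class InMemoryTransactions:
    """Stand-in implementation; could later become a separate service."""
    def __init__(self):
        self.ledger = []
    def charge(self, customer_id, amount_cents):
        tx_id = f"tx-{len(self.ledger) + 1}"
        self.ledger.append((tx_id, customer_id, amount_cents))
        return tx_id

def onboard_customer(customer_id: str, payments: TransactionProcessing) -> str:
    # Onboarding depends on the boundary, not on payment internals.
    return payments.charge(customer_id, 0)  # e.g. a zero-amount card check

print(onboard_customer("cust-1", InMemoryTransactions()))
```

If the pricing model pivot later forces the boundary to move, only the `TransactionProcessing` interface changes; callers never saw the ledger.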


Platform-as-a-Product: Declarative Infrastructure for Developer Velocity

Without centralized guardrails, teams often compensate by over-allocating resources "to be safe", leading to inconsistent environments and unnecessary cloud spend that is only discovered after deployment. ... What is missing is a developer-friendly abstraction that brings these related concerns together. Developers need a way to express intent (not only what infrastructure is required, but also how the application should be built, deployed, configured across environments, secured, and sized) without having to implement the mechanics of each underlying system. From a platform engineering perspective, this abstraction represents the core of an internal developer platform and can be implemented as a lightweight Python-based platform framework. ... The platform comprises several interconnected components. GitLab pipelines coordinate everything, pulling code from repositories, building and unit testing applications (with tests written by developers), checking security, creating cloud infrastructure with Terraform/IaC, and deploying to Kubernetes clusters with Puppet configuration management. The configuration YAML file controls all of this, telling each component what to do. The architecture clearly separates concerns: the CI pipeline handles code building, testing, and vulnerability scanning, while the CD pipeline handles deployment: creating cloud resources, updating Kubernetes, and configuring environments. 
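A lightweight Python framework of this kind might validate the developer's parsed intent manifest against platform guardrails before any Terraform or Kubernetes work runs. This is a hedged sketch: the field names, caps, and manifest shape are illustrative assumptions, not the article's actual schema.

```python
# Hypothetical sketch of the "developer intent" abstraction: validate a
# parsed config manifest against platform caps, catching the
# "over-allocate to be safe" pattern before deployment, not in the bill.
# Field names and cap values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DeploymentIntent:
    app: str
    environment: str
    cpu_millicores: int
    memory_mi: int
    replicas: int

CAPS = {"cpu_millicores": 2000, "memory_mi": 4096, "replicas": 10}

def validate_intent(manifest: dict) -> DeploymentIntent:
    """Normalize units and reject over-allocation up front."""
    res = manifest["resources"]
    intent = DeploymentIntent(
        app=manifest["app"],
        environment=manifest["environment"],
        cpu_millicores=int(res["cpu"].rstrip("m")),
        memory_mi=int(res["memory"].rstrip("Mi")),
        replicas=int(res["replicas"]),
    )
    for field, cap in CAPS.items():
        value = getattr(intent, field)
        if value > cap:
            raise ValueError(f"{field}={value} exceeds platform cap {cap}")
    return intent

# What the configuration YAML might deserialize into:
manifest = {
    "app": "orders-api",
    "environment": "staging",
    "resources": {"cpu": "500m", "memory": "512Mi", "replicas": 3},
}
print(validate_intent(manifest))
```

Everything downstream (Terraform, Kubernetes manifests, Puppet config) would then be generated from the validated `DeploymentIntent`, so developers express the what and the platform owns the how.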


(Re)introducing Adaptive Business Continuity

Adaptive BC is designed to provide a framework that delivers better outcomes when organizations deal with losses. The result may be a reduction in documentation (something I greatly favor) but that is not a stated goal. ... My experience over the years has led me to conclude that trying to define priorities for the resumption of services is wasted effort. Many activities can take place in parallel, and priorities will change when disasters occur. A perfect example is the governmental lockdowns and health authority mandates that followed the emergence of COVID. The result is that demand for products and services changed drastically, upending previous priorities. Priorities may be defined following adaptive principles, but it is not at all a stated component of the Adaptive framework. ... For a number of reasons, I would like to see the word “plan” used a lot less within our profession. Seeing the word “strategy” in its place would be a step in the right direction. Strategy improvement is not, however, a key outcome of Adaptive BC efforts. There is some benefit to having clearly defined recovery strategies, but strategies only provide benefit to competent and empowered teams armed with the resources they need to carry out the mission. For this reason, I always emphasize the importance of focusing efforts on capabilities and consider plans and strategies as little more than supporting tools for any business continuity program. The improvement of strategies and/or plans is simply not an expected outcome of Adaptive BC work.


Exactly What To Automate With AI In 2026 For Faster Business Growth

Most founders automate the wrong things. They start with the flashy stuff, the complicated tools and fancy dashboards, while ignoring the repetitive tasks quietly draining their hours. Faster, cleaner growth comes from removing friction from the activities that actually grow your business. ... You shouldn't embark on a day's worth of admin tasks every time a new client says yes. It will only slow you down. Make it easy for them to pay, get a receipt, complete an onboarding form, and submit the required information. On your end, have the Google Drive folders, follow-up emails, and team briefings set up without you lifting a finger. Question everything you currently do manually. There is no reason an AI agent couldn't be handling the sequence. All the tools you pay for already have integrations with each other; you're just not using them. The goal is that you could sign client after client because onboarding takes minutes, not hours. ... AI-generated content is awful when you use it wrong. But that doesn't mean you shouldn't involve AI in your content production process. Content still matters in marketing, whether long-form articles, videos, or social media visuals. You need to be part of the conversation, but only with relevant, authentic material. You cannot outproduce everyone manually, so use automations and retain your human genius for the finishing touches. ... The more your life admin runs on autopilot, the more you free up time and energy for your business. 
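The onboarding sequence described above is, structurally, just an ordered pipeline of steps. A toy sketch, where every function is a stub standing in for a real integration (payments, Drive, email); none of these names refer to an actual API:

```python
# Illustrative sketch of an automated client-onboarding sequence.
# Each step is a stub for a real integration; names are assumptions.
def create_invoice(client): return f"invoice sent to {client}"
def send_onboarding_form(client): return f"onboarding form sent to {client}"
def create_drive_folder(client): return f"folder created: clients/{client}"
def brief_team(client): return f"team briefing posted for {client}"

ONBOARDING_STEPS = [create_invoice, send_onboarding_form,
                    create_drive_folder, brief_team]

def onboard(client: str) -> list[str]:
    """Run every onboarding step in order, collecting an audit log."""
    return [step(client) for step in ONBOARDING_STEPS]

log = onboard("Acme Co")
print("\n".join(log))
```

In practice each stub would be a webhook or integration call, but the value is the same: signing a client triggers the whole sequence, and the log tells you what ran.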


What is AI fuzzing? And what tools, threats and challenges generative AI brings

The way traditional fuzzing works is you generate a lot of different inputs to an application in an attempt to crash it. Since every application accepts inputs in different ways, that requires a lot of manual setup. Security testers would then run these tests against their companies’ software and systems to see where they might fail. ... Today, generative artificial intelligence has the potential to automate this previously manual process, coming up with more intelligent tests, and allowing more companies to do more testing of their systems. ... But there’s a third angle involved here. What if, instead of trying to break traditional software, the target was an AI-powered system? This creates unique challenges because AI chatbots are not predictable and can respond differently to the same input at different times. ... AI fuzzing can also help speed up the discovery of vulnerabilities, Roy says. “Traditionally, testing was always a function of how many days and weeks you had to test the system, and how many testers you could throw at the testing,” he says. “With AI, we can expand the scale of the testing.” ... Another use of AI in fuzzing is that it takes more than a set of test cases to fully test an application — you also need a mechanism, a harness, to feed the test cases into the app, and in all the nooks and crannies of the application. “If the fuzzing harness does not have good coverage, then you may not uncover vulnerabilities through your fuzzing,” says Dane Sherrets, staff innovations architect for emerging technologies at HackerOne.
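The traditional approach described at the top, random inputs plus a harness that distinguishes graceful rejections from crashes, fits in a few lines. A minimal sketch with a deliberately buggy toy target (the parser and its bug are invented for illustration):

```python
# Minimal "dumb" fuzzing sketch: random inputs against a toy parser,
# with a harness that treats ValueError as a graceful rejection and
# anything else as a crash. The target and its bug are illustrative.
import random

def parse_record(data: bytes) -> dict:
    """Toy target: expects 1 length byte, then that many payload bytes.
    Deliberate bug: indexing data[0] raises IndexError on empty input."""
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")  # expected rejection
    return {"length": length, "payload": payload}

def fuzz(target, iterations=1000, seed=0):
    """Throw random byte strings at the target and record crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        size = rng.randint(0, 8)
        data = bytes(rng.randrange(256) for _ in range(size))
        try:
            target(data)
        except ValueError:
            pass  # graceful rejection, not a bug
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

The "manual setup" the article mentions lives in `fuzz` and in how inputs are shaped for each target; the promise of AI fuzzing is generating smarter inputs and harnesses than `random.Random` ever will, reaching the "nooks and crannies" random bytes rarely hit.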


CISOs flag gaps in third-party risk management

CISOs rank third-party cyber risk among their highest-impact threats. Vendor relationships touch nearly every core business function, from cloud infrastructure and software development to data processing and AI services. Each added dependency expands the attack surface and increases the number of organizations involved in protecting sensitive systems and data. ... Only a small portion of organizations report visibility across third-, fourth-, and nth-party relationships. Most operate with partial insight limited to direct vendors or a narrow segment of the extended supply chain. CISOs say limited visibility complicates incident response, risk prioritization, and compliance planning. When a breach emerges several layers removed from a known vendor, security teams may struggle to understand exposure, timelines, and downstream impact. ... CISOs report rising regulatory scrutiny tied to third-party cyber risk. Regulatory frameworks place greater expectations on organizations to demonstrate oversight across vendor ecosystems, including indirect relationships. Only a minority of organizations feel ready to meet upcoming requirements without major changes. Most report progress underway, with further work needed to align processes, tooling, and internal coordination. Third-party risk management involves legal, procurement, compliance, and executive leadership alongside security teams. ... At the same time, AI adoption accelerates within vendor risk management itself. 


Anti-fragility – what is it and why should it be the goal for your organisation?

That ability to thrive in the face of disruption must become the basis for improved resilience. Modern organisations shouldn’t strive for survival, but for continual improvement. In the cyber sphere, that is crucial. Threat actors are constantly changing tack, targeting new CVEs, and executing increasingly complicated supply chain attacks. Resilience must therefore move in tandem as an ongoing process of learning and adapting. That is the crux of anti-fragility. It defines systems that thrive and improve from stress, volatility, disorder and shocks, rather than just resisting them. If a security model is only designed to recover, it remains just as vulnerable as before. But an anti-fragile approach actively benefits from each attack, identifying weaknesses, addressing them, and adapting as needed. ... Increasingly, organisations are recognising the value in anti-fragility as a strategy and more will adopt it next year. However, getting there means going beyond regulatory compliance. Compliance lays the foundations from which successful cybersecurity can be built, yet many currently see it as the finished structure. There are several problems with that. Security legislation frequently lags behind the threat landscape, and so the gap between a new threat emerging and a new law coming in to address it can stretch over the course of years. Organisations must therefore understand that compliance doesn’t equal protection.