Daily Tech Digest - January 21, 2026


Quote for the day:

"People ask the difference between a leader and a boss. The leader works in the open, and the boss in covert." -- Theodore Roosevelt



Why the future of security starts with who, not where

Traditional security assumed one thing: “If someone is inside the network, they can be trusted.” That assumption worked when offices were closed environments and systems lived behind a single controlled gateway. But as Microsoft highlights in its Digital Defense Report, attackers have moved almost entirely toward identity-based attacks because stealing credentials offers far more access than exploiting firewalls. In other words, attackers stopped trying to break in. They simply started logging in. ... Zero trust isn’t about paranoia. It’s about verification. Never trust, always verify only works if identity sits at the center of every access decision. That’s why CISA’s zero trust maturity model outlines identity as the foundation on which all other zero trust pillars rest — including network segmentation, data security, device posture and automation. ... When identity becomes the perimeter, it can’t be an afterthought. It needs to be treated like core infrastructure. ... Organizations that invest in strong identity foundations won’t just improve security — they’ll improve operations, compliance, resilience and trust. Because when identity is solid, everything else becomes clearer: who can access what, who is responsible for what and where risk actually lives. The companies that struggle will be the ones trying to secure a world that no longer exists — a perimeter that disappeared years ago.


Designing Consent Under India's DPDP Act: Why UX Is Now A Legal Compliance

The request for consent must be either accompanied by or preceded by a notice. The notice must specifically contain three things: personal data and purpose for which it is being collected; the manner in which he or she may withdraw consent or make grievance; and the manner in which the complaint may be made to the board. ... “Free” consent also requires interfaces to avoid deceptive nudges or coercive UI design. Consider a consent banner implemented with a large “Accept All” button as the primary call-to-action button while the “Reject” option is kept hidden behind a secondary link that opens multiple additional screens. This creates an asymmetric interaction cost where acceptance requires a single click and refusal demands several steps. If consent is obtained through such interface, it cannot be regarded as voluntary or valid. ... A defensible consent record must capture the full interaction such as which notice version was shown, what purposes were disclosed, language of the notice and the action of the user (click, toggle, checkbox). The standard operational logs might be disposed after 30 or 90 days but the consent logs cannot follow the same cycle. Section 6(10) implicitly states that consent records must be retained as long as the data is being processed for the purposes shown in the notice. If the personal data was collected in 2024 and is still being processed in 2028, the Fiduciary must produce the 2024 consent logs as evidence.


The AI Skills Gap Is Not What Companies Think It Is

Employers often say they cannot find enough AI engineers or people with deep model expertise to keep pace with AI adoption. We can see that in job descriptions. Many blend responsibilities across model development, data engineering, analytics, and production deployment into a single role. These positions are meant to accelerate progress by reducing handoffs and simplifying ownership. And in an ideal world, the workforce would be ready for this. ... So when companies say they are struggling to fill the AI skills gap, what they are often missing is not raw technical ability. They are missing people who can operate inside imperfect environments and still move AI work forward. Most organizations do not need more model builders. ... For professionals trying to position themselves, the signal is similar. Career advantage increasingly comes from showing end-to-end exposure, not mastery of every AI tool. Experience with data pipelines, deployment constraints, and being able to monitor systems matter. Being good at stakeholder communication remains an important skill. The AI skills gap is not a shortage of talent. It is a shortage of alignment between what companies need and what they are actually hiring for. It’s also an opportunity for companies to understand what it really means, and finally close the gap. Professionals can also capitalize on this opportunity by demonstrating end-to-end, applied AI experience.


DevOps Didn’t Fail — We Just Finally Gave it the Tools it Deserved

Ask an Ops person what DevOps success looks like, and you’ll hear something very close to what Charity is advocating: Developers who care deeply about reliability, performance, and behavior in production. Ask security teams and you’ll get a different answer. For them, success is when everyone shares responsibility for security, when “shift left” actually shifts something besides PowerPoint slides. Ask developers, and many will tell you DevOps succeeded when it removed friction. When it let them automate the non-coding work so they could, you know, actually write code. Platform engineers will talk about internal developer platforms, golden paths, and guardrails that let teams move faster without blowing themselves up. SREs, data scientists, and release engineers all bring their own definitions to the table. That’s not a bug in DevOps. That’s the thing. DevOps has always been slippery. It resists clean definitions. It refuses to sit still long enough for a standards body to nail it down. At its core, DevOps was never about a single outcome. It was about breaking down silos, increasing communication, and getting more people aligned around delivering value. Success, in that sense, was always going to be plural, not singular. Charity is absolutely right about one thing that sits at the heart of her argument: Feedback loops matter. If developers don’t see what happens to their code in the wild, they can’t get better at building resilient systems. 


The sovereign algorithm – India’s DPDP act and the trilemma of innovation, rights, and sovereignty

At its core, the DPDP Act functions as a sophisticated product of governance engineering. Its architecture is a deliberate departure from punitive, post facto regulation towards a proactive, principles based model designed to shape behavior and technological design from the ground up. Foundational principles such as purpose limitation, data minimization, and storage restriction are embedded as mandatory design constraints, compelling a fundamental rethink of how digital services are conceived and built. ... The true test of this legislative architecture will be its performance in the real world, measured across a matrix of tangible and intangible metrics that will determine its ultimate success or failure. The initial eighteen month grace period for most rules constitutes a critical nationwide integration phase, a live stress test of the framework’s viability and the ecosystem’s adaptability. ... Geopolitically, the framework positions India as a normative leader for the developing world. It articulates a distinct third path between the United States’ predominantly market oriented approach and China’s model of state controlled cyber sovereignty. India’s alternative, which embeds individual rights within a democratic structure while reserving state authority for defined public interests, presents a compelling model for nations across the Global South navigating their own digital transitions.


Everyone Knows How to Model. So Why Doesn’t Anything Get Modeled?

One of the main reasons modeling feels difficult is not lack of competence, but lack of shared direction. There is no common understanding of what should be modeled, how it should be modeled, or for what purpose. In other words, there is no shared content framework or clear work plan. When it is missing, everyone defaults to their own perspective and experience. ... From the outside, it looks like architecture work is happening. In reality, there is discussion, theorizing, and a growing set of scattered diagrams, but little that forms a coherent, usable whole. At that point, modeling starts to feel heavy—not because it is technically difficult, but because the work lacks direction, a shared way of describing things, and clear boundaries. ... To be fair, tools do matter. A bad or poorly introduced tool can make modeling unnecessarily painful. An overly heavy tool kills motivation; one that is too lightweight does not support managing complexity. And if the tool rollout was left half-done, it is no surprise the work feels clumsy. At the same time, a good tool only enables better modeling—it does not automatically create it. The right tool can lower the threshold for producing and maintaining content, make relationships easier to see, and support reuse. ... Most architecture initiatives don’t fail because modeling is hard. They fail because no one has clearly decided what the modeling is for. ... These are not technical modeling problems. They are leadership and operating-model problems. 


ChatGPT Health Raises Big Security, Safety Concerns

ChatGPT Health's announcement touches on how conversations and files in ChatGPT as a whole are "encrypted by default at rest and in transit" and that there are some data controls such as multifactor authentication, but the specifics on how exactly health data will be protected on a technical and regulatory level was not clear. However, the announcement specifies that OpenAI partners with network health data firm b.well to enable access to medical records. ... While many security tentpoles remain in place, healthcare data must be held to the highest possible standard. It does not appear that ChatGPT Health conversations are end-to-end encrypted. Regulatory consumer protections are also unclear. Dark Reading asked OpenAI whether ChatGPT Health had to adhere to any HIPAA or regulatory protections for the consumer beyond OpenAI's own policies, and the spokesperson mentioned the coinciding announcement of OpenAI for Healthcare, which is OpenAI's product for healthcare organizations which do need to meet HIPAA requirements. ... even with privacy protections and promises, data breaches will happen and companies will generally comply with legal processes such as subpoenas and warrants as they come up. "If you give your data to any third party, you are inevitably giving up some control over it and people should be extremely cautious about doing that when it's their personal health information," she says.


From static workflows to intelligent automation: Architecting the self-driving enterprise

We often assume fragility only applies to bad code, but it also applies to our dependencies. Even the vanguard of the industry isn’t immune. In September 2024, OpenAI’s official newsroom account on X (formerly Twitter) was hijacked by scammers promoting a crypto token. Think about the irony: The company building the most sophisticated intelligence in human history was momentarily compromised not by a failure of their neural networks, but by the fragility of a third-party platform. This is the fragility tax in action. When you build your enterprise on deterministic connections to external platforms you don’t control, you inherit their vulnerabilities. ... Whenever we present this self-driving enterprise concept to clients, the immediate reaction is “You want an LLM to talk to our customers?” This is a valid fear. But the answer isn’t to ban AI; it is to architect confidence-based routing. We don’t hand over the keys blindly. We build governance directly into the code. In this pattern, the AI assesses its own confidence level before acting. This brings us back to the importance of verification. Why do we need humans in the loop? Because trusted endpoints don’t always stay trusted. Revisiting the security incident I mentioned earlier: If you had a fully autonomous sentient loop that automatically acted upon every post from a verified partner account, your enterprise would be at risk. A deterministic bot says: Signal comes from a trusted source -> execute. 


AI is rewriting the sustainability playbook

At first, greenops was mostly finops with a greener badge. Reduce waste, right-size instances, shut down idle resources, clean up zombie storage, and optimize data transfer. Those actions absolutely help, and many teams delivered real improvements by making energy and emissions a visible part of engineering decisions. ... Greenops was designed for incremental efficiency in a world where optimization could keep pace with growth. AI breaks that assumption. You can right-size your cloud instances all day long, but if your AI footprint grows by an order of magnitude, efficiency gains get swallowed by volume. It’s the classic rebound effect: When something (AI) becomes easier and more valuable, we do more of it, and total consumption climbs. ... Enterprises are simultaneously declaring sustainability leadership while budgeting for dramatically more compute, storage, networking, and always-on AI services. They tell stakeholders, “We’re reducing our footprint,” while telling internal teams, “Instrument everything, vectorize everything, add copilots everywhere, train custom models, and don’t fall behind.” This is hypocrisy and a governance failure. ... Greenops isn’t dead, but it is being stress-tested by a wave of AI demand that was not part of its original playbook. Optimization alone won’t save you if your consumption curve is vertical. Rather than treat greenness as just a brand attribute, enterprises that succeed will recognize greenops as an engineering and governance discipline, especially for AI


Your AI strategy is just another form of technical debt

Modern software development has become riddled with indeterminable processes and long development chains. AI should be able to fix this problem, but it’s not actually doing so. Instead, chances are your current AI strategy is saddling your organisation with even more technical debt. The problem is fairly straightforward. As software development matures, longer and longer chains are being created from when a piece of software is envisioned until it’s delivered. Some of this is due to poor management practices, and some of it is unavoidable as programs become more complex. ... These tools can’t talk to each other, though; after all, they have just one purpose, and talking isn’t one of them. The results of all this, from the perspective of maintaining a coherent value chain, are pretty grim. Results are no longer predictable. Worse yet, they are not testable or reproducible. It’s just a set of random work. Coherence is missing, and lots of ends are left dangling. ... If this wasn’t bad enough, using all these different, single-purpose tools adds another problem, namely that you’re fragmenting all your data. Because these tools don’t talk to each other, you’re putting all the things your organisation knows into near-impenetrable silos. This further weakens your value chain as your workers, human and especially AI, need that data to function. ... Bolting AI onto existing systems won’t work. AIs aren’t human, and you can’t replace them one for one, or even five for one. It doesn’t work. 

Daily Tech Digest - January 20, 2026


Quote for the day:

"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox



The culture you can’t see is running your security operations

Non-observable culture is everything happening inside people’s heads. Their beliefs about cyber risk. Their attitudes toward security. Their values and priorities when security conflicts with convenience or speed. This is where the real decisions get made. You can’t see someone’s belief that “we’re too small to be targeted” or “security is IT’s job, not mine.” You can’t measure their assumption that compliance equals security. You can’t audit their gut feeling that reporting a mistake will hurt their career. But these invisible forces shape every security decision your people make. Non-observable culture includes beliefs about the likelihood and severity of threats. It includes how people weigh security against productivity. It includes their trust in leadership and their willingness to admit mistakes. It includes all the cognitive biases that distort risk perception. ... Implicit culture is the stuff nobody talks about because nobody even realizes it’s there. The unspoken assumptions. The invisible norms. The “way things are done here” that everyone knows but nobody questions. This is the most powerful layer because it operates below conscious awareness. People don’t choose to follow implicit norms. They do. Automatically. Without thinking. Implicit culture includes unspoken beliefs like “security slows us down” or “leadership doesn’t really care about this.” It contains hidden power dynamics that determine who can challenge security decisions and who can’t.


The top 6 project management mistakes — and what to do instead

Project managers are trained to solve project problems. Scope creep. Missed deadlines. Resource bottlenecks. ... Start by helping your teams understand the business context behind the work. What problem are we trying to solve? Why does this project matter to the organization? What outcome are we aiming for? Your teams can’t answer those questions unless you bring them into the strategy conversation. When they understand the business goals, not just the project goals, they can start making decisions differently. Their conversations change to ensure everyone knows why their work matters. ... Right from the start of the project, you need to define not just the business goal but how you’ll measure it was successful in business terms. Did the project reduce cost, increase revenue, improve the customer experience? That’s what you and your peers care about, but often that’s not the focus you ask the project people to drive toward. ... People don’t resist because they’re lazy or difficult. They resist because they don’t understand why it’s happening or what it means for them. And no amount of process will fix that. With an accelerated delivery plan designed to drive business value, your project teams can now turn their attention to bringing people with them through the change process. ... To keep people engaged in the project and help it keep accelerating toward business goals, you need purpose-driven communication designed to drive actions and decisions. 


AI has static identity verification in its crosshairs. Now what?

Identity models based on “joiner–mover–leaver” workflows and static permission assignments cannot keep pace with the fluid and temporary nature of AI agents. These systems assume identities are created carefully, permissions are assigned deliberately, and changes rarely happen. AI changes all of that. An agent can be created, perform sensitive tasks, and terminate within seconds. If your verification model only checks identity at login, you’re leaving the entire session vulnerable. ... Securing AI-driven enterprises requires a shift similar to what we saw in the move from traditional firewalls to zero-trust architectures. We didn’t eliminate networks; we elevated policy and verification to operate continuously at runtime. Identity verification for AI must follow the same path. This means building a system that can: Assign verifiable identities to every human and machine actor; Evaluate permissions dynamically based on context and intent; Enforce least privilege at high velocity; Verify actions, not just entry points; ... This is why frameworks like SPIFFE and modern workload identity systems are receiving so much attention. They treat identity as a short-lived, cryptographically verifiable construct that can be created, used, and retired in seconds, exactly the model AI agents require. Human activity is becoming the minority as autonomous systems that can act faster than we can are being spun up and terminated before governance can keep up. That’s why identity verification must shift from a checkpoint to a real-time trust engine that evaluates every action from every actor, human or AI.


AWS European cloud service launch raises questions over sovereignty

AWS established a new legal entity to operate the European Sovereign Cloud under a separate governance and operational model. The new company is incorporated in Germany and run exclusively by EU residents, AWS said. ... “This is the elephant in the room,” said Rene Buest, senior director analyst at Gartner. There are two main concerns regarding the operation of AWS’s European Sovereign Cloud for businesses in Europe. The first relates to the 2018 US Cloud Act, which could require AWS to disclose customer data stored in Europe to the United States, if requested by US authorities. The second involves the possibility of US government sanctions: If a business that uses AWS services is subject to such sanctions, AWS may be compelled to block that company’s access to its cloud services, even if its data and operations are based in Europe. ... It’s an open question at this stage, said Dario Maisto, senior analyst at Forrester. “Cases will have to be tested in court before we can have a definite answer,” he said. “The legal ownership does matter, and this is one of the points that may not be addressed by the current setup of the AWS sovereign cloud.” AWS’s European Sovereign Cloud represents one of several ways that European business can approach the challenge of digital sovereignty. Gartner identifies a spectrum that ranges from global hyperscaler public cloud services through to regional cloud services that are based on non-hyperscaler technology. 


Why peripheral automation is the missing link in end-to-end digital transformation?

While organisations have successfully modernized their digital cores, the “last mile” of business operations often remains fragmented, manual, and surprisingly analogue. This gap is why Peripheral Automation is emerging not merely as a tactical correction but as the critical missing link in achieving true, end-to-end digital transformation. ... Peripheral Automation offers a strategic resolution to this paradox. It’s an architectural philosophy that advocates “differential innovation.” Rather than disrupting stable cores to accommodate fleeting business needs, organisations build agile, tailored applications and workflows that sit on top of the core systems. This approach treats the enterprise as a layered ecosystem. The core remains the single source of truth, but the periphery becomes the “system of engagement”. By leveraging modern low-code platforms and composable architecture, leaders can deploy lightweight, purpose-built automation tools that address specific friction points without altering the underlying infrastructure. ... Peripheral automation reduces process latency, manual effort, and rework. By addressing specific pain points rather than attempting broad, multi-year system redesigns, companies unlock measurable efficiency in weeks. This precision improves throughput, reduces cycle times, and frees teams to focus on high-value work.


How does agentic ops transform IT troubleshooting?

AI Canvas introduces a fundamentally different user experience for network troubleshooting. Rather than navigating through multiple dashboards and CLI interfaces, engineers interact with a dynamic canvas that populates with relevant widgets as troubleshooting progresses. You could say that the ‘canvas’ part of the name AI Canvas is the most important part of it. That is, AI Canvas is actually a blank canvas every time you start troubleshooting. It fills the canvas with boxes and on the fly widgets, among other things, during the troubleshooting. Sampath confirms this: “When you ask a question, it’s using and picking the right types of tools that it can go and execute on a specific task and calls agents to be able to effectively take a task to completion and returns a response back.” The system can spin up monitoring agents that continuously provide updated information, creating a living troubleshooting environment rather than static reports. ... AI Canvas doesn’t exist in isolation. It builds on Cisco’s existing automation foundation. The company previously launched Workflows, a no-code network automation engine, and AI assistants with specific skills for network operations. “All of the automations that are already baked into the workflows, the skills that were built inside of the assistants, now manifest themselves inside of the canvas,” Sampath details. This creates a continuum from deterministic workflows to semi-autonomous assistants to fully autonomous agentic operations.


UK government launches industry 'ambassadors' scheme to champion software security improvements

"By acting as ambassadors, signatories are committing to a process of transparency, development and continuous improvement. The implementation of this code of practice will take time and, in doing so, may bring to light issues that need to be addressed," DSIT said in a statement confirming the announcement. "Signatories and policymakers will learn from these issues as well as the successes and challenges for each organization and, where appropriate, will share information to help develop and strengthen this government policy." ... The Software Security Code of Practice was unveiled by the NCSC in May last year, setting out a series of voluntary principles defining what good software security looks like across the entire software lifecycle. Aimed at technology providers and organizations that develop, sell, or procure software, the code offers best practices for secure design and development, build-environment security, and secure deployment and maintenance. The code also emphasizes the importance of transparent communication with customers on potential security risks and vulnerabilities. ... “The code moves software security beyond narrow compliance and elevates it to a board-level resilience priority. As supply chain attacks continue to grow in scale and impact, a shared baseline is essential and through our global community and expertise, ISC2 is committed to helping professionals build the skills needed to put secure-by-design principles into practice.”


Privacy teams feel the strain as AI, breaches, and budgets collide

Where boards prioritize privacy, AI use appears more frequently and follows defined direction. Larger enterprises, particularly those with broader risk and compliance functions, also report higher uptake. In smaller organizations, or those where privacy has limited visibility at the leadership level, AI adoption remains tentative. Teams that apply privacy principles throughout system development report higher use of AI for privacy tasks. In these environments, AI supports ongoing work rather than introducing new approaches. ... Respondents working in organizations where privacy has active board backing report more consistent use of privacy by design. Budget stability shows a similar pattern, with better-funded teams reporting stronger integration of privacy into design and engineering work. The study also shows that privacy by design on its own does not stop breaches. Organizations that experienced breaches report similar levels of design practice as those that did not. The data places privacy by design mainly in a governance and compliance role, with limited connection to incident prevention. ... Governance shapes how teams view that risk. Professionals in organizations where privacy lacks board priority report higher expectations of a breach in the coming year. Gaps between privacy strategy and broader business goals also appear alongside higher breach expectations, suggesting that structural alignment influences outlook as much as technical controls. Confidence remains common, even among organizations that have experienced breaches.


Cyber Insights 2026: Information Sharing

The sheer volume of cyber threat intelligence being generated today is overwhelming. “Information sharing channels often help condense inputs and highlight genuine signals amid industry noise,” says Caitlin Condon, VP of security research at VulnCheck. “The very nature of cyber threat intelligence demands validation, context, and comparison. Information sharing allows cybersecurity professionals to more rigorously assess rising threats, identify new trends and deviations, and develop technically comprehensive guidance.” ... “The importance of the Cybersecurity Information Sharing Act of 2015 for U.S. national security cannot be overstated,” says Crystal Morin, cybersecurity strategist at Sysdig. “Without legal protections, many legal departments would advise security teams to pull back from sharing threat intelligence, resulting in slower, more cautious processes. ...” CISOs have developed their own closed communities where they can discuss current incidents with other CISOs. This is done via channels such as Slack, WhatsApp and Signal. Security of the channels is a concern, but who better than multiple CISOs to monitor and control security? ... “Much of today’s threat intelligence remains reactive, driven by short-lived IoCs that do little to help agencies anticipate or disrupt cyberattacks,” comments BeyondTrust’s Greene. “We need to modernize our information-sharing framework to emphasize behavior-based analytics enriched with identity-centric context,” he continues.


Edge AI: The future of AI inference is smarter local compute

The bump in edge AI goes hand in hand with a broader shift in focus from AI training, the act of preparing machine learning (ML) models with the right data, to inference, the practice of actively using models to apply knowledge or make predictions in production. “Advancements in powerful, energy-efficient AI processors and the proliferation of IoT (internet of things) devices are also fueling this trend, enabling complex AI models to run directly on edge devices,” says Sumeet Agrawal ... “The primary driver behind the edge AI boom is the critical need for real-time data processing,” says David. The ability to analyze data on the edge, rather than using centralized cloud-based AI workloads, helps direct immediate decisions at the source. Others agree. “Interest in edge AI is experiencing massive growth,” says Informatica’s Agrawal. For him, reduced latency is a key factor, especially in industrial or automotive settings where split-second decisions are critical. There is also the desire to feed ML models personal or proprietary context without sending such data to the cloud. “Privacy is one powerful driver,” says Johann Schleier-Smith ... A smaller footprint for local AI is helpful for edge devices, where resources like processing capacity and bandwidth are constrained. As such, techniques to optimize SLMs will be a key area to aid AI on the edge. One strategy is quantization, a model compression technique that reduces model size and processing requirements. 

Daily Tech Digest - January 19, 2026


Quote for the day:

"Stop Judging people and start understanding people everyone's got a story" -- @PilotSpeaker



Stop calling it 'The AI bubble': It's actually multiple bubbles, each with a different expiration date

The AI ecosystem is actually three distinct layers, each with different economics, defensibility and risk profiles. Understanding these layers is critical, because they won't all pop at once. ... The most vulnerable segment isn't building AI — it's repackaging it. These are the companies that take OpenAI's API, add a slick interface and some prompt engineering, then charge $49/month for what amounts to a glorified ChatGPT wrapper. Some have achieved rapid initial success, like Jasper.ai, which reached approximately $42 million in annual recurring revenue (ARR) in its first year by wrapping GPT models in a user-friendly interface for marketers. But the cracks are already showing. ... Economic researcher Richard Bernstein points to OpenAI as an example of the bubble dynamic, noting that the company has made around $1 trillion in AI deals, including a $500 billion data center buildout project, despite being set to generate only $13 billion in revenue. The divergence between investment and plausible earnings "certainly looks bubbly," Bernstein notes. ... But infrastructure has a critical characteristic: It retains value regardless of which specific applications succeed. The fiber optic cables laid during the dot-com bubble weren’t wasted — they enabled YouTube, Netflix and cloud computing. Twenty-five years ago, the original dot-com bubble burst after debt financing built out fiber-optic cables for a future that had not yet arrived, but that future eventually did arrive, and the infrastructure was there waiting.


Modernizing Network Defense: From Firewalls to Microsegmentation

For many years, network security has been based on the concept of a perimeter defense, likened to a fortified boundary. The network perimeter functioned as a protective barrier, with a firewall serving as the main point of access control. Individuals and devices within this secured perimeter were considered trustworthy, while those outside were viewed as potential threats. The "perimeter-centric" approach was highly effective when data, applications, and employees were all located within the physical boundaries of corporate headquarters. In the current environment, however, this model is considered not only obsolete but also poses significant risks. ... Microsegmentation significantly mitigates the impact of cyberattacks by transitioning from traditional perimeter-based security to detailed, policy-driven isolation at the level of individual workloads, applications, or containers. By establishing secure enclaves for each asset, it ensures that if a device is compromised, attackers are unable to traverse laterally to other systems. ... Microsegmentation solutions offer detailed insights into application dependencies and inter-server traffic flows, uncovering long-standing technical debt such as unplanned connections, outdated protocols, and potentially risky activities that may not be visible to perimeter-based defenses. ... One significant factor deterring organizations from implementing microsegmentation is the concern regarding increased complexity. 


Human-in-the-loop has hit the wall. It’s time for AI to oversee AI

This is not a hypothetical future problem. Human-centric oversight is already failing in production. When automated systems malfunction — flash crashes in financial markets, runaway digital advertising spend, automated account lockouts or viral content — failure cascades before humans even realize something went wrong. In many cases, humans were “in the loop,” but the loop was too slow, too fragmented or too late. The uncomfortable reality is that human review does not stop machine-speed failures. At best, it explains them after the damage is done. Agentic systems raise the stakes dramatically. Visualizing a multistep agent workflow with tens or hundreds of nodes often results in dense, miles-long action traces that humans cannot realistically interpret. As a result, manually identifying risks, behavior drift or unintended consequences becomes functionally impossible. ... Delegating monitoring tasks to AI does not eliminate human accountability. It redistributes it. This is where trust often breaks down. Critics worry that AI governing AI is like trusting the police to govern themselves. That analogy only holds if oversight is self-referential and opaque. The model that works is layered, with a clear separation of powers. ... Humans shift from reviewing outputs to designing systems. They focus on setting operating standards and policies, defining objectives and constraints, designing escalation paths and failure modes, and owning outcomes when systems fail.


Building leaders in the age of AI

The leaders who end up thriving in the AI era will be those who blend human depth with digital fluency. They will use AI to think with them, not for them. And they will treat this AI moment not as a threat to their leadership but as an opportunity to focus on those elements of their portfolios that only humans can excel at. ... Leaders will need to give teams a set of guardrails (clear values and decision rights) and establish new definitions of quality while fostering a sense of trust and collaboration as new challenges emerge and business conditions evolve. ... Aspiration, judgment, and creativity are “only human” leadership traits—and the characteristics that can provide an irreplaceable competitive edge, especially when amplified using AI. It’s therefore incumbent upon organizations to actively identify and develop the individuals who demonstrate critical intrinsics like resilience, eagerness to learn from mistakes, and the ability to work in teams that will increasingly include both humans and AI agents. ... Organizations must actively cultivate core leadership qualities such as wisdom, empathy, and trust—and they must give the development of these attributes the same attention they do to the development of new IT systems or operating models. That will mean providing time for leaders to do the inner work required to lead others effectively—that is, reflecting, sharing insights with other C-suite leaders, and otherwise considering what success will mean for themselves and the organization.


The Rising Phoenix of Software Engineering

Software is undergoing a tectonic transformation. Modern applications are no longer hand-crafted from scratch. They are assembled from third-party components, APIs, open-source packages, machine-learning models, and now AI-generated snippets. Artificial intelligence, low-code tools, Open-Source Software (OSS), reusable libraries have made the act of writing new code less central to building software than ever before. ... In this new era, the primary challenge is not about builder software faster, cheaper, or more feature-rich. It is how to engineer software safely and predictably in a hostile ecosystem. ... Software engineering, as a discipline, must rise again — not as a metaphor for resilience, but as a mandate for survival. ... The future does not eliminate developers or coders. Assembling, customizing, and scripting third-party components will remain critical. But the accountability layer must shift upward, to professionals trained to reason about system safety, dependencies, and security by design. In other words, software engineers must reemerge as true engineers responsible for understanding not only how their code works, but how and where it runs… and most critically how to secure it. ... To engineer software responsibly, practitioners must model threats, evaluate anti-tamper capabilities, and verify that each dependency meets a baseline of assurance. These tasks were historically reserved for penetration testers or quality assurance (QA) teams. 


The concerning cyber-physical security disconnect

The background of many physical security professionals is in military and law enforcement, which change much slower, but are known for extensive training. The nature of the threats they need to defend against is evolving at a slower pace, and destructive, kinetic threats remain a primary concern. ... The focus of cybersecurity is much more on the insides of an organization. Detection is supposed to catch attackers lurking on compromised devices. Response activities have to consider the entire infrastructure rather than individual hosts. Security measures are spread out across the network, taking a defense-in-depth approach. Physical security is much more outward looking, trying to prevent threats from entering. Detection systems exist within premises, but focus on the outer layers. Response activities are focused on evicting individual threats or denying their access. The majority of security efforts focuses on the perimeter. ... Companies often handle both topics in different teams. Conferences and publications may feature both topics, but often focus on one and rarely address their interdependence. Security assessments like pentests and red team exercises sometimes include a physical component that tends to focus on social engineering without involving deep physical security expertise. ... Risks, especially in the form of human threat actors, will always look for the easiest way to materialize. Therefore, they will attack physical assets via their digital components and vice versa, if these flanks are not protected.


Architecting Agility: Decoupled Banking Systems Will Be the Key to Success in 2026

The banking industry is undergoing an evolutionary and market-driven shift. Digital banking systems, once rigid and monolithic, are being reimagined through decoupled architecture, AI-driven intelligence, programmatic technology consumption, and fintech innovation and partnerships. ... Delay is no longer an option — the future of banking is already being built today. To capitalize on these innovations, tech leaders must prioritize digital core banking agility, ensuring integration with new innovations and adapting to evolving market demands. ... Identify suspicious patterns in real time. As illustrated in the figure, a decoupled risk analytics gateway and prompt engine streamlines regulatory reporting and ensures adherence to evolving rules (regtech). Whitney Morgan, vice president at Skaleet, a fintech provider, states that generative AI takes this a notch further by automating regulatory reporting and accelerating product development. ... AI-enabled risk management empowers banks to detect anomalies across large translation datasets with the speed and accuracy that manual processes can’t match. Risk modeling and stress testing will enhance credit risk scoring, market risk simulations, and scenario analysis that drive preemptive and revenue options. ... The banking and financial services innovation race, with challenges in adoption and capturing market advantages, beckons leaders to be nimble and, at the same time, stay focused on the fundamentals. CIOs, CTOs, and other tech leaders can take proactive steps to strike the right balance.


Key Management Testing: The Most Overlooked Pillar of Crypto Security

The majority of security testing in crypto projects focuses on code correctness or operational attacks. Key management, however, is mainly considered a procedural issue rather than a technical problem. This is a dangerous false belief. Entropy sources, hardware integrity, and cryptographic integrity are key to generating. Ineffective randomness, broken device software or a corrupted environment may lead to keys that seem valid but are appallingly weak to attack. The testing mechanisms used to create new wallet addresses for users must be watertight when an exchange generates millions of new addresses. Testing should also be done on key storage. ... The recovery process is one of the most vulnerable areas of key management, yet it is discussed least. Backup and restoration are prone to human error, improperly configured storage, or unsafe transmission. The unfortunate fact about crypto is that recovery mechanisms can be either a saviour or a disaster. Recovery phrases, encrypted backups, and distributed shares need to be repeatedly tested in a real-world, adversarial environment. ... End-to-end lifecycle testing, automatic verification of key states, automated attack simulations and automated recovery protocols that self-heal will be the order of the day. The industry has already become such that key management is no longer a concealed or even supporting part of the security strategies. 


Inside the Chip: How Hardware Root of Trust Shifts the Odds Back to Cyber Defenders

Defenders often lack direct control or visibility into the hardware layer where workloads actually execute. This abstraction can obscure low-level threats, allowing attackers to manipulate telemetry, disable software protections, or persist beyond reboots. Crucially, modern attacks are not brute force attempts to break encryption or overwhelm defences. They exploit the assumptions built into how systems start, update, and prove what’s genuine. ... At the centre of this shift is Hardware Root of Trust (HRoT): a security architecture that embeds trust directly into the hardware layer of a device. US National Institute of Standards and Technology (NIST) defines it as “an inherently trusted combination of hardware and firmware that maintains the integrity of information.” In practice, HRoT serves as the anchor for system trust from the moment power is applied. ... For CISOs, HRoT represents an opportunity to strengthen resilience, meet regulatory demands, and finally realise true zero trust. From a resilience standpoint, it changes the balance between prevention and response. By validating integrity from power-on and continuously during operation, it reduces reliance on post-incident investigation and recovery. Compromised devices and systems are stopped early, limiting blast radius and disruption. Regulators are already reinforcing this direction. Frameworks such as the US Department of Defense’s CMMC explicitly highlight HRoT as a stronger foundation for assurance. 


What AI skills job seekers need to develop in 2026

One of the earliest AI skills involved prompt engineering — being able to get to the necessary AI-generated results by using the right questions. But that baseline skill is being pushed aside by “context engineering.” Think of context engineering as prompt engineering on steroids; it involves developing prompts that can deliver consistent and predictive answers. Ideally, “everytime you ask the same question, you always get the same answer,” said Bekir Atahan, vice president at Experis Services, a division of Manpower Group. That skill is critical because AI models are changing quickly, and the answers they spout out can differ from day to day. Context engineering is aimed at ensuring consistent outputs despite a rapidly evolving AI ecosystem. ... “Beyond algorithms and coding, the next wave of AI talent must bridge technology, governance and organizational change. The most valuable AI skill in 2026 isn’t coding, it’s building trust,” Seth said. Along those lines, he recommended that job seekers immerse themselves in the technology beyond simply taking a class. “Instead of a course, go to any conference,” Seth said. ... In hiring, genuine AI capability shows up through curiosity and real experience, Blackford said. “Strong candidates can talk honestly about something they tried, what did not work, and what they learned,” he said ... “Things are evolving at such a fast pace that there will be no perfect set of skills,” said Seth. “I would say more than skills, attitudes are more important — that adaptability to change, how quick you are to learn things.”

Daily Tech Digest - January 18, 2026


Quote for the day:

"Surround yourself with great people; delegate authority; get out of the way" -- Ronald Reagan



Data sovereignty: an existential issue for nations and enterprises

Law-making bodies have in recent years sought to regulate data flows to strengthen their citizens’ rights – for example, the EU bolstering individual citizens’ privacy through the General Data Protection Regulation (GDPR). This kind of legislation has redefined companies’ scope for storing and processing personal data. By raising the compliance bar, such measures are already reshaping C-level investment decisions around cloud strategy, AI adoption and third-party access to their corporate data. ... Faced with dynamic data sovereignty risks, enterprises have three main approaches ahead of them: First, they can take an intentional risk assessment approach. They can define a data strategy addressing urgent priorities, determining what data should go where and how it should be managed - based on key metrics such as data sensitivity, the nature of personal data, downstream impacts, and the potential for identification. Such a forward-looking approach will, however, require a clear vision and detailed planning. Alternatively, the enterprise could be more reactive and detach entirely from its non-domestic public cloud service providers. This is riskier, given the likely loss of access to innovation and, worse, the financial fallout that could undermine their pursuit of key business objectives. Lastly, leaders may choose to do nothing and hope that none of these risks directly affects them. This is the highest-risk option, leaving no protection from potentially devastating financial and reputational consequences of an ineffective data sovereignty strategy.


Verification Debt: When Generative AI Speeds Change Faster Than Proof

Software delivery has always lived with an imbalance. It is easier to change a system than to demonstrate that the change is safe under real workloads, real dependencies, and real failure modes. ... The risk is not that teams become careless. The risk is that what looks correct on the surface becomes abundant while evidence remains scarce. ... A useful name for what accumulates in the mismatch is verification debt. It is the gap between what you released and what you have demonstrated, with evidence gathered under conditions that resemble production, to be safe and resilient. Technical debt is a bet about future cost of change. Verification debt is unknown risk you are running right now. Here, verification does not mean theorem proving. It means evidence from tests, staged rollouts, security checks, and live production signals that is strong enough to block a release or trigger a rollback. It is uncertainty about runtime behavior under realistic conditions, not code cleanliness, not maintainability, and not simply missing unit tests. If you want to spot verification debt without inventing new dashboards, look at proxies you may already track. ... AI can help with parts of verification. It can suggest tests, propose edge cases, and summarize logs. It can raise verification capacity. But it cannot conjure missing intent, and it cannot replace the need to exercise the system and treat the resulting evidence as strong enough to change the release decision. Review is helpful. Review is evidence of readability and intent.


Executive-level CISO titles surge amid rising scope strain

Executive-level CISOs were more likely to report outside IT than peers with VP or director titles, according to the findings. The report frames this as part of a broader shift in how organisations place accountability for cyber risk and oversight. The findings arrive as boards and senior executives assess cyber exposure alongside other enterprise risks. The report links these expectations to the need for security leaders to engage across legal, risk, operations and other functions. ... Smaller organisations and industries with leaner security teams showed the highest levels of strain, the report says. It adds that CISOs warn these imbalances can delay strategic initiatives and push teams towards reactive security operations. The report positions this issue as a management challenge as well as a governance question. It links scope creep with wider accountability and higher expectations on security leaders, even where budgets and staffing remain constrained. ... Recruiters and employers have watched turnover trends closely as demand for senior security leadership has remained high across many sectors. The report suggests that title, scope and reporting structure form part of how CISOs evaluate roles. ... "The demand for experienced CISOs remains strong as the role continues to become more complex and more 'executive'," said Martano. "Understanding how organizations define scope, reporting structure, and leadership access and visibility is critical for CISOs planning their next move and for companies looking to hire or retain security leaders."


What’s in, and what’s out: Data management in 2026 has a new attitude

Data governance is no longer a bolt-on exercise. Platforms like Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance into the foundation itself. This shift is driven by the realization that external governance layers add friction and rarely deliver reliable end-to-end coverage. The new pattern is native automation. Data quality checks, anomaly alerts and usage monitoring run continuously in the background. ... Companies want pipelines that maintain themselves. They want fewer moving parts and fewer late-night failures caused by an overlooked script. Some organizations are even bypassing pipes altogether. Zero ETL patterns replicate data from operational systems to analytical environments instantly, eliminating the fragility that comes with nightly batch jobs. ... Traditional enterprise warehouses cannot handle unstructured data at scale and cannot deliver the real-time capabilities needed for AI. Yet the opposite extreme has failed too. The highly fragmented Modern Data Stack scattered responsibilities across too many small tools. It created governance chaos and slowed down AI readiness. Even the rigid interpretation of Data Mesh has faded. ... The idea of humans reviewing data manually is no longer realistic. Reactive cleanup costs too much and delivers too little. Passive catalogs that serve as wikis are declining. Active metadata systems that monitor data continuously are now essential.


How Algorithmic Systems Automate Inequality

The deployment of predictive analytics in public administration is usually justified by the twin pillars of austerity and accuracy. Governments and private entities argue that automated decision-making systems reduce administrative bloat while eliminating the subjectivity of human caseworkers. ... This dynamic is clearest in the digitization of the welfare state. When agencies turn to machine learning to detect fraud, they rarely begin with a blank slate, training their models on historical enforcement data. Because low-income and minority populations have historically been subject to higher rates of surveillance and policing, these datasets are saturated with selection bias. The algorithm, lacking sociopolitical context, interprets this over-representation as an objective indicator of risk, identifying correlation and deploying it as causality. ... Algorithmic discrimination, however, is diffuse and difficult to contest. A rejected job applicant or a flagged welfare recipient rarely has access to the proprietary score that disqualified them, let alone the training data or the weighting variable—they face a black box that offers a decision without a rationale. This opacity makes it nearly impossible for an individual to challenge the outcome, effectively insulating the deploying organisation from accountability. ... Algorithmic systems do not observe the world directly; they inherit their view of reality from datasets shaped by prior policy choices and enforcement practices. To assess such systems responsibly requires scrutiny of the provenance of the data on which decisions are built and the assumptions encoded in the variables selected.


DevSecOps for MLOps: Securing the Full Machine Learning Lifecycle

The term "MLSecOps" sounds like consultant-speak. I was skeptical too. But after auditing ML pipelines at eleven companies over the past eighteen months, I've concluded we need the term because we need the concept — extending DevSecOps practices across the full machine learning lifecycle in ways that account for ML-specific threats. The Cloud Security Alliance's framework is useful here. Securing ML systems means protecting "the confidentiality, integrity, availability, and traceability of data, software, and models." That last word — traceability — is where most teams fail catastrophically. In traditional software, you can trace a deployed binary back to source code, commit hash, build pipeline, and even the engineer who approved the merge. ... Securing ML data pipelines requires adopting practices that feel tedious until the day they save you. I'm talking about data validation frameworks, dataset versioning, anomaly detection at ingestion, and schema enforcement like your business depends on it — because it does. Last September, I worked with an e-commerce company deploying a recommendation model. Their data pipeline pulled from fifteen different sources — user behavior logs, inventory databases, third-party demographic data. Zero validation beyond basic type checking. We implemented Great Expectations — an open-source data validation framework — as a mandatory CI check. 


Autonomous Supply Chains: Catalyst for Building Cyber-Resilience

Autonomous supply chains are becoming essential for building resilience amid rising global disruptions. Enabled by a strong digital core, agentic architecture, AI and advanced data-driven intelligence, together with IoT and robotics, they facilitate operations that continuously learn, adapt and optimize across the value chain. ... Conventional thinking suggests that greater autonomy widens the attack surface and diminishes human oversight turning it into a security liability. However, if designed with cyber resilience at its core, autonomous supply chain can act like a “digital immune system,” becoming one of the most powerful enablers of security. ... As AI operations and autonomous supply chains scale, traditional perimeter simply won’t work. Organizations must adopt a Zero Trust security model to eliminate implicit trust at every access point. A Zero Trust model, centered on AI-driven identity and access management, ensures continuous authentication, network micro-segmentation and controlled access across users, devices and partners. By enforcing “never trust, always verify,” organizations can minimize breach impact and contain attackers from freely moving across systems, maintaining control even in highly automated environments. ... Autonomy in the supply chain thrives on data sharing and connectivity across suppliers, carriers, manufacturers, warehouses and retailers, making end-to-end visibility and governance vital for both efficiency and security. 


When enterprise edge cases become core architecture

What matters most is not the presence of any single technology, but the requirements that come with it. Data that once lived in separate systems now must be consistent and trusted. Mobile devices are no longer occasional access points but everyday gateways. Hiring workflows introduce identity and access considerations sooner than many teams planned for. As those realities stack up, decisions that once arrived late in projects are moving closer to the start. Architecture and governance stop being cleanup work and start becoming prerequisites. ... AI is no longer layered onto finished systems. Mobile is no longer treated as an edge. Hiring is no longer insulated from broader governance and security models. Each of these shifts forces organizations to think earlier about data, access, ownership and interoperability than they are used to doing. What has changed is not just ambition, but feasibility. AI can now work across dozens of disparate systems in ways that were previously unrealistic. Long-standing integration challenges are no longer theoretical problems. They are increasingly actionable -- and increasingly unavoidable. ... As a result, integration, identity and governance can no longer sit quietly in the background. These decisions shape whether AI initiatives move beyond experimentation, whether access paths remain defensible and whether risk stays contained or spreads. Organizations that already have a clear view of their data, workflows and access models will find it easier to adapt. 


Why New Enterprise Architecture Must Be Built From Steel, Not Straw

Architecture must reflect future ambition. Ideally, architects build systems with a clear view of where the product and business are heading. When a system architecture is built for the present situation, it’s likely lacking in flexibility and scalability. That said, sound strategic decisions should be informed by well-attested or well-reasoned trends, not just present needs and aspirations. ... Tech leaders should avoid overcommitting to unproven ideas—i.e., not get "caught up" in the hype. Safe experimentation frameworks (from hypothesis to conclusion) reduce risk by carefully applying best practices to testing out approaches. In a business context with something as important as the technology foundation the organization runs in, do not let anyone mischaracterize this as timidity. Critical failure is a career-limiting move, and potentially an organizational catastrophe. ... The art lies in designing systems that can absorb future shifts without constant rework. That comes from aligning technical decisions not only with what the company is today, but also what it intends to become. Future-ready architecture isn’t the comparatively steady and predictable discipline it was before AI-enabled software features. As a consequence, there’s wisdom in staying directional, rather than architecting for the next five years. Align technical decisions with long-term vision but built with optionality wherever possible. 


Why Engineering Culture Is Everything: Building Teams That Actually Work

The culture is something that is a fact and it's also something intrinsic with human beings. We're people, we have a background. We were raised in one part of the world versus another. We have the way that we talk and things that we care about. All those things influence your team indirectly and directly. It's really important, you as a leader, to be aware that as an engineer, I use a lot of metaphors from monitoring and observability. We always talk about known knowns, known unknowns, and unknown unknowns. Those are really important to understand on a systems level, period, because your social technical system is also a system. The people that you work with, the way you work, your organization, it's a system. And if you're not aware of what are the metrics you need to track, what are the things that are threats to it, the good old strengths, weaknesses, opportunities, and threats. ... What we can learn from other industries is their lessons. Again, we are now on yet another industrial revolution. This time it's more of a knowledge revolution. We can learn from civil engineering like, okay, when the brick was invented, that was a revolution. When the brick was invented, what did people do in order to make sure that bricks matter? That's a fascinating and very curious story about the Freemasons. People forget the Freemasons were a culture about making sure that these constructions techniques, even more than the technologies, the techniques, were up to standards. 

Daily Tech Digest - January 17, 2026


Quote for the day:

"Success does not consist in never making mistakes but in never making the same one a second time." -- George Bernard Shaw



Expectations from AI ramp up as investors eye returns in 2026

Billions in investments and a concerted focus on the tech over the past few years has led to artificial intelligence (AI) completely transforming how major global industries work. Now, investors are finally expecting to see some returns. ... Investors will no longer be satisfied with AI’s potential future capabilities – they want measurable returns on investment (ROI), says Jiahao Sun, the CEO of Flock.ie, a platform that allows users to build, train and deploy AI models in a decentralised manner. AI investment is entering its “show me the money era”, he says. This isn’t to say that investments into AI will pause, but that investors will begin prioritising critical areas that give guaranteed returns. These could include agentic AI platforms that enable multi-agent orchestration; AI-native infrastructures built for scale, security and interoperability; data modernisation tools that unlock the full potential of unstructured data; and AI observability and safety tools that monitor, govern and refine agent behaviour in real time, explains Neeraj Abhyankar, the VP of Data and AI at R Systems. ... “Single-purpose tools will be absorbed into unified AI platforms. The era of juggling 10 different AI products is ending and the race to offer a complete, integrated experience will intensify,” he adds. Meanwhile, some experts say that the EU’s AI Act will – for better or for worse – prohibit European firms from experimenting with high-risk use cases for AI.


The Next S-Curve of Cybersecurity: Governing Trust in a New Converging Intelligence Economy

Cybersecurity has crossed a threshold where it no longer merely protects technology ~ it governs trust itself. In an era defined by AI-driven decision-making, decentralized financial systems, cloud-to-edge computing, and the approaching reality of quantum disruption, cyber risk is no longer episodic or containable. It is continuous, compounding, and enterprise-defining. What changed in 2025 wasn’t just the threat landscape. It was the architecture of risk. Identity replaced networks as the dominant attack surface. Software supply chains emerged as systemic liabilities. Machine intelligence ~ on both sides of the attack began evolving faster than the controls designed to govern it. For boards, investors, and executives, this marked the end of cybersecurity as a control function and the beginning of cybersecurity as a strategic mandate. ... The next S-curve of cybersecurity is not driven by better tooling. It is driven by a shift in how trust is architected and governed across a converging ecosystem. This new curve is defined by: Identity-centric security rather than network-centric defense; Data-aware protection instead of application-bound controls; Continuous assurance rather than point-in-time audits; and Integration with enterprise risk, governance, and capital strategy Cybersecurity evolves from a defensive posture into a trust architecture discipline ~ one that governs how intelligence, identity, data, and decisions interact at scale.


Why Mental Fitness Is Leadership's Next Frontier

The distinction Craze draws between mental health and mental fitness is crucial. Mental health, he explains, is ultimately about functioning—being sufficiently free from psychological injury or mental illness to show up and perform one's job. "Your mental health or illness is a private matter between yourself, and perhaps your family or physician, and is a matter of respecting your individual rights," he says. Mental fitness, by contrast, is about capacity. "Assuming you are mentally healthy enough to show up and perform your job, then mental fitness is all about how well your mind performs under load, over time, and in conditions of uncertainty," Craze explains. "Being mentally healthy is a baseline. Being mentally fit is what allows leaders to think clearly at hour ten, stay composed in conflict, and recover quickly after setbacks rather than slowly eroding away," he says. Here, the comparison to elite athletics is instructive. In professional sports, no one confuses being injury-free with being competition-ready. Leadership has been slower to make that distinction, even as today’s executives face sustained cognitive and emotional demands that would have been unthinkable a generation ago. ... One of the most persistent myths in leadership development, according to Craze, is the idea that thinking happens in some abstract cognitive space, detached from the body. "In reality, every act of judgment, attention and self-control has an underlying physiological component and cost," he says. 


Taking the Technical Leadership Path

Without technical alignment, individuals constantly touch the same codebase, adding their feature in the simplest way (for them) but often they do this without ensuring the codebase is kept consistent. Over time accidental complexity grows such as having five different libraries that do the same job, or seven different implementations of how an email or push notification is sent and when someone wants to make a future change to that area, their work is now much harder. ... There are plenty of resources available to develop leadership skills. Kua advised to break broader leadership skills into specific ones, such as coaching, mentoring, communicating, mediating, influencing, etc. Even when someone is not a formal leader, there are daily opportunities to practice these skills in the workplace, he said. ... Formal technical leaders are accountable for ensuring teams have enough technical leadership. One way of doing this is to cultivate an environment where everyone is comfortable stepping up and demonstrating technical leadership. When you do this well, this means everyone can demonstrate informal technical leadership. Formal leaders exist because not all teams are automatically healthy or high-performing. I’m sure every technical person can remember a team they’ve been on with two engineers constantly debating about which approach to take, and wish someone had stepped in to help the team reach a decision. In an ideal world, a formal leader wouldn’t be necessary, but it’s rare that teams live in the perfect world.


From model collapse to citation collapse: risks of over-reliance on AI in the academy

Model collapse is the slow erosion of a generative AI system grounded in reality as it learns more and more from machine-generated data rather than from human-generated content. As a result of model collapse, the AI model loses diversity in its outputs, reinforces its misconceptions, increases its confidence in its hallucinations and amplifies its biases. ... Among all the writing tasks involved in research, GenAI appears to be disproportionately good at writing literature reviews. ChatGPT and Google Gemini both have deep research features that try to take a deep dive into the literature on a topic, returning heavily sourced and relatively accurate syntheses of the related research, while typically avoiding the well-documented tendency to hallucinate sources altogether. In some ways, it should not be too surprising that these technologies thrive in this area because literature reviews are exactly the sort of thing GenAI should be good at: textual summaries that stay pretty close to the source material. But here is my major concern: while nothing is fundamentally wrong with the way GenAI surfaces sources for literature reviews, it risks exacerbating the citation Matthew effect that tools like Google Scholar have caused. Modern AI models largely thrive on a snapshot of the internet circa 2022. In fact, I suspect that verifiably pre-2022 datasets will become prized sources for future models, largely untainted by AI-generated content, in much the same way that pre-World War II steel is prized for its lack of radioactive contamination from nuclear testing. 


Why is Debugging Hard? How to Develop an Effective Debugging Mindset

Here’s how most developers debug code: Something is broken; Let me change the line; Let’s refresh (wishing the error would go away); Hmm… still broken!; Now, let me add a console.log(); Let me refresh again (Ah, this time it may…); Ok, looks like this time it worked! This is reaction-based debugging. It’s like throwing a stone in the dark or finding a needle in a haystack. It feels busy, it sounds productive, but it’s mostly guessing. And guessing doesn’t scale in programming. This approach and the guessing mindset make debugging hard for developers. The lack of a methodology and solid approach makes many devs feel helpless and frustrated, which makes the process feel much more difficult than coding. This is why we need a different mental model, a defined skillset to master the art of debugging. ... Good debuggers don’t fight bugs. They investigate them. They don’t start with the mindset of “How do I fix this?”. They start with, “Why must this bug exist?” This one question changes everything. When you ask about the existence of a bug, you go back to the history to collect information about the code, its changes, and its flow. Then, you feed this information through a “mental model” to make decisions that lead you to the fix. ... Once the facts are clear and assumptions are visible, the debugging makes its way forward. Now you’ll need to form a hypothesis. A hypothesis is a simple cause-and-effect statement: If this assumption is wrong, then the behaviour makes sense. If not, provide a fix.


Promptware Kill Chain – Five-Step Kill Chain Model for Analyzing Cyberthreats

While the security industry has focused narrowly on prompt injection as a catch-all term, the reality is far more complex. Attacks now follow systematic, sequential patterns: initial access through malicious prompts, privilege escalation by bypassing safety constraints, establishing persistence in system memory, moving laterally across connected services, and finally executing their objectives. This mirrors how traditional malware campaigns unfold, suggesting that conventional cybersecurity knowledge can inform AI security strategies. ... The promptware kill chain begins with Initial Access, where attackers insert malicious instructions through prompt injection—either directly from users or indirectly through poisoned documents retrieved by the system. The second phase, Privilege Escalation, involves jailbreaking techniques that bypass safety training designed to refuse harmful requests. ... Traditional malware achieves persistence through registry modifications or scheduled tasks. Promptware exploits the data stores that LLM applications depend on. Retrieval-dependent persistence embeds payloads in data repositories like email systems or knowledge bases, reactivating when the system retrieves similar content. Even more potent is retrieval-independent persistence, which targets the agent’s memory directly, ensuring the malicious instructions execute on every interaction regardless of user input.


AI SOC Agents Are Only as Good as the Data They Are Fed

If your telemetry is fragmented, your schemas are inconsistent, or your context is missing, you won’t get faster responses from AI SOC agents. You’ll just get faster mistakes. These agents are being built to excel at cybersecurity analysis and decision support. They are not constructed to wrangle data collection, cleansing, normalization, and governance across dozens of sources. ... Modern SOCs integrate telemetry from EDRs, cloud providers, identity, networks, SaaS apps, data lakes, and more. Normalizing all that into a common schema eliminates the constant “translation tax.” An agent that can analyze standardized fields once, and doesn’t have to re-learn CrowdStrike vs. Splunk Search Processing Language vs. vendor-specific JavaScript Object Notation, will make faster, more reliable decisions. ... If the agent must “crawl back” into five source systems to enrich an alert on its own, latency spikes and success rates drop. The right move is to centralize, normalize, and clean security data into an accessible store, like a data lake, for your AI SOC agents and continue streaming a distilled, security-relevant subset to the Security Information and Event Management (SIEM) platform for detections and cybersecurity analysts. Let the SIEM be the place where detections originate; let the lake be the place your agents do their deep thinking. The problem is that the industry’s largest SIEM, Endpoint Detection and Response (EDR), and Security Orchestration, Automation, and Response (SOAR) platforms are consolidating into vertically integrated ecosystems. ...”


IT portfolio management: Optimizing IT assets for business value

The enterprise’s most critical systems for conducting day-to-day business are a category unto themselves. These systems may be readily apparent, or hidden deep in a technical stack. So all assets should be evaluated as to how mission-critical they are. ... The goal of an IT portfolio is to contain assets that are presently relevant and will continue to be relevant well into the future. Consequently, asset risk should be evaluated for each IT resource. Is the resource at risk for vendor sunsetting or obsolescence? Is the vendor itself unstable? Does IT have the on-staff resources to continue running a given system, no matter how good it is (a custom legacy system written in COBOL and Assembler, for example)? Is a particular system or piece of hardware becoming too expense to run? Do existing IT resources have a clear path to integration with the new technologies that will populate IT in the future? ... Is every IT asset pulling its weight? Like monetary and stock investments, technologies under management must show they are continuing to produce measurable and sustainable value. The primary indicators of asset value that IT uses are total cost of ownership (TCO) and return on investment (ROI). TCO is what gauges the value of an asset over time. For instance, investments in new servers for the data center might have paid off four years ago, but now the data center has an aging bay of servers with obsolete technology and it is cheaper to relocate compute to the cloud.


Ransomware activity never dies, it multiplies

One of the most significant findings in the study involves extortion campaigns that do not rely on encryption. These attacks focus on stealing data and threatening to publish it, skipping the deployment of ransomware entirely. Encryption based attacks remained just above 4,700 incidents annually. When data theft extortion is included, total extortion incidents reached 6,182 in 2025. That represents a 23% increase compared with 2024. Snakefly, which runs the Cl0p ransomware operation, played a major role in this shift. These actors exploited vulnerabilities in widely used enterprise software to extract data at scale. Victims included large organizations in government and industry, with some campaigns affecting hundreds of companies through a single flaw. ... A newer ransomware strain tracked as Warlock drew attention due to its tooling and infrastructure. First observed in mid 2025, Warlock attacks exploited a zero day vulnerability in Microsoft SharePoint and used DLL sideloading for payload delivery. Analysis linked Warlock to tooling previously associated with Chinese espionage activity, including signed drivers and custom command frameworks. Some ransomware payloads appeared to be modified versions of leaked LockBit code, combined with older malware components. The study notes overlaps between ransomware activity and long running espionage campaigns, where ransomware deployment may serve operational or financial goals within broader intrusion efforts.