Daily Tech Digest - January 17, 2026


Quote for the day:

"Success does not consist in never making mistakes but in never making the same one a second time." -- George Bernard Shaw



Expectations from AI ramp up as investors eye returns in 2026

Billions in investment and a concerted focus on the tech over the past few years have led to artificial intelligence (AI) completely transforming how major global industries work. Now, investors are finally expecting to see some returns. ... Investors will no longer be satisfied with AI’s potential future capabilities – they want measurable returns on investment (ROI), says Jiahao Sun, the CEO of Flock.ie, a platform that allows users to build, train and deploy AI models in a decentralised manner. AI investment is entering its “show me the money era”, he says. This isn’t to say that investment in AI will pause, but that investors will begin prioritising critical areas that give guaranteed returns. These could include agentic AI platforms that enable multi-agent orchestration; AI-native infrastructures built for scale, security and interoperability; data modernisation tools that unlock the full potential of unstructured data; and AI observability and safety tools that monitor, govern and refine agent behaviour in real time, explains Neeraj Abhyankar, the VP of Data and AI at R Systems. ... “Single-purpose tools will be absorbed into unified AI platforms. The era of juggling 10 different AI products is ending and the race to offer a complete, integrated experience will intensify,” he adds. Meanwhile, some experts say that the EU’s AI Act will – for better or for worse – prohibit European firms from experimenting with high-risk use cases for AI.


The Next S-Curve of Cybersecurity: Governing Trust in a New Converging Intelligence Economy

Cybersecurity has crossed a threshold where it no longer merely protects technology; it governs trust itself. In an era defined by AI-driven decision-making, decentralized financial systems, cloud-to-edge computing, and the approaching reality of quantum disruption, cyber risk is no longer episodic or containable. It is continuous, compounding, and enterprise-defining. What changed in 2025 wasn’t just the threat landscape. It was the architecture of risk. Identity replaced networks as the dominant attack surface. Software supply chains emerged as systemic liabilities. Machine intelligence, on both sides of the attack, began evolving faster than the controls designed to govern it. For boards, investors, and executives, this marked the end of cybersecurity as a control function and the beginning of cybersecurity as a strategic mandate. ... The next S-curve of cybersecurity is not driven by better tooling. It is driven by a shift in how trust is architected and governed across a converging ecosystem. This new curve is defined by: identity-centric security rather than network-centric defense; data-aware protection instead of application-bound controls; continuous assurance rather than point-in-time audits; and integration with enterprise risk, governance, and capital strategy. Cybersecurity evolves from a defensive posture into a trust architecture discipline, one that governs how intelligence, identity, data, and decisions interact at scale.


Why Mental Fitness Is Leadership's Next Frontier

The distinction Craze draws between mental health and mental fitness is crucial. Mental health, he explains, is ultimately about functioning—being sufficiently free from psychological injury or mental illness to show up and perform one's job. "Your mental health or illness is a private matter between yourself, and perhaps your family or physician, and is a matter of respecting your individual rights," he says. Mental fitness, by contrast, is about capacity. "Assuming you are mentally healthy enough to show up and perform your job, then mental fitness is all about how well your mind performs under load, over time, and in conditions of uncertainty," Craze explains. "Being mentally healthy is a baseline. Being mentally fit is what allows leaders to think clearly at hour ten, stay composed in conflict, and recover quickly after setbacks rather than slowly eroding away," he says. Here, the comparison to elite athletics is instructive. In professional sports, no one confuses being injury-free with being competition-ready. Leadership has been slower to make that distinction, even as today’s executives face sustained cognitive and emotional demands that would have been unthinkable a generation ago. ... One of the most persistent myths in leadership development, according to Craze, is the idea that thinking happens in some abstract cognitive space, detached from the body. "In reality, every act of judgment, attention and self-control has an underlying physiological component and cost," he says. 


Taking the Technical Leadership Path

Without technical alignment, individuals constantly touch the same codebase, adding their feature in the simplest way (for them), but often without ensuring the codebase is kept consistent. Over time, accidental complexity grows, such as having five different libraries that do the same job, or seven different implementations of how an email or push notification is sent, and when someone wants to make a future change to that area, their work is now much harder. ... There are plenty of resources available to develop leadership skills. Kua advised breaking broader leadership skills into specific ones, such as coaching, mentoring, communicating, mediating, influencing, etc. Even when someone is not a formal leader, there are daily opportunities to practice these skills in the workplace, he said. ... Formal technical leaders are accountable for ensuring teams have enough technical leadership. One way of doing this is to cultivate an environment where everyone is comfortable stepping up and demonstrating technical leadership. When you do this well, everyone can demonstrate informal technical leadership. Formal leaders exist because not all teams are automatically healthy or high-performing. I’m sure every technical person can remember a team they’ve been on with two engineers constantly debating which approach to take, wishing someone had stepped in to help the team reach a decision. In an ideal world, a formal leader wouldn’t be necessary, but it’s rare that teams live in the perfect world.


From model collapse to citation collapse: risks of over-reliance on AI in the academy

Model collapse is the slow erosion of a generative AI system’s grounding in reality as it learns more and more from machine-generated data rather than from human-generated content. As a result of model collapse, the AI model loses diversity in its outputs, reinforces its misconceptions, increases its confidence in its hallucinations and amplifies its biases. ... Among all the writing tasks involved in research, GenAI appears to be disproportionately good at writing literature reviews. ChatGPT and Google Gemini both have deep research features that try to take a deep dive into the literature on a topic, returning heavily sourced and relatively accurate syntheses of the related research, while typically avoiding the well-documented tendency to hallucinate sources altogether. In some ways, it should not be too surprising that these technologies thrive in this area because literature reviews are exactly the sort of thing GenAI should be good at: textual summaries that stay pretty close to the source material. But here is my major concern: while nothing is fundamentally wrong with the way GenAI surfaces sources for literature reviews, it risks exacerbating the citation Matthew effect that tools like Google Scholar have caused. Modern AI models largely thrive on a snapshot of the internet circa 2022. In fact, I suspect that verifiably pre-2022 datasets will become prized sources for future models, largely untainted by AI-generated content, in much the same way that pre-World War II steel is prized for its lack of radioactive contamination from nuclear testing.
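The diversity-loss dynamic described above can be sketched with a toy resampling loop: each "generation" is trained only on a finite sample of the previous generation's output, so any idea that misses one sample is gone for good. All numbers here are illustrative and not tied to any real model.

```python
import random
from collections import Counter

random.seed(0)
vocab = list(range(20))        # 20 distinct "ideas" in generation 0
weights = [1.0] * len(vocab)   # start perfectly diverse
support_sizes = []             # how many ideas survive each generation

for generation in range(20):
    # Draw a small machine-generated corpus from the current model.
    sample = random.choices(vocab, weights=weights, k=50)
    counts = Counter(sample)
    # "Retrain" on that corpus: unseen ideas get zero probability forever.
    weights = [counts.get(tok, 0) for tok in vocab]
    support_sizes.append(sum(1 for w in weights if w > 0))

print(support_sizes[0], "->", support_sizes[-1])
```

Because a token with zero weight can never be sampled again, the support can only shrink, which is the mechanism behind the loss of output diversity the article describes.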


Why is Debugging Hard? How to Develop an Effective Debugging Mindset

Here’s how most developers debug code: Something is broken; Let me change the line; Let’s refresh (wishing the error would go away); Hmm… still broken!; Now, let me add a console.log(); Let me refresh again (Ah, this time it may…); Ok, looks like this time it worked! This is reaction-based debugging. It’s like throwing a stone in the dark or finding a needle in a haystack. It feels busy, it sounds productive, but it’s mostly guessing. And guessing doesn’t scale in programming. This approach and the guessing mindset make debugging hard for developers. The lack of a methodology and solid approach makes many devs feel helpless and frustrated, which makes the process feel much more difficult than coding. This is why we need a different mental model, a defined skillset to master the art of debugging. ... Good debuggers don’t fight bugs. They investigate them. They don’t start with the mindset of “How do I fix this?”. They start with, “Why must this bug exist?” This one question changes everything. When you ask about the existence of a bug, you go back to the history to collect information about the code, its changes, and its flow. Then, you feed this information through a “mental model” to make decisions that lead you to the fix. ... Once the facts are clear and assumptions are visible, the debugging moves forward. Now you’ll need to form a hypothesis. A hypothesis is a simple cause-and-effect statement: if this assumption is wrong, then the behaviour makes sense. If the hypothesis holds, you have found your fix; if not, form the next one.
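The investigative loop described above, writing assumptions down and checking them instead of guessing, can be sketched roughly like this. The buggy parser and the assumption list are invented for illustration:

```python
# Toy bug: a price parser that mishandles thousands separators. Rather than
# change-and-refresh guessing, verify each assumption explicitly; the first
# one that fails is the hypothesis worth investigating.

def parse_price(raw):
    # Deliberately buggy: does not strip the "," thousands separator.
    return float(raw.replace("$", ""))

rows = ["$1,200", "$300"]

assumptions = [
    ("every row parses without an exception",
     lambda: all(isinstance(parse_price(r), float) for r in rows)),
    ("parsed values match the raw figures",
     lambda: parse_price("$300") == 300.0),
]

failed = None
for name, check in assumptions:
    try:
        ok = check()
    except Exception:
        ok = False
    if not ok:
        failed = name   # first broken assumption = current hypothesis
        break

print("hypothesis:", failed)
```

Here the very first assumption fails (float("1,200") raises), pointing the investigation straight at input parsing instead of at random lines of code.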


Promptware Kill Chain – Five-Step Kill Chain Model for Analyzing Cyberthreats

While the security industry has focused narrowly on prompt injection as a catch-all term, the reality is far more complex. Attacks now follow systematic, sequential patterns: initial access through malicious prompts, privilege escalation by bypassing safety constraints, establishing persistence in system memory, moving laterally across connected services, and finally executing their objectives. This mirrors how traditional malware campaigns unfold, suggesting that conventional cybersecurity knowledge can inform AI security strategies. ... The promptware kill chain begins with Initial Access, where attackers insert malicious instructions through prompt injection—either directly from users or indirectly through poisoned documents retrieved by the system. The second phase, Privilege Escalation, involves jailbreaking techniques that bypass safety training designed to refuse harmful requests. ... Traditional malware achieves persistence through registry modifications or scheduled tasks. Promptware exploits the data stores that LLM applications depend on. Retrieval-dependent persistence embeds payloads in data repositories like email systems or knowledge bases, reactivating when the system retrieves similar content. Even more potent is retrieval-independent persistence, which targets the agent’s memory directly, ensuring the malicious instructions execute on every interaction regardless of user input.
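The five phases can be captured as a small data model for tagging observed incidents. The phase names follow the article; the keyword-based triage rule below is purely illustrative, not part of the published kill-chain model:

```python
from enum import Enum

class PromptwarePhase(Enum):
    INITIAL_ACCESS = 1        # malicious prompt, direct or via poisoned documents
    PRIVILEGE_ESCALATION = 2  # jailbreak past safety training
    PERSISTENCE = 3           # payloads lodged in data stores or agent memory
    LATERAL_MOVEMENT = 4      # spreading across connected services
    EXECUTION = 5             # carrying out the attacker's objective

# Hypothetical triage rule: map a free-text analyst note to a phase.
KEYWORDS = {
    "poisoned document": PromptwarePhase.INITIAL_ACCESS,
    "jailbreak": PromptwarePhase.PRIVILEGE_ESCALATION,
    "memory write": PromptwarePhase.PERSISTENCE,
    "connected service": PromptwarePhase.LATERAL_MOVEMENT,
}

def triage(note: str) -> PromptwarePhase:
    for needle, phase in KEYWORDS.items():
        if needle in note.lower():
            return phase
    return PromptwarePhase.EXECUTION  # default: assume objective execution

print(triage("Agent memory write observed after retrieval"))
```

Tagging incidents by phase is what lets defenders reuse conventional kill-chain thinking, e.g. asking at which phase a given control would have broken the sequence.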


AI SOC Agents Are Only as Good as the Data They Are Fed

If your telemetry is fragmented, your schemas are inconsistent, or your context is missing, you won’t get faster responses from AI SOC agents. You’ll just get faster mistakes. These agents are being built to excel at cybersecurity analysis and decision support. They are not constructed to wrangle data collection, cleansing, normalization, and governance across dozens of sources. ... Modern SOCs integrate telemetry from EDRs, cloud providers, identity, networks, SaaS apps, data lakes, and more. Normalizing all that into a common schema eliminates the constant “translation tax.” An agent that can analyze standardized fields once, and doesn’t have to re-learn CrowdStrike vs. Splunk Search Processing Language vs. vendor-specific JavaScript Object Notation, will make faster, more reliable decisions. ... If the agent must “crawl back” into five source systems to enrich an alert on its own, latency spikes and success rates drop. The right move is to centralize, normalize, and clean security data into an accessible store, like a data lake, for your AI SOC agents and continue streaming a distilled, security-relevant subset to the Security Information and Event Management (SIEM) platform for detections and cybersecurity analysts. Let the SIEM be the place where detections originate; let the lake be the place your agents do their deep thinking. The problem is that the industry’s largest SIEM, Endpoint Detection and Response (EDR), and Security Orchestration, Automation, and Response (SOAR) platforms are consolidating into vertically integrated ecosystems. ...
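The "translation tax" point can be illustrated with a toy normalizer that maps two vendor-shaped events into one common schema. The field names are invented for the sketch, not the real CrowdStrike or Splunk formats:

```python
# Two vendor events describing the same login, in made-up shapes.
edr_event = {"DeviceName": "host-7", "UserName": "alice", "EventType": "login"}
siem_event = {"host": "host-7", "user": "alice", "action": "login"}

# One field map per source; agents then reason over a single schema.
FIELD_MAPS = {
    "edr":  {"DeviceName": "host", "UserName": "user", "EventType": "action"},
    "siem": {"host": "host", "user": "user", "action": "action"},
}

def normalize(event: dict, source: str) -> dict:
    """Translate a vendor-shaped event into the common schema."""
    mapping = FIELD_MAPS[source]
    return {common: event[vendor] for vendor, common in mapping.items()}

# Both sources collapse to the same record, paid for once at ingest time.
assert normalize(edr_event, "edr") == normalize(siem_event, "siem")
print(normalize(edr_event, "edr"))
```

The design point is that the translation happens once in the pipeline rather than on every query, which is why centralizing it ahead of the agents reduces both latency and error rates.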


IT portfolio management: Optimizing IT assets for business value

The enterprise’s most critical systems for conducting day-to-day business are a category unto themselves. These systems may be readily apparent, or hidden deep in a technical stack. So all assets should be evaluated as to how mission-critical they are. ... The goal of an IT portfolio is to contain assets that are presently relevant and will continue to be relevant well into the future. Consequently, asset risk should be evaluated for each IT resource. Is the resource at risk for vendor sunsetting or obsolescence? Is the vendor itself unstable? Does IT have the on-staff resources to continue running a given system, no matter how good it is (a custom legacy system written in COBOL and Assembler, for example)? Is a particular system or piece of hardware becoming too expensive to run? Do existing IT resources have a clear path to integration with the new technologies that will populate IT in the future? ... Is every IT asset pulling its weight? Like monetary and stock investments, technologies under management must show they are continuing to produce measurable and sustainable value. The primary indicators of asset value that IT uses are total cost of ownership (TCO) and return on investment (ROI). TCO is what gauges the value of an asset over time. For instance, investments in new servers for the data center might have paid off four years ago, but now the data center has an aging bay of servers with obsolete technology and it is cheaper to relocate compute to the cloud.
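The TCO and ROI yardsticks mentioned above reduce to simple arithmetic; the figures below are invented purely to illustrate the server-bay-vs-cloud comparison:

```python
def tco(purchase, annual_ops, years):
    """Total cost of ownership: upfront spend plus operating cost over the asset's life."""
    return purchase + annual_ops * years

def roi(gain, cost):
    """Return on investment expressed as a fraction of cost."""
    return (gain - cost) / cost

# Hypothetical five-year comparison for an aging on-prem server bay vs. cloud.
onprem = tco(purchase=400_000, annual_ops=120_000, years=5)   # 1,000,000
cloud  = tco(purchase=0,       annual_ops=170_000, years=5)   #   850,000

print("on-prem TCO:", onprem, "cloud TCO:", cloud)
print("ROI on on-prem spend given 1.2M of value delivered:",
      round(roi(1_200_000, onprem), 2))
```

Even a sketch like this shows why an asset that "paid off four years ago" can flip: the purchase cost is sunk, but the annual operating term keeps compounding against it.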


Ransomware activity never dies, it multiplies

One of the most significant findings in the study involves extortion campaigns that do not rely on encryption. These attacks focus on stealing data and threatening to publish it, skipping the deployment of ransomware entirely. Encryption-based attacks remained just above 4,700 incidents annually. When data theft extortion is included, total extortion incidents reached 6,182 in 2025. That represents a 23% increase compared with 2024. Snakefly, which runs the Cl0p ransomware operation, played a major role in this shift. These actors exploited vulnerabilities in widely used enterprise software to extract data at scale. Victims included large organizations in government and industry, with some campaigns affecting hundreds of companies through a single flaw. ... A newer ransomware strain tracked as Warlock drew attention due to its tooling and infrastructure. First observed in mid-2025, Warlock attacks exploited a zero-day vulnerability in Microsoft SharePoint and used DLL sideloading for payload delivery. Analysis linked Warlock to tooling previously associated with Chinese espionage activity, including signed drivers and custom command frameworks. Some ransomware payloads appeared to be modified versions of leaked LockBit code, combined with older malware components. The study notes overlaps between ransomware activity and long running espionage campaigns, where ransomware deployment may serve operational or financial goals within broader intrusion efforts.

Daily Tech Digest - January 16, 2026


Quote for the day:

"Common sense is something that everyone needs, few have, and none think they lack" -- Benjamin Franklin



If you think agentic AI is a challenge, you’re not ready for what’s coming

The convergence of technology is happening all at once. You’ve got new processes being put in place while simultaneously replacing legacy infrastructure. You’ve got new technology, new talent being rolled into this convergence. Meanwhile, physical AI and quantum are coming quickly on top of agentic. Adaptability is the new job security. The ability to adapt is the most important skill for employees and the most important organizational differentiator. Organizations that can adapt quickly to new technology, redefining processes and training — that’s how they’ll differentiate. The ones that can’t will fall behind. ... It’s becoming not a technology issue as much as a business and process issue. The technology — whether AI, agentic AI, physical AI, or quantum — mostly exists to solve today’s problems. The issue is training, people, and adoption. ... Some industries, like financial services and healthcare [and] precision medicine — financial services has over-invested for decades in data and data quality for compliance reasons. They can use it for AI and quantum. Precision medicine is another category with high data quality. But without the right data, infrastructure, and sandbox, you’ll spread yourself too thin. You may try things, but it doesn’t get you value. Without a defined use case and focus area, you create innovation theater. Companies are getting focused on that first step: What use case am I trying to solve? 


AI Is Compressing the Coding Layer: Here's What Developers Do Next

One of the most encouraging developments in 2025 has been AI's ability to accelerate developer progression and skill growth. In our Q4 survey, 74% of developers said AI strengthened their technical skills. As lower-level execution becomes increasingly automated, developers who can work across systems, evaluate tradeoffs, and guide AI-driven workflows are progressing faster than in previous cycles. ... More than half (55%) also expect AI proficiency to accelerate progression and compensation. This reflects a rising demand for talent that can pair technical depth with architectural and systems thinking. ... Engineering teams are beginning to resemble higher-skill strategic units with stronger cross-functional alignment and architectural leadership. 58% of developers expect teams to become smaller and leaner next year as entry-level coding tasks are increasingly automated. Similarly, more than half (58%) of project managers report that 10-30% of project tasks could be handled by AI-driven workflows in 2026, including documentation generation, automated testing, code completion/refactoring, and requirements/user story drafting. These aren't the most visible tasks, but they've historically consumed a disproportionate share of time. ... To thrive in 2026 and beyond, developers should build competency in orchestrating AI workflows, invest in architectural and systems design literacy, and strengthen their fluency in data engineering, security, and cloud foundations.


Insider risk in an age of workforce volatility

Economic pressures, AI-driven job displacement, and relentless organizational churn are driving insider risk to its highest level in years. Workforce instability erodes loyalty and heightens grievances. The accelerating deployment of powerful new tools, such as AI agents, amplifies the threats from within, both human and machine. ... This surge, up significantly from prior years, creates fertile ground for disgruntlement: financial stress, resentment over automation, and opportunistic behavior, from negligence and careless data handling to deliberate malevolent actions like data exfiltration and credential monetization. ... They are becoming exploitable vectors for silent data exfiltration, disruption, or unintended catastrophe. This is particularly concerning when volatility reduces human oversight and rushes deployment without commensurate controls. Palo Alto Networks’ 2026 cybersecurity predictions emphasize that these agents introduce vulnerabilities such as goal hijacking, tool misuse, prompt injection, and shadow deployment, often amplified by the very churn that drives their adoption across multinational organizations. Security leaders are taking note. ... There is no doubt that such anxiety from ongoing layoffs and role uncertainty can lead to nervous mistakes, privilege hoarding, or rushed workarounds that expose data without intent to harm. Yet harm is actualized. The result is a heightened insider risk landscape that is amplified when the interplay between human churn and machine proliferation is overlooked.


Creating Trust Through Data Is a Long Game — Advantage Solutions CDO

“Trust starts with the rapport with individuals. It starts with listening. It doesn’t start with building solutions.” She highlights that facts alone don’t solve decision-making challenges. Business intuition still matters — but it must be balanced with truth derived from data. “Sometimes the facts alone aren’t enough. There’s a balance between data and the business-led gut experience. All of it is important.” Trust requires time, consistency, and transparency. ... O’Hazo frames AI not as a disruption, but as a spotlight. “AI is almost spotlighting the need for foundational data.” The reason: modern organizations need to answer multidimensional questions, not isolated ones. “It’s no longer a singular flat question. It’s ‘How is X related to Y, and what are the factors that drive growth?’ To answer that, you need data from so many different functions organized and architected the right way.” This interconnection does more than support analytics; it transforms relationships across the business. “When you start to interconnect the data, you naturally and organically have meaningful conversations across functions.” ... Turajski raises the common phrase “source of truth,” asking whether AI has changed how organizations think about it. O’Hazo’s response is clear: AI doesn’t rewrite the rules; it reveals the gaps. “AI is spotlighting, sometimes unfavorably, where the pre-work on the data foundation hasn’t accelerated enough.” This wake-up call has elevated data readiness to board-level priority.


The workforce shift — why CIOs and people leaders must partner harder than ever

For the last decade or so, digital transformation has been framed as a technology challenge. New platforms. Cloud migrations. Data lakes. APIs. Automation. Security layered on top. It was complex, often messy and rarely finished — but the underlying assumption stayed the same: Humans remained at the center of work, with technology enabling them. ... AI is just technology. But it feels human because it has been designed to interact with us in human ways. Large language models combined with domain data create the illusion that AI can do anything. Maybe one day it will. Right now, what it can do is expose how unprepared most organizations are for the scale and pace of change it brings. We are all chasing competitive advantages — revenue growth, margin improvement, improving resilience — and AI is being positioned as the shortcut. But unlike previous waves of automation, this one does not sit neatly inside a single function. ... Perception becomes reality very quickly inside organizations. If people believe AI is a colleague, what does that mean for accountability, trust and decision-making? Who owns outcomes when work is split between humans and machines? These are not abstract questions — they show up in performance, morale and risk. ... For years, organizations have layered technology on top of broken processes. Sometimes that was a conscious trade-off to move faster. Sometimes it was avoidance. Either way, humans could usually compensate.


CIO Playbook for Post-Quantum Security

While the scope of migration to post-quantum cryptography can be daunting, CIOs can follow several practical steps to make the project more manageable, said Sandy Carielli, vice president and principal analyst at Forrester. "There's a process here that's going to need to be addressed in order to get to where the organization needs to be," she said. "Discover, prioritize, remediate and add cryptographic agility." One of the biggest misconceptions she sees from CIOs is on what being ready for quantum-resistant security means. "Sometimes people have the misconception that you need a quantum computer for quantum security," Carielli said. "You don't need quantum computers. And, in fact, you're not going to. You're doing this to be protected." ... Designing for crypto agility is the final step in the process, and organizations should strive to create systems so that algorithm changes necessitate configuration changes, not re-architecting. "Good for crypto agility means that the next time an algorithm is broken, we are able to adapt to that by changing a configuration. We're able to adapt in a matter of weeks, rather than a matter of years," Carielli said. The regulatory impact should make quantum migration an easier sell than it would have been even a few years ago, as deadlines loom in the United States, Australia, the EU and countries across Asia. "Regardless of when a quantum computer is going to be able to break today's cryptography, we are being asked to migrate by the organizations and the countries that we want to do business with," Carielli said.
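The "configuration change, not re-architecture" goal is essentially a registry pattern: call sites invoke an abstract signer, and the concrete algorithm is looked up from config. The sketch below uses HMAC variants as stand-ins, since real post-quantum schemes such as ML-DSA need dedicated libraries; the registry keys and config shape are assumptions for illustration:

```python
import hashlib
import hmac

# Stand-in "algorithms" keyed by name. Real deployments would register
# RSA/ECDSA today and ML-DSA (or similar) tomorrow behind the same interface.
SIGNERS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).hexdigest(),
    "hmac-sha3":   lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).hexdigest(),
}

config = {"signature_algorithm": "hmac-sha256"}   # e.g. loaded from a file

def sign(key: bytes, msg: bytes) -> str:
    """Call sites never name an algorithm; config decides."""
    return SIGNERS[config["signature_algorithm"]](key, msg)

before = sign(b"k", b"hello")
config["signature_algorithm"] = "hmac-sha3"       # the "migration" is one line of config
after = sign(b"k", b"hello")
print("algorithm actually changed:", before != after)
```

The crypto-agility test for a real system is exactly this: can the algorithm swap happen in configuration, with no edits at the call sites?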


When your platform team can’t say yes: How away-teaming unlocks stuck roadmaps

Away teaming inverts the traditional model. Instead of platform engineers embedding with product teams to provide expertise, product engineers temporarily join platform teams to build required capabilities under platform guidance. ... Product teams have already secured funding for their initiatives. Away teaming redirects that investment from building a product-specific solution into creating a reusable platform capability. For platform teams, this expands effective capacity without headcount growth. Platform engineers provide design review, answer questions and conduct code review. ... Product engineers need to view away teaming as a growth opportunity, not a sacrifice. Frame it explicitly as platform engineering experience that builds broader systems thinking skills and deepens architectural understanding. ... Away teaming works best for capabilities in the middle ground: too product-specific for immediate platform prioritization, yet general enough that future products will benefit from reuse. Away teaming also has scale limits. A platform team might effectively support two concurrent away team engagements. Beyond that, guidance capacity becomes strained. ... Product engineers who complete away team assignments become platform advocates. They understand the architectural tradeoffs and can credibly explain platform limitations, reducing tension and frustration between teams.


Forget Predictions: True 2026 Cybersecurity Priorities From Leaders

Most organizations, large and small, are inundated with manual tasks, which makes many of our processes very expensive. This is compounded by economic forces that many organizations face today, which limits their ability to hire additional staff. For years, the industry has been working to solve these problems with SOAR, RPA bots, or other programmatic solutions to do this bulk work. I think the use of AI extends the work we have already done in that space, but in a broader application. ... The promise of SOAR is centralized orchestration. The reality is months of costly, brittle integration work that breaks with every vendor update. We spend more time maintaining the automation pipeline than the pipeline saves us. We don’t have enough people who can build, train, and maintain sophisticated AI/ML models while understanding threat hunting. The technology requires a new, hyper-specialized skill set, defeating the goal of efficiency. The single most impactful shift for efficiency in 2026 will be the Process and People shift toward Radical Simplification and Security Accountability Diffusion. ... “The shift I’m pushing for is toward collaborative intelligence that actually tells us which threats matter for our specific environment. Context is king here, and I’m encouraged by the emergence of solutions that analyze signals across multiple organizations to provide internet-wide defense. But this only works if we’re all willing to put in what we want to get out of it, meaning reliably sharing intelligence with peers and industry groups, not just consuming it.”


DCI launches digital identity interoperability standards for social protection

Authorities are increasingly leveraging digital identification systems to achieve this goal and ensure their social protection (SP) programs are inclusive. ... These open standards provide a trusted mechanism for social protection systems to authenticate individuals and request verified identity data, such as demographic attributes or authentication tokens, in a privacy-preserving way. The standards are not about building ID systems themselves or about integrating with health or education platforms, DCI emphasized. Rather, they’re focused squarely on enabling interoperability between ID and social protection systems. This includes supporting social registries, integrated beneficiary registries and other SP platforms “to connect meaningfully and securely with ID systems.” DCI said the release culminates months of research, peer review and collaboration by a standards committee comprising experts from 20 organizations. By establishing a common technical language, the initiative aims to strengthen digital public infrastructure and foster greater trust in the delivery of social protection programs. ... “Digital transformation of social protection is not an end in itself and it’s not only about cutting costs,” said ILO director Shahra Razavi. “It is about making sure everyone has access to benefits and services, particularly those most at risk of vulnerability and exclusion.”


Data Governance in the AI Era: Are We Solving the Wrong Problem?

The foundation of any effective AI governance model starts with visibility and control. Create a living list of sanctioned AI tools tied to enterprise accounts, as distinct from personal accounts and shadow IT. Once you have that visibility, require all AI usage to go through company-issued credentials, ensuring every login is accountable and logged. Users authenticate through your identity provider, and audit trails capture usage patterns. When you can trace who accessed which tool and when, you can create records that support both compliance requirements and incident investigation. ... One of the biggest mistakes organizations make is treating all data the same way, imposing blanket bans that create friction without proportional security benefit. A more effective approach classifies data by sensitivity level and creates rules aligned with that classification. ... If your policy today looks like a wall of “no,” you’re probably protecting yourself from the wrong consequence. The real risk isn’t that AI will suddenly go rogue; it’s that your people will use it without guidance, visibility, or control. Unmanaged adoption creates the very data leakage you’re trying to prevent. Managed adoption, through clear policy and good governance, creates visibility, accountability, and the ability to detect and respond to actual incidents. Data professionals occupy a critical position in this conversation: they own the data architecture, the classification systems, and the audit trails that make AI governance possible.
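The classification-over-blanket-ban approach might look like the following sketch. The tier names and per-tier rules are invented examples, not a recommended policy:

```python
# Sensitivity tiers and per-tier rules for AI tool usage (illustrative only).
RULES = {
    "public":       {"allowed": True,  "tools": "any sanctioned tool"},
    "internal":     {"allowed": True,  "tools": "enterprise accounts only"},
    "confidential": {"allowed": True,  "tools": "approved tools with logging"},
    "restricted":   {"allowed": False, "tools": "no AI processing"},
}

def check_ai_use(classification: str) -> dict:
    """Return the governing rule for a data classification, not a blanket yes/no."""
    rule = RULES.get(classification)
    if rule is None:
        # Unknown or unlabeled data defaults to the strictest tier.
        return RULES["restricted"]
    return rule

print(check_ai_use("internal"))
print(check_ai_use("unlabeled"))   # falls back to the restricted tier
```

Note the fail-closed default: anything without a classification inherits the strictest rule, so gaps in labeling create friction rather than leakage.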

Daily Tech Digest - January 15, 2026


Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang


AI agents can talk — orchestration is what makes them work together

“Agent-to-agent communications is emerging as a really big deal,” G2’s chief innovation officer Tim Sanders told VentureBeat. “Because if you don't orchestrate it, you get misunderstandings, like people speaking foreign languages to each other. Those misunderstandings reduce the quality of actions and raise the specter of hallucinations, which could be security incidents or data leakage.” ... In another critical evolution in the agentic era, human evaluators will become designers, moving from human-in-the-loop to human-on-the-loop, according to Sanders. That is: They will begin designing agents to automate workflows. Agent builder platforms continue to innovate their no-code solutions, Sanders said, meaning nearly anyone can now stand up an agent using natural language. “This will democratize agentic AI, and the super skill will be the ability to express a goal, provide context and envision pitfalls, very similar to a good people manager today.” ... Organizations should begin “expeditious programs” to infuse agents across workflows, especially with highly repetitive work that poses bottlenecks. Likely at first, there will be a strong human-in-the-loop element to ensure quality and promote change management. “Serving as an evaluator will strengthen the understanding of how these systems work,” Sanders said, “and eventually enable all of us to operate upstream in agentic workflows instead of downstream.”


Integrating AI-Enhanced Microservices in SAFe 5.0 Framework

AI-driven microservices can be a game-changer for Lean Portfolio Management within SAFe. By optimizing decision analytics and enhancing value stream performance, AI simplifies, rather than complicates. I know what you’re thinking: AI tools can add complexity. One client put this to the test, and we found AI helped reduce the noise. It sliced through the data smog to identify hidden value streams and automate mundane tasks like financial forecasting and risk management. ... Integrating decentralized AI models into SAFe’s ARTs can significantly enhance their autonomy. During a high-stakes project, we shifted from a centralized to a decentralized model, which allowed ARTs to self-optimize and adapt to shifting priorities seamlessly. It was like giving ARTs a brain of their own. Decentralized AI models reduce the bottlenecks you'd typically encounter in centralized systems. Think of the ARTs as small startups within the larger enterprise ecosystem, each capable of making swift, informed decisions. ... This isn’t just a tech enthusiast's dream—it's an emerging reality. The maturity of AI technologies spells a future where enterprises aren’t just keeping up; they’re setting the pace. So, if there’s a single, actionable insight to glean from my journey, it’s this: enterprises need to actively pursue cross-industry collaborations, invest in AI-powered microservices, and hone their Agile professionals’ skill sets.


Incorporating Geopolitical Risk Into Your IT Strategy

IT organizations know how to plan for unexpected outages, but even the most rigorously designed strategy is vulnerable to the shifting winds of geopolitics. CIOs and technology leaders need to know how their organizations will respond to geopolitical disruptions, and scenario planning needs to be a priority. ... "The IT department can treat geopolitical disruption as an expected operational variable rather than an unforeseen catastrophe. Good and tested enterprise risk management frameworks, investment in government affairs partnerships and ongoing board engagement should start to manage and prepare for this," Dixon said. CIOs need to do scenario modeling around the risks facing their enterprise, and evaluate how IT is teaming with business units, security teams and the CISO on a cohesive tech strategy that builds security, including artificial intelligence security, in from the ground up, said Sean Joyce ... "You're as strong as your weakest link," Joyce said. "As geopolitical risk becomes more prominent, you're going to see tools like cyber being leveraged by countries, particularly those that don't have stronger military or other capabilities. For some, it may be the only tool they can leverage." Physical infrastructure, geography and power supplies are also now areas of risk CIOs need to consider, and infrastructure strategy must align with sustainability, energy realities and geopolitical stability. 


Six Architecture Challenges for Startups

The risk is not that the first version is imperfect; that is inevitable. The risk is that the team keeps layering new functionality on top of an accidental architecture. At some point, the cost of change becomes so high that every small modification feels dangerous. The architectural challenge is to intentionally decide where to accept debt and where to invest in structure. Startups need a minimal set of principles – for example, clear domain boundaries, basic API hygiene, and a simple deployment model – that allow speed without locking the product into a dead end. ... If the product team is still validating pricing models, redefining the customer journey, or experimenting with different verticals, any rigid decomposition can turn into friction. Yet avoiding boundaries altogether leads to a “big ball of mud” that is equally hard to evolve. A practical approach is to use provisional boundaries based on current value streams – onboarding, transaction processing, analytics, etc. – and treat them as hypotheses. The challenge is not to find the perfect structure from day one, but to keep those boundaries explicit and adjustable as the business model evolves. ... Startups must make conscious decisions about where they are comfortable being tightly coupled to a provider and where they need portability. That requires viewing cloud services through a business lens: What is strategic IP, what is replaceable, and what is pure commodity? Aligning these categories with architectural choices is a non-trivial design challenge, not just a procurement decision. 


Platform-as-a-Product: Declarative Infrastructure for Developer Velocity

Without centralized guardrails, teams often compensate by over-allocating resources "to be safe", leading to inconsistent environments and unnecessary cloud spend that is only discovered after deployment. ... What is missing is a developer-friendly abstraction that brings these related concerns together. Developers need a way to express intent (not only what infrastructure is required, but also how the application should be built, deployed, configured across environments, secured, and sized) without having to implement the mechanics of each underlying system. From a platform engineering perspective, this abstraction represents the core of an internal developer platform and can be implemented as a lightweight Python-based platform framework. ... The platform comprises several interconnected components. GitLab pipelines coordinate everything, pulling code from repositories, building and unit testing applications (with tests written by developers), checking security, creating cloud infrastructure with Terraform/IaC, and deploying to Kubernetes clusters with Puppet configuration management. The configuration YAML file controls all of this, telling each component what to do. The architecture clearly separates concerns: the CI pipeline handles code building, testing, and vulnerability scanning, while the CD pipeline handles deployment: creating cloud resources, updating Kubernetes, and configuring environments.
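The "express intent, derive mechanics" idea can be sketched as follows. The schema here (keys like `build`, `deploy`, `resources`) is hypothetical and not the article's actual configuration format; it only illustrates how one declarative document can drive an ordered pipeline plan.

```python
# Sketch: a developer declares intent once; the platform framework derives
# the pipeline steps. The config schema is an invented example.

intent = {
    "app": "payments-api",
    "build": {"language": "python", "test_command": "pytest"},
    "deploy": {"environments": ["dev", "prod"], "replicas": {"dev": 1, "prod": 3}},
    "resources": {"cpu": "500m", "memory": "512Mi"},
}

def plan_pipeline(cfg: dict) -> list:
    """Translate declared intent into an ordered list of pipeline steps."""
    steps = [
        f"build:{cfg['build']['language']}",
        f"test:{cfg['build']['test_command']}",
        "scan:vulnerabilities",          # CI concerns first...
        "provision:terraform",           # ...then CD concerns
    ]
    for env in cfg["deploy"]["environments"]:
        steps.append(f"deploy:{env}:replicas={cfg['deploy']['replicas'][env]}")
    return steps

print(plan_pipeline(intent))
```

In a real platform the same intent document would feed Terraform variables and Kubernetes manifests rather than a list of strings, but the separation is the same: developers own the intent, the platform owns the mechanics.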


(Re)introducing Adaptive Business Continuity

Adaptive BC is designed to provide a framework that delivers better outcomes when organizations deal with losses. The result may be a reduction in documentation (something I greatly favor) but that is not a stated goal. ... My experience over the years has led me to conclude that trying to define priorities for the resumption of services is wasted effort. Many activities can take place in parallel, and priorities will change when disasters occur. A perfect example is the governmental lockdowns and health authority mandates that followed the emergence of COVID. The result is that demand for products and services changed drastically, upending previous priorities. Priorities may be defined following adaptive principles, but it is not at all a stated component of the Adaptive framework. ... For a number of reasons, I would like to see the word “plan” used a lot less within our profession. Seeing the word “strategy” in its place would be a step in the right direction. Strategy improvement is not, however, a key outcome of Adaptive BC efforts. There is some benefit to having clearly defined recovery strategies, but strategies only provide benefit to competent and empowered teams armed with the resources they need to carry out the mission. For this reason, I always emphasize the importance of focusing efforts on capabilities and consider plans and strategies as little more than supporting tools for any business continuity program. The improvement of strategies and/or plans is simply not an expected outcome of Adaptive BC work.


Exactly What To Automate With AI In 2026 For Faster Business Growth

Most founders automate the wrong things. They start with the flashy stuff, the complicated tools and fancy dashboards, while ignoring the repetitive tasks quietly draining their hours. But faster, cleaner growth comes from removing friction from the activities that actually grow your business. ... You shouldn't embark on a day's worth of admin tasks every time a new client says yes. It will only slow you down. Make it easy for them to pay, get a receipt, complete an onboarding form, and submit the required information. On your end, have the Google Drive folders, follow-up emails, and team briefings set up without you lifting a finger. Question everything you currently do manually. There is no reason it couldn't be an AI agent handling the sequence. All the tools you pay for already have integrations with each other; you're just not using them. The goal is that you could sign client after client because onboarding takes minutes, not hours. ... AI-generated content is awful when you use it wrong. But that doesn't mean you shouldn't involve AI in your content production process. Content still matters in marketing, whether long-form articles, videos, or social media visuals. You need to be part of the conversation, but only with relevant, authentic material. You cannot outproduce everyone manually, so use automations and retain your human genius for the finishing touches. ... The more your life admin runs on autopilot, the more you free up time and energy for your business.


What is AI fuzzing? And what tools, threats and challenges generative AI brings

The way traditional fuzzing works is you generate a lot of different inputs to an application in an attempt to crash it. Since every application accepts inputs in different ways, that requires a lot of manual setup. Security testers would then run these tests against their companies’ software and systems to see where they might fail. ... Today, generative artificial intelligence has the potential to automate this previously manual process, coming up with more intelligent tests, and allowing more companies to do more testing of their systems. ... But there’s a third angle involved here. What if, instead of trying to break traditional software, the target was an AI-powered system? This creates unique challenges because AI chatbots are not predictable and can respond differently to the same input at different times. ... AI fuzzing can also help speed up the discovery of vulnerabilities, Roy says. “Traditionally, testing was always a function of how many days and weeks you had to test the system, and how many testers you could throw at the testing,” he says. “With AI, we can expand the scale of the testing.” ... Another use of AI in fuzzing is that it takes more than a set of test cases to fully test an application — you also need a mechanism, a harness, to feed the test cases into the app, and in all the nooks and crannies of the application. “If the fuzzing harness does not have good coverage, then you may not uncover vulnerabilities through your fuzzing,” says Dane Sherrets, staff innovations architect for emerging technologies at HackerOne.
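The traditional loop the article describes, before any AI is involved, fits in a dozen lines. This is a deliberately naive sketch: the "application" is a stand-in function with a planted bug, and a real harness would feed inputs into an actual binary or API.

```python
import random

# Sketch of classic fuzzing: generate many random inputs, run the target,
# and record which inputs crash it. The target is an invented stand-in.

def fragile_parser(data: bytes) -> int:
    # Deliberately buggy: raises IndexError on empty input.
    return data[0]

def fuzz(target, runs: int = 1000, max_len: int = 8, seed: int = 0) -> list:
    rng = random.Random(seed)       # fixed seed keeps runs reproducible
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception:
            crashes.append(data)    # a crash marks a candidate vulnerability
    return crashes

crashes = fuzz(fragile_parser)
print(f"{len(crashes)} crashing inputs found")
```

What AI changes is the input-generation and harness-construction steps: instead of uniform random bytes, a model proposes inputs shaped like the formats the application actually accepts, reaching the "nooks and crannies" Sherrets mentions.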


CISOs flag gaps in third-party risk management

CISOs rank third-party cyber risk among their highest-impact threats. Vendor relationships touch nearly every core business function, from cloud infrastructure and software development to data processing and AI services. Each added dependency expands the attack surface and increases the number of organizations involved in protecting sensitive systems and data. ... Only a small portion of organizations report visibility across third-, fourth-, and nth-party relationships. Most operate with partial insight limited to direct vendors or a narrow segment of the extended supply chain. CISOs say limited visibility complicates incident response, risk prioritization, and compliance planning. When a breach emerges several layers removed from a known vendor, security teams may struggle to understand exposure, timelines, and downstream impact. ... CISOs report rising regulatory scrutiny tied to third-party cyber risk. Regulatory frameworks place greater expectations on organizations to demonstrate oversight across vendor ecosystems, including indirect relationships. Only a minority of organizations feel ready to meet upcoming requirements without major changes. Most report progress underway, with further work needed to align processes, tooling, and internal coordination. Third-party risk management involves legal, procurement, compliance, and executive leadership alongside security teams. ... At the same time, AI adoption accelerates within vendor risk management itself. 


Anti-fragility – what is it and why should it be the goal for your organisation?

That ability to thrive in the face of disruption must become the basis for improved resilience. Modern organisations shouldn’t strive for survival, but for continual improvement. In the cyber sphere, that is crucial. Threat actors are constantly changing tack, targeting new CVEs, and executing increasingly complicated supply chain attacks. Resilience must therefore move in tandem as an ongoing process of learning and adapting. That is the crux of anti-fragility. It defines systems that thrive and improve from stress, volatility, disorder and shocks, rather than just resisting them. If a security model is only designed to recover, it remains just as vulnerable as before. But an anti-fragile approach actively benefits from each attack, identifying weaknesses, addressing them, and adapting as needed. ... Increasingly, organisations are recognising the value in anti-fragility as a strategy and more will adopt it next year. However, getting there means going beyond regulatory compliance. Compliance lays the foundations from which successful cybersecurity can be built, yet many currently see it as the finished structure. There are several problems with that. Security legislation frequently lags behind the threat landscape, and so the gap between a new threat emerging and a new law coming in to address it can stretch over the course of years. Organisations must therefore understand that compliance doesn’t equal protection. 

Daily Tech Digest - January 14, 2026


Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France



Outsmarting Data Center Outage Risks in 2026

Even the most advanced and well-managed facilities are not immune to disruptions. Recent incidents, such as outages at AWS, Cloudflare, and Microsoft Azure, serve as reminders that no data center can guarantee 100% uptime. This highlights the critical importance of taking proactive steps to mitigate data center outage risks, regardless of how reliable your facility appears to be. ... Overheating events can cause servers to shut down, leading to outages. To prevent an outage, you must detect and address excess heat issues proactively, before they become severe enough to trigger failures. A key consideration in this regard is to monitor data center temperatures granularly – meaning that instead of just deploying sensors that track the overall temperature of the server room, you monitor the temperatures of individual racks and servers. This is important because heat can accumulate in small areas, even if it remains normal across the data center. ... But from the perspective of data center uptime, physical security, which protects against physical attacks, is arguably a more important consideration. Whereas cybersecurity attacks typically target only a handful of servers or workloads, physical attacks can easily disable an entire data center. To this end, it’s critical to invest in multi-layered physical security controls – from the data center perimeter through to locks on individual server cabinets – to protect against intrusion. ... To mitigate outage risks, data center operators must take proactive steps to prevent fires from starting in the first place. 
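The granular-monitoring advice can be made concrete with a small sketch. The thresholds, rack names, and readings below are illustrative sample data, not vendor guidance; the point is that a healthy room-level average can hide a single overheating server.

```python
# Sketch: per-rack/per-server temperature checks versus a room average.
# All numbers are invented sample data for illustration.

ROOM_AVERAGE_LIMIT_C = 27.0
SERVER_LIMIT_C = 32.0

readings = {  # rack -> server -> inlet temperature (deg C)
    "rack-01": {"srv-a": 23.1, "srv-b": 24.3},
    "rack-02": {"srv-c": 33.8, "srv-d": 25.0},   # one hidden hot spot
}

def hot_spots(readings: dict, limit: float) -> list:
    """Every individual reading above the limit, regardless of the average."""
    return [(rack, srv, t)
            for rack, servers in readings.items()
            for srv, t in servers.items() if t > limit]

all_temps = [t for servers in readings.values() for t in servers.values()]
room_avg = sum(all_temps) / len(all_temps)
print(f"room average {room_avg:.1f} C")        # looks fine on its own...
print(hot_spots(readings, SERVER_LIMIT_C))     # ...while srv-c is overheating
```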


Deploying AI agents is not your typical software launch - 7 lessons from the trenches

Across the industry, there is agreement that agents require new considerations beyond what we've become accustomed to in traditional software development. In the process, new lessons are being learned. Industry leaders shared some of their own lessons with ZDNET as they moved forward into an agentic AI future. ... Kale urges AI agent proponents to "grant autonomy in proportion to reversibility, not model confidence. Irreversible actions across multiple domains should always have human oversight, regardless of how confident the system appears." Observability is also key, said Kale. "Being able to see how a decision was reached matters as much as the decision itself." ... "AI works well when it has quality data underneath," said Oleg Danyliuk, CEO at Duanex, a marketing agency that built an agent to automate the validation of leads of visitors to its site. "In our example, in order to understand if the lead is interesting for us, we need to get as much data as we can, and the most complex is to get the social network's data, as it is mostly not accessible to scrape. That's why we had to implement several workarounds and get only the public part of the data." ... "AI agents do not succeed on model capability alone," said Martin Bufi, a principal research director at Info-Tech Research Group. His team designed and developed AI agent systems for enterprise-level functions, including financial analysis, compliance validation, and document processing. What helped these projects succeed was the employment of "AgentOps" (agent operations), which focuses on managing the entire agent lifecycle.
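Kale's principle, autonomy in proportion to reversibility rather than model confidence, translates directly into a gating function. The action names and reversibility map below are invented for illustration; note that the confidence argument is deliberately ignored for irreversible actions.

```python
# Sketch of reversibility-gated agent autonomy. Action names and the
# reversibility map are illustrative assumptions.

REVERSIBLE = {
    "draft_email": True,
    "update_crm_note": True,
    "delete_customer_record": False,
    "issue_refund": False,
}

def decide(action: str, model_confidence: float) -> str:
    """Irreversible actions always get human oversight, however confident
    the model appears; confidence is intentionally not consulted."""
    if not REVERSIBLE.get(action, False):   # unknown actions treated as irreversible
        return "escalate_to_human"
    return "auto_execute"

print(decide("draft_email", 0.51))              # reversible: agent proceeds
print(decide("issue_refund", 0.99))             # irreversible: human, despite 0.99
```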


What enterprises think about quantum computing

Quantum computers’ qubits are incredibly fragile, so even setting or reading qubits has to be incredibly precise or it messes everything up. Environmental conditions can also mess things up, because qubits can get entangled with the things around them. Qubits can even leak away in the middle of something. So, here we have a technology that most people don’t understand and that is incredibly finicky, and we’re supposed to bet the business on it? How many enterprises would? None, according to the 352 who commented on the topic. How many think their companies will use it eventually? All of them—but they don’t know where or when, as an old song goes. And by the way, quantum theory is older than that song, and we still don’t have a handle on it. ... First and foremost, this isn’t the technology for general business applications. The quantum geeks emphasize that good quantum applications are where you have some incredibly complex algorithm, some math problem, that is simply not solvable using digital computers. Some suggest that it’s best to think of a quantum computer as a kind of analog computer. ... Even where quantum computing can augment digital, you’ll have to watch ROI according to the second point. The cost of quantum computing is currently prohibitive for most applications, even the stuff it’s good for, so you need to find applications that have massive benefits, or think of some “quantum as a service” for solving an occasional complex problem.


Beyond the hype: 4 critical misconceptions derailing enterprise AI adoption

Leaders frequently assume AI adoption is purely technological when it represents a fundamental transformation that requires comprehensive change management, governance redesign and cultural evolution. The readiness illusion obscures human and organizational barriers that determine success. ... Leaders frequently assume AI can address every business challenge and guarantee immediate ROI, when empirical evidence demonstrates that AI delivers measurable value only in targeted, well-defined and precise use cases. This expectation reality gap contributes to pilot paralysis, in which companies undertake numerous AI experiments but struggle to scale any to production. ... Executives frequently claim their enterprise data is already clean or assume that collecting more data will ensure AI success — fundamentally misunderstanding that quality, stewardship and relevance matter exponentially more than raw quantity — and misunderstanding that the definition of clean data changes when AI is introduced. ... AI systems are probabilistic and require continuous lifecycle management. MIT research demonstrates manufacturing firms adopting AI frequently experience J-curve trajectories, where initial productivity declines but is then followed by longer-term gains. This is because AI deployment triggers organizational disruption requiring adjustment periods. Companies failing to anticipate this pattern abandon initiatives prematurely. The fallacy manifests in inadequate deployment management, including planning for model monitoring, retraining, governance and adaptation.


Inside the Growing Problem of Identity Sprawl

For years, identity governance relied on a set of assumptions tied closely to human behavior. Employees joined organizations, moved roles and eventually left. Even when access reviews lagged or controls were imperfect, identities persisted long enough to be corrected. That model no longer reflects reality. The difference between human and machine identities isn't just scale. "With human identities, if people are coming into your organizations as employees, you onboard them. They work, and by the time they leave, you can deprovision them," said Haider Iqbal ... "Organizations are using AI today, whether they know it or not, and most organizations don't even know that it's deployed in their environment," said Morey Haber, chief security advisor at BeyondTrust. That lack of awareness is not limited to AI. Many security teams struggle to maintain a reliable inventory of non-human identities, especially when those identities are created dynamically by automation or cloud services. Visibility gaps don't stop access from being granted, but they do prevent teams from confidently enforcing policy. "Without integration … I don't know what it's doing, and then I got to go figure it out. When you unify together, then you have all the AI visibility," Haber said, describing the operational impact of fragmented tooling. ... Modern enterprise environments rely on elevated access for cloud orchestration, application integration and automated workflows. Service accounts and application programming interfaces often require broad permissions to function reliably.
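One way teams address the lifecycle mismatch described above is to attach a time-to-live to every machine identity at creation and sweep expired ones automatically, so deprovisioning never depends on a human noticing. The in-memory registry below is an illustrative sketch, not any vendor's API.

```python
import time

# Sketch: TTL-based machine identities. Real systems would back this with
# an identity provider or secrets manager; this registry is illustrative.

class IdentityRegistry:
    def __init__(self):
        self._ids = {}  # identity name -> expiry timestamp (epoch seconds)

    def issue(self, name: str, ttl_seconds: float, now: float = None):
        now = time.time() if now is None else now
        self._ids[name] = now + ttl_seconds

    def sweep(self, now: float = None) -> list:
        """Deprovision every identity past its expiry; return what was removed."""
        now = time.time() if now is None else now
        expired = [n for n, exp in self._ids.items() if exp <= now]
        for n in expired:
            del self._ids[n]
        return expired

reg = IdentityRegistry()
reg.issue("ci-runner-42", ttl_seconds=3600, now=1000.0)
reg.issue("batch-job-7", ttl_seconds=10, now=1000.0)
print(reg.sweep(now=1015.0))   # only the short-lived identity is removed
```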


The Timeless Architecture: Enterprise Integration Patterns That Exceed Technology Trends

A strange reality is often encountered by enterprise technology leaders: everything seems to change, yet many things remain the same. New technologies emerge — from COBOL to Java to Python, from mainframes to the cloud — but the fundamental problems persist. Organizations still need to connect incompatible systems, convert data between different formats, maintain reliability when components fail, and scale to meet increasing demand. ... Synchronous request-response communication creates tight coupling and can lead to cascading failures. Asynchronous messaging has appeared across all eras — on mainframes via MQ, in SOA through ESB platforms, in cloud environments via managed messaging services such as SQS and Service Bus, and in modern event-streaming platforms like Kafka. ... A key architectural question is how to coordinate complex processes that span multiple systems. Two primary approaches exist. Orchestration relies on a centralized coordinator to control the workflow, while choreography allows systems to react to events in a decentralized manner. Both approaches existed during the mainframe era and remain relevant in microservices architectures today. Each has advantages: orchestration provides control and visibility, while choreography offers resilience and loose coupling. ... Organizations that treat security as a mere technical afterthought often accumulate significant technical debt. In contrast, enterprises that embed security patterns as foundational architectural elements are better equipped to adapt as technologies evolve.
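The orchestration-versus-choreography distinction can be shown side by side in a few lines. The order-processing steps and event names are invented for illustration; the structural difference, a central coordinator versus decentralized event reactions, is the pattern itself.

```python
# Sketch contrasting the two coordination styles named above.

# Orchestration: a central coordinator drives each step explicitly.
def orchestrate(order: dict, steps: list) -> list:
    return [step(order) for step in steps]   # coordinator controls the sequence

# Choreography: services react to events, with no central controller.
class EventBus:
    def __init__(self):
        self.handlers = {}
    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)
    def publish(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

trail = orchestrate({"id": 1}, [lambda o: "charged", lambda o: "shipped"])

bus, log = EventBus(), []
bus.subscribe("order_placed", lambda o: (log.append("charged"),
                                         bus.publish("payment_done", o)))
bus.subscribe("payment_done", lambda o: log.append("shipped"))
bus.publish("order_placed", {"id": 2})

print(trail, log)   # same outcome, opposite trade-offs: the orchestrator
                    # sees everything; the choreographed services know nothing
                    # of each other
```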


From distributed monolith to composable architecture on AWS: A modern approach to scalable software

A distributed monolith is a system composed of multiple services or components, deployed independently but tightly coupled through synchronous dependencies such as direct API calls or shared databases. Unlike a true microservices architecture, where services are autonomous and loosely coupled, distributed monoliths share many pitfalls of monoliths ... Composable architecture embraces modularity and loose coupling by treating every component as an independent building block. The focus lies in business alignment and agility rather than just code decomposition. ... Start by analyzing the existing application to find natural business or functional boundaries. Use Domain-Driven Design to define bounded contexts that encapsulate specific business capabilities. ... Refactor the code into separate repositories or modules, each representing a bounded context or microservice. This clear separation supports independent deployment pipelines and ownership. ... Replace direct code or database calls with API calls or events. For example: Use REST or GraphQL APIs via API Gateway. Emit business events via EventBridge or SNS for asynchronous processing. Use SQS for message queuing to handle transient workloads. ... Assign each microservice its own DynamoDB table or data store. Avoid cross-service database joins or queries. Adopt a single-table design in DynamoDB to optimize data retrieval patterns within each service boundary. This approach improves scalability and performance at the data layer.
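The single-table recommendation at the end deserves a concrete shape. The composite-key layout below is an illustrative assumption (the classic `PK`/`SK` convention), and the lookup function stands in for a DynamoDB `Query` on the partition key with a `begins_with` sort-key condition; no cross-service joins are needed.

```python
# Sketch: single-table design keys for one bounded context. Key shapes
# are illustrative; the list stands in for a DynamoDB table.

def order_item(customer_id: str, order_id: str, total: float) -> dict:
    return {
        "PK": f"CUSTOMER#{customer_id}",   # partition: all data for one customer
        "SK": f"ORDER#{order_id}",         # sort key: item type plus identifier
        "total": total,
    }

def orders_for_customer(table: list, customer_id: str) -> list:
    """Stands in for Query(PK = ..., begins_with(SK, 'ORDER#'))."""
    pk = f"CUSTOMER#{customer_id}"
    return [i for i in table if i["PK"] == pk and i["SK"].startswith("ORDER#")]

table = [order_item("42", "1001", 9.99),
         order_item("42", "1002", 19.50),
         order_item("7", "2001", 5.00)]
print([i["SK"] for i in orders_for_customer(table, "42")])
```

Because every access pattern the service needs maps to one key condition, each microservice keeps its reads inside its own table boundary, which is the scalability point the excerpt makes.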


Firmware scanning time, cost, and where teams run EMBA

Security teams that deal with connected devices often end up running long firmware scans overnight, checking progress in the morning, and trying to explain to colleagues why a single image consumed a workday of compute time. That routine sets the context for a new research paper that examines how the EMBA firmware analysis tool behaves when it runs in different environments. ... Firmware scans often stretch into many hours, especially for medium and large images. The researchers tracked scan durations down to the second and repeated runs to measure consistency. Repeated executions on the same platform produced nearly identical run times and findings. That behavior matters for teams that depend on repeatable results during testing, validation, or research work. It also supports the use of EMBA in environments where scans need to be rerun with the same settings over time. The data also shows that firmware size alone does not explain scan duration. Internal structure, compression, and embedded components influenced how long individual modules ran. Some smaller images triggered lengthy analysis steps, especially during deep inspection stages. ... Nuray said cloud-based EMBA deployments fit well into large scale scanning activity. He described cloud execution as a practical option for parallel analysis across many firmware images. Local systems, he added, support detailed investigation where teams need tight control over execution conditions and repeatability.


'Most Severe AI Vulnerability to Date' Hits ServiceNow

Authentication issues in ServiceNow potentially opened the door for arbitrary attackers to gain full control over the entire platform and access to the various systems connected to it. ... Costello's first major discovery was that ServiceNow shipped the same credential to every third-party service that authenticated to the Virtual Agent application programming interface (API). It was a simple, obvious string — "servicenowexternalagent" — and it allowed him to connect to ServiceNow as legitimate third-party chat apps do. To do anything of significance with the Virtual Agent, though, he had to impersonate a specific user. Costello's second discovery, then, was quite convenient. He found that as far as ServiceNow was concerned, all a user needed to prove their identity was their email address — no password, let alone multifactor authentication (MFA), was required. ... An attacker could use this information to create tickets and manage workflows, but the stakes are now higher, because ServiceNow decided to upgrade its virtual agent: it can now also engage the platform's shiny new "Now Assist" agentic AI technology. ... "It's not just a compromise of the platform and what's in the platform — there may be data from other systems being put onto that platform," he notes, adding, "If you're any reasonably-sized organization, you are absolutely going to have ServiceNow hooked up to all kinds of other systems. So with this exploit, you can also then ... pivot around to Salesforce, or jump to Microsoft, or wherever."


Cybercrime Inc.: When hackers are better organized than IT

Cybercrime has transformed from isolated incidents into an organized industry. The large groups operate according to the same principles as international corporations. They have departments, processes, management levels, and KPIs. They develop software, maintain customer databases, and evaluate their success rates. ... Cybercrime now functions like a service chain. Anyone planning an attack today can purchase all the necessary components — from initial access credentials to leak management. Access brokers sell access to corporate networks. Botnet operators provide computing power for attacks. Developers deliver turnkey exploits tailored to known vulnerabilities. Communication specialists handle contact with the victims. ... What makes cybercrime so dangerous today is not just the technology itself, but the efficiency of its use. Attackers are flexible, networked, and eager to experiment. They test, discard, and improve — in cycles that are almost unimaginable in a corporate setting. Recruitment is handled like in startups. Job offers for developers, social engineers, or language specialists circulate in darknet forums. There are performance bonuses, training, and career paths. The work methods are agile, communication is decentralized, and financial motivation is clearly defined. ... Given this development, absolute security is unattainable. The crucial factor is the ability to quickly regain operational capability after an attack. Cyber resilience describes this competence — not only to survive crises but also to learn from them.

Daily Tech Digest - January 13, 2026


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



When AI Meets DevOps To Build Self-Healing Systems

Self-healing systems do not just react to events and incidents — they analyse historic data, identify early triggers or symptoms of failures, and act. For example, if a service is known to crash when it runs out of memory, a self-healing system can observe metrics like memory consumption, predict when the service may fail with very low memory, and take action to fix the issue—like restarting the service or allocating more memory—without human intervention. In AIOps, self-healing systems are powered by data science in terms of machine learning models, real-time analytics, and automated workflows. ... Self-healing systems don’t just rely on static rules and manual checks; they utilise real-time data streams and apply pattern and anomaly detection through machine learning to ascertain the state of the environment. A self-healing system is trying to gauge its own health all the time — CPU utilisation, latency, memory, throughput, traffic, security anomalies, etc — to preemptively address an impending failure. The key component of every self-healing system is a cycle that reflects the process followed by intelligent agents: Detect → Diagnose → Act. ... The integration of artificial intelligence and DevOps signifies an important change in the way modern IT systems are built, managed, and evolved. As we have discussed here, AIOps is not just an extension of a type of automation — it is changing the way operations are modelled from reactive to intelligent, self-healing ecosystems.
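The Detect → Diagnose → Act cycle described above can be sketched with memory consumption as the single health signal, matching the article's own example. The thresholds, the trivial trend check, and the remediation names are illustrative stand-ins; a production AIOps system would use trained models and real telemetry.

```python
# Sketch of the Detect -> Diagnose -> Act cycle for a memory-bound service.
# Thresholds and remediation names are illustrative assumptions.

MEMORY_LIMIT_MB = 1024
WARN_RATIO = 0.9

def detect(memory_mb: float) -> bool:
    """Detect: flag when consumption nears the known failure point."""
    return memory_mb >= WARN_RATIO * MEMORY_LIMIT_MB

def diagnose(history: list) -> str:
    """Diagnose: steadily rising memory suggests a leak rather than a spike."""
    rising = all(b > a for a, b in zip(history, history[1:]))
    return "memory_leak_suspected" if rising else "transient_spike"

def act(diagnosis: str) -> str:
    """Act: remediate without human intervention."""
    return "restart_service" if diagnosis == "memory_leak_suspected" else "observe"

history = [700.0, 810.0, 905.0, 940.0]   # sample metric stream (MB)
if detect(history[-1]):
    print(act(diagnose(history)))        # preemptive restart before the crash
```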


Building a product roadmap: From high-level vision to concrete plans

A roadmap provides the anchor to keep everyone aligned amid constant flux. Yet many organizations still treat roadmaps as static artifacts — a one-and-done exercise intended to appease executives or investors. That’s a mistake. The most effective roadmaps are living documents that evolve with the product and market realities. ... If strategy defines direction, milestones are the engine that keeps the train moving. Too often, teams treat milestones as arbitrary checkpoints or internal deadlines. Done right, they can become powerful tools for motivation, alignment and storytelling. ... The best roadmaps aren’t written by PMs — they’re co-authored by teams. That’s why I advocate for bottom-up collaboration anchored in executive alignment. Before any roadmap offsite, sync with the CEO or leadership team. Understand what they care about and why. If they disagree with priorities, resolve those conflicts early. Then bring that context into a team workshop. During the session, identify technical leads — those trusted voices who can translate strategy into action. Encourage them to pre-think tradeoffs and dependencies before the group session. ... The perfect roadmap doesn’t exist, and that’s the point. Remember, the goal isn’t to build a flawless plan, but a resilient one. As President Dwight D. Eisenhower said, “Plans are useless, but planning is indispensable.” ... Vision without execution is hallucination. But execution without vision is chaos. The magic of product leadership lies in balancing both: crafting a roadmap that’s both inspiring and achievable.


Scattered network data impedes automation efforts

As IT organizations mature their network automation strategies, it’s becoming clear that network intent data is an essential foundation. They need reliable documentation of network inventory, IP address space, topology and connectivity, policies, and more. This requirement often kicks off a network source of truth (NSoT) project, which involves network teams discovering, validating, and consolidating disparate data in a tool that can model network intent and provide programmatic access to data for network automation tools and other systems. ... Many IT leaders do not understand the value of NSoT solutions. The data is already available, although it’s scattered and of dubious quality. Why should we spend money on a product or even extra engineers to consolidate it? “Part of the issue is that we’ve got leadership that are not infrastructure people,” said a network engineer with a global automobile manufacturer. “It’s kind of a heavy lift to get them to buy into it, because they see that applications are running fine over the network. ‘Why do I need to spend money on this?’ And we tell them that the network is running fine, but there will be failures at some point and it’s worth preventing that.” ... NSoT isn’t a magic bullet for solving the problems IT organizations have with poor network documentation and scattered operational data. Network engineering teams will need to discover, validate, reconcile, and import data from multiple repositories. This process can be challenging and time-consuming. Some of this data will be difficult to find.
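The "discover, validate, reconcile" step the article describes can be illustrated with a small sketch. This is not tied to any specific NSoT product; the record shapes and field names (`mgmt_ip`, `site`) are hypothetical, standing in for data pulled from, say, a DCIM tool and an IPAM system that disagree about the same device.

```python
# Hedged sketch of one NSoT reconciliation step: merge device records from
# two repositories and flag conflicts for an engineer to resolve.
# Field names and the "prefer DCIM" rule are illustrative assumptions.

def reconcile(dcim: dict[str, dict], ipam: dict[str, dict]) -> tuple[dict, list[str]]:
    merged, conflicts = {}, []
    for host in sorted(set(dcim) | set(ipam)):
        a, b = dcim.get(host), ipam.get(host)
        if a and b and a.get("mgmt_ip") != b.get("mgmt_ip"):
            conflicts.append(
                f"{host}: mgmt_ip mismatch {a['mgmt_ip']} vs {b['mgmt_ip']}")
        # On overlapping fields, arbitrarily prefer the DCIM record.
        merged[host] = {**(b or {}), **(a or {})}
    return merged, conflicts

dcim = {"core-sw1": {"mgmt_ip": "10.0.0.1", "site": "fra1"}}
ipam = {"core-sw1": {"mgmt_ip": "10.0.0.2"},
        "edge-rtr1": {"mgmt_ip": "10.0.1.1"}}
merged, conflicts = reconcile(dcim, ipam)
print(conflicts)  # ['core-sw1: mgmt_ip mismatch 10.0.0.1 vs 10.0.0.2']
```

Even this toy version shows why the process is time-consuming: every conflict the merge surfaces still needs a human (or a documented precedence rule) to decide which repository is actually telling the truth.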


What insurers expect from cyber risk in 2026

Cyber insurers are beginning to use LLMs to translate internet-scale data into structured inputs for underwriting and portfolio analysis. These applications target specific pain points such as data gaps and processing delays. Broader change across pricing or risk selection remains gradual. ... AI-supported workflows begin to reduce repetitive tasks across those stages. Automation supports data entry, document review, and routine verification. Human oversight remains central for judgment-based decisions. The research links this shift to measurable operational effects. Fewer manual touches per claim reduce processing time and error rates. Claims teams gain capacity without proportional increases in staffing. ... Age verification and online safety legislation introduce unintended cyber risk. Requirements that reduce online anonymity create high-value identity datasets that attract attackers. The research highlights rising exposure to identity-based coercion, insider compromise, and extortion. Once personal identity data is leaked, attackers gain leverage that can translate into access to corporate systems. This dynamic supports long-term campaigns by organized groups and state-aligned actors. ... Data orchestration becomes a core capability. Insurers and reinsurers integrate signals including security posture, threat activity, and loss experience into shared models. Consistent views across teams and regions support portfolio governance. This shift places emphasis on actionability. Data value depends on timing and relevance within workflows rather than volume alone.


Human + AI Will Define the Future of Work by 2027: Nasscom-Indeed Report

This emerging model of Humans + AI working together is reported as the next phase of transformation, where success depends on how effectively AI will augment human capabilities, empower employees, and align with organizational purpose. The report highlights that the most effective human–AI partnerships are emerging across higher-order activities such as scope definition, system architecture, and data model design. At the same time, more routine and repeatable tasks, including boilerplate code generation and unit test creation, are expected to be increasingly automated by AI over the next two to three years. ... To stay relevant in a Human + AI workplace, the report emphasizes that individuals should build capability, adaptability, and continuous learning. This includes experience with using AI tools (prompting, critical review of output, combining AI speed with human judgment), moving up the value chain (e.g., developers from coding to architecture thinking), building multidisciplinary skills (tech + domain + professional skills), and focusing on outcomes over credentials by creating repositories of work samples showing measurable impact. ... Organizations have already started taking measures to address these challenges. Seven in ten HR leaders are focusing on upskilling, and more than half on modernizing systems. With respect to AI adoption, 79% prioritize internal reskilling as a dominant strategy.


From vulnerability whack-a-mole to strategic risk operations

“Software bills of materials are just an ingredients list,” he notes. “That’s helpful because the idea is that through transparency we will have a shared understanding. The problem is that they don’t deliver a shared understanding because the expectation of anyone in security who reads the SBOM is the first job they’ll do is run those versions against vulnerability databases.” This creates a predictable problem: security teams receive SBOMs, scan them for vulnerabilities, and generate alerts for every CVE match, regardless of whether those vulnerabilities actually affect the product. ... To make SBOMs truly useful, Kreilein introduces VEX (Vulnerability Exploitability Exchange), an open standards framework that addresses the context problem. VEX provides four status messages: affected, not affected, under investigation, and fixed. “What we want to start doing is using a project called VEX that gives four possible status messages,” Kreilein explains. ... Developers aren’t refusing to patch because they don’t care about security. They’re worried that upgrading a component will break the application. “If my application is brittle and can’t take change, I cannot upgrade to the non-vulnerable version,” Kreilein explains. “If I don’t have effective test automation and integration and unit testing, I can’t guarantee that this upgrade won’t break the application.” This reframing shifts the security conversation from compliance and mandates to engineering fundamentals. Better test coverage, better reference architectures, and better secure-by-design practices become security initiatives.
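The four VEX status messages Kreilein describes can be shown doing real work: pruning a raw list of SBOM-derived CVE matches down to the findings a supplier has not ruled out. The record shapes below are deliberately simplified (real tooling would parse OpenVEX or CSAF documents); the components and CVE IDs are made-up examples.

```python
# Sketch of using VEX status messages to triage raw SBOM scan results.
# Record shapes are simplified illustrations, not a real VEX parser.

# Of VEX's four statuses (affected, not_affected, under_investigation,
# fixed), only these two still demand attention from the consumer.
ACTIONABLE = {"affected", "under_investigation"}

def triage(cve_matches: list[dict], vex: list[dict]) -> list[dict]:
    """Keep only findings the supplier has not ruled out.

    With no VEX statement at all, we conservatively assume "affected"."""
    status = {(v["cve"], v["component"]): v["status"] for v in vex}
    return [m for m in cve_matches
            if status.get((m["cve"], m["component"]), "affected") in ACTIONABLE]

matches = [
    {"cve": "CVE-2024-0001", "component": "libfoo@1.2"},
    {"cve": "CVE-2024-0002", "component": "libbar@3.1"},
]
vex = [{"cve": "CVE-2024-0001", "component": "libfoo@1.2",
        "status": "not_affected"}]  # e.g. vulnerable code path never executed
print(triage(matches, vex))  # only the libbar finding remains
```

This is exactly the context problem the article raises: without the VEX layer, both findings would generate alerts; with it, the supplier's "not affected" assertion suppresses the noise while anything unaddressed stays on the worklist.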


AI backlash forces a reality check: humans are as important as ever

Companies are now moving beyond the hype and waking up to the consequences of AI slop, underperforming tools, fragmented systems, and wasted budgets, said Brooke Johnson, chief legal officer at Ivanti. “The early rush to adopt AI prioritized speed over strategy, leaving many organizations with little to show for their investments,” Johnson said. Organizations now need to balance AI, workforce empowerment and cybersecurity at the same time they’re still formulating strategies. That’s where people come in. ... AI is becoming less a tech problem and more of an adoption hurdle, Depa said. “What we’re seeing now more and more is less of a technology challenge, more of a change management, people, and process challenge — and that’s going to continue as those technologies continue to evolve,” he said. DXC Technology is taking a similar approach, designing tools where human insight, judgment, and collaboration create value that AI can’t deliver alone, said Dan Gray, vice president of global technical customer operations at the company. ... Companies might have to accept underutilizing some of the AI gains in the near term. AI could help workers complete their tasks in half the time and enjoy a leisurely pace. Alternatively, employees might burn out quickly if they are simply given more work. “If you try to lay them off, you don’t have a good workforce left. If you let them be, why are you paying them? So that’s a paradox,” Seth said.


Physical AI is the next frontier - and it's already all around you

Physical AI can be generally defined as AI implemented in hardware that can perceive the world around it and then reason to perform or orchestrate actions. Popular examples include autonomous vehicles and robots -- but robots that utilize AI to perform tasks have existed for decades. So what's the difference? ... Saxena adds that while humanoid robots will be useful in instances where humans don't want to perform a task, either because it is too tedious or too risky, they will not replace humans. That's where AI wearables, such as smart glasses, play an important role, as they can augment human capabilities. But beyond that, AI wearables might actually be able to feed back into other physical AI devices, such as robots, by providing a high-quality dataset based on real-life perspectives and examples. "Why are LLMs so great? Because there is a ton of data on the internet, for a lot of the contextual information and whatnot, but physical data does not exist," said Saxena. ... Given the privacy concerns that may come from having your everyday data used to train robots, Saxena highlighted that the data from your wearables should always be kept at the highest level of privacy. As a result, the data -- which should already be anonymized by the wearable company -- could be very helpful in training robots. That robot can then create more data, resulting in a healthy ecosystem. "This sharing of context, this sharing of AI between that robot and the wearable AI devices that you have around you is, I think, the benefit that you are going to be able to accrue," added Asghar.


Unlocking the Power of Geospatial Artificial Intelligence (GeoAI)

GeoAI is more than sophisticated map analytics. It is a strategic technology that blends AI with the physical world, allowing tech experts to see, understand, and act on patterns that were previously invisible. From planning sustainable cities to protecting wildlife, it’s helping experts tackle significant challenges with precision and speed. As the world generates more location-based data every day, GeoAI is becoming a must-have tool. It’s not just tech – it’s a way to make the world work better. ... Put simply: machine learning spots trends, computer vision interprets images, GIS organizes it all, and knowledge graphs tie it together. The result? GeoAI can take a chaotic pile of data and deliver clear answers, like telling a city where to build a new park or warning about a wildfire risk. It’s a powerhouse that’s making location-based decisions faster and smarter. In all, GeoAI is transforming the speed at which we extract meaning from complex datasets, thereby enabling us to address the Earth’s most pressing challenges. ... Though powerful, GeoAI is not without challenges. Effective implementation requires careful attention to data privacy, technical infrastructure, and organizational change management. ... Leaders who take GeoAI seriously stand to gain more than just incremental improvements. With the right systems in place, they can respond faster, make smarter decisions, and get better results from every field team in the network.


For application security: SCA, SAST, DAST and MAST. What next?

If you think SAST and SCA are enough, you’re already behind. The future of app security is posture, provenance and proof, not alerts. ... Posture is the ‘what.’ Provenance is the ‘how.’ The SLSA framework gives us a shared vocabulary and verifiable controls to prove that artifacts were built by hardened, tamper‑resistant pipelines with signed attestations that downstream consumers can trust. When I insist on SLSA Level 2 for most services and Level 3 for critical paths, I am not chasing compliance theater; I am buying integrity that survives audit and incident. Proof is where SBOMs finally grow up. Binding SBOM generation to the build that emits the deployable bits, signing them and validating at deploy time moves SBOMs from “ingredient lists” to enforceable controls. The CNCF TAG‑Security best practices v2 paper is my practical map: personas, VEX for exploitability, cryptographic verification to ensure tests actually ran, and prescriptive guidance for cloud‑native factories. ... Among the nexts, AI is the most mercurial. NIST’s final 2025 guidance on adversarial ML split threats across PredAI and GenAI and called out prompt injection, in direct and indirect form, as the dominant exploit in agentic systems where trusted instructions commingle with untrusted data. The U.S. AI Safety Institute published work on agent hijacking evaluations, which I treat as required red‑team reading for anyone delegating actions to tools.
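The "sign at build, validate at deploy" control described above can be sketched as a toy. In practice this is done with Sigstore/cosign attestations and real signing identities; the HMAC and shared key below are stand-ins chosen only to keep the example self-contained, but the shape of the control is the same: the deploy gate recomputes a digest of the SBOM it received and refuses to ship if it does not match what the build pipeline signed.

```python
# Toy sketch of deploy-time SBOM verification. The HMAC over a shared key is
# a stand-in for real signing machinery (e.g. Sigstore/cosign attestations).

import hashlib
import hmac

BUILD_KEY = b"shared-pipeline-key"  # illustrative; not how real signing works

def sign_sbom(sbom_bytes: bytes) -> str:
    """Build step: emit an attestation over the exact SBOM bytes produced."""
    digest = hashlib.sha256(sbom_bytes).digest()
    return hmac.new(BUILD_KEY, digest, hashlib.sha256).hexdigest()

def verify_at_deploy(sbom_bytes: bytes, attestation: str) -> bool:
    """Deploy gate: admit the artifact only if this SBOM is the one signed."""
    expected = sign_sbom(sbom_bytes)
    return hmac.compare_digest(expected, attestation)

sbom = b'{"components": [{"name": "libfoo", "version": "1.2"}]}'
attestation = sign_sbom(sbom)

print(verify_at_deploy(sbom, attestation))         # True: untampered SBOM
print(verify_at_deploy(sbom + b" ", attestation))  # False: any change fails
```

This is what moves the SBOM from an ingredients list to an enforceable control: the document is no longer merely informative, because a mismatch between the bits and the attestation blocks the deploy.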