
Daily Tech Digest - February 09, 2026


Quote for the day:

"Leaders who make their teams successful are followed even through the hardest journeys." -- Gordon Tredgold



Agentic AI upends SaaS models & sparks valuation shock

The Software-as-a-Service market is moving away from seat-based licensing as agentic artificial intelligence tools change how companies build and purchase business software, according to analysts and industry executives. Investors have already reacted to the shift. A broad sell-off in software stocks followed recent advances in agentic technology, raising questions regarding the durability of current business models. Concerns persist that traditional revenue streams may be at risk as autonomous systems perform increasing volumes of work with fewer human users. ... Not every vendor is well positioned for the transition. Industry observers are using the term "zombie SaaS" for companies that raised large rounds at peak valuations from 2020 to 2022 and now trade or transact below the total capital invested. These businesses often face a mismatch between historical expectations and current demand. They can struggle to raise new funding and may lack the growth rate needed to justify earlier valuations. Meanwhile, newer entrants can build competing products faster and at lower cost, increasing pressure on incumbents with larger cost structures. ... AI is also reshaping procurement decisions. Some companies are shifting toward internal tools as non-technical teams gain access to systems that generate software from natural-language prompts and templates. Industry discussion points to Ramp building internal revenue tools and AI agents in place of third-party software. 


Software developers: Prime cyber targets and a rising risk vector for CISOs

Attackers are increasingly targeting the tools, access, and trusted channels used by software developers rather than simply exploiting application bugs. The threats blend technical compromise — malicious packages, development pipeline abuse, etc. — with social engineering and AI-driven attacks. ... The tokens, API keys, cloud credentials, and CI/CD secrets held by software developers unlock far broader access than a typical office user account, making software engineers a prime target for cybercriminals. “They [developers] hold the keys to the kingdom, privileged access to source code and cloud infrastructure, making them a high-value target,” Wood adds. ... Attackers aren’t just looking for flaws in code — they’re looking for access to software development environments. Common security shortcomings, including overprivileged service accounts, long-lived tokens, and misconfigured pipelines, offer a ready means for illicit entry into sensitive software development environments. “Improperly stored access credentials are low-hanging fruit for even the most amateur of threat actors,” says Crystal Morin, senior cybersecurity strategist at cloud-native security and observability vendor Sysdig. ... AI-assisted development and “vibe coding” are increasing exposure to risk, especially because such code is often generated quickly without adequate testing, documentation, or traceability.


How network modernization enables AI success and quantum readiness

In essence, inadequate networks limit the ability of AI “blood” to nourish the body of an organization — weakening it and stifling its growth. Many enterprise networks developed incrementally, with successive layers of technology implemented over time. Mergers, divestitures, and one-off projects to solve immediate problems have left organizations with a patchwork of architectures, vendors and configurations. ... As AI traffic increases across data centers, clouds, and the edge, blind spots multiply. Once-manageable technical debt becomes an active security liability, expanding the attack surface and undermining Zero Trust initiatives. ... Quantum computers could break today’s encryption standards, exposing sensitive financial, healthcare and operational data. Worse, attackers are already engaging in “harvest now, decrypt later” strategies — stealing encrypted data today to exploit tomorrow. The relevance to networking and AI issues is straightforward. Preparing for the challenges (and opportunities) of quantum computing will be an incremental, multi-year project that needs to start now. Enterprise IT infrastructures must be able to adapt and scale to quantum computing developments as they evolve. Companies will need to be able to “skate to where the puck will be,” and then skate again! While becoming quantum-safe may seem daunting, organizations don’t have to do it all at once.


Rethinking next-generation OT SOC as IT/OT convergence reshapes industrial cyber defense

Clear gains from next-generation OT SOC innovation emerge across real-world applications, such as OT-aware detection, AI-assisted triage, and distributed SOC models designed to reflect the day-to-day realities of operating critical infrastructure. ... The line between what is OT and what is IT is blurred. Each customer, scenario, and request for proposal shows a unique fingerprint of architectural, process, and industry-related concerns. Our OT SOC development program integrated industrial network sensors with the enterprise SOC, enabling holistic monitoring of plants and offices together. ... Risk is no longer discussed purely from a cyber perspective, but in terms of operational impact, safety, and reliability, which is more consequence-driven. When convergence is implemented securely, alerts are no longer investigated in isolation; identity, remote access activity, asset criticality, and process context are correlated together. ... From a practical standpoint, Mashirova said that automation delivers the most operational value in enrichment, correlation, prioritization, and workflow orchestration. “Automating asset context, vulnerability risk prioritization with remediation recommendations, alert deduplication, and escalation logic dramatically improves analyst efficiency without directly impacting the industrial process. AI agents can act as SOC assistants by correlating large volumes of data and providing decision support to analysts.”


Shai-hulud: The Hidden Cost of Supply Chain Attacks

In recent months, a somewhat novel supply chain threat has emerged against the open source community; attackers are unleashing self-propagating malware on component libraries and targeting downstream victims with infostealers. The most famous recent example of this is Shai-hulud, a worm targeting NPM projects that takes hold when a victim downloads a poisoned component. Once on a victim machine, the malware uses its access to infect components that the victim maintains before self-publishing poisoned versions. ... Another consideration is long-term, lasting damage from these incidents. Sygnia's Kidron explains that the impact of a compromise like credential theft happens on a wider time scale. If the issue has not been adequately contained, attackers can sell access or use it for follow-on activity later. "In practice, damage unfolds across time frames. Immediately — within hours to the first few days after exposure, the primary risk is credential exposure: these campaigns are designed to execute inside developer and CI/CD paths where tokens and secrets are accessible," he says. "When those secrets leak, the downstream harm is not abstract — the attacker can use them (or sell them) to authenticate as the victim and access private repositories, pull data, tamper with code, trigger builds, publish packages, access cloud resources, or perform actions “on behalf” of legitimate identities."
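
One defensive step this kind of incident suggests for downstream teams is auditing lockfiles against advisories. The sketch below is a minimal illustration, assuming a locally maintained advisory map of compromised npm package versions and an npm v2/v3 package-lock.json; the package names in it are hypothetical placeholders, not the actual Shai-hulud indicator list.

```python
"""Minimal sketch: flag known-compromised npm packages pinned in a lockfile.

Assumptions: a local advisory map (name -> set of bad versions) and an
npm v2/v3 package-lock.json whose "packages" map holds resolved versions.
"""
import json
from pathlib import Path

# Hypothetical advisory data; real lists come from your registry or vendor feeds.
COMPROMISED = {
    "example-widget": {"1.4.2", "1.4.3"},
    "another-lib": {"0.9.1"},
}

def find_compromised(lockfile: Path) -> list[tuple[str, str]]:
    lock = json.loads(lockfile.read_text())
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # "node_modules/foo" -> "foo"; the root entry "" falls back to the project name
        name = path.split("node_modules/")[-1] or lock.get("name", "")
        version = meta.get("version")
        if version and version in COMPROMISED.get(name, set()):
            hits.append((name, version))
    return hits

if __name__ == "__main__":
    for name, version in find_compromised(Path("package-lock.json")):
        print(f"compromised dependency pinned: {name}@{version}")
```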


United Airlines CISO on building resilience when disruption is inevitable

Modernization in aviation is less about speed and more about precision. Every change must measurably improve safety, reliability, or resilience. Cybersecurity must respect that bar. ... Cyber risk is assessed in terms of how it affects the ability to move aircraft, crew, and passengers safely and on time. It also means cybersecurity leaders must understand the business end-to-end. You cannot protect an airline effectively without understanding flight operations, maintenance, weather, crew scheduling, and regulatory constraints. Cybersecurity becomes an enabler of safe operations, not a separate technical function. ... Risk assessment goes beyond vendor questionnaires. It includes scenario analysis, operational impact modeling, and close coordination with partners, regulators, and industry groups. Information sharing is essential, because early awareness often matters more than perfect control. Ultimately, we assume some disruptions will originate externally. The goal is to detect them quickly, understand their operational impact, and adapt without compromising safety. Resilience and coordination are just as important as contractual controls. ... Speed matters, but clarity matters more. We also plan extensively in advance. You cannot improvise under pressure when aircraft and passengers are involved. Clear playbooks, rehearsals, and defined decision authorities allow teams to act decisively while staying aligned with safety principles.


Securing IoT devices: why passwords are not enough

Traditional passwords alone are often not secure enough for connected devices and systems. Many consumers use the default password that comes with the system rather than changing it to a more secure one. When people update their passwords, they often choose weak ones that are easy for cyberattackers to crack. The volume of IoT devices makes manual password management inefficient and risky. A primary threat is the lack of encryption as data travels between networks. When multiple devices are connected, encryption is key to protecting information. Another threat is poor network segmentation, which leaves misconfigured or less secure connected devices on the same network as more sensitive systems. ... Adopting a zero-trust methodology is a better cybersecurity measure than traditional password-based systems. IoT devices can still require a password, but the system may ask for additional information to verify the user’s authorization. Users can set up passkeys, security questions or other methods as the next step after entering a password. ... AI can be used both offensively and defensively in cybersecurity for IoT devices. Hackers use AI to launch advanced attacks, but users can also implement AI to detect suspicious behaviour and address threats. Consumers can purchase AI security systems to safeguard their IoT devices beyond passwords, but they must remain vigilant and continuously monitor their usage to prevent cyberattackers from infiltrating them.


Creating a Top-Down and Bottom-Up Grounded Capability Model

A grounded capability model is a complete and stable set of these capabilities, structured in levels from level 1 to sometimes level 4 so senior leaders, middle managers, architects, and digital transformation managers can see the business as an integrated whole. The “grounded” part matters: it means the model reflects strategy and business design, not the quirks of today’s org chart or application portfolio. ... Business Architecture Info emphasizes that a grounded capability model is best built by combining top-down strategic direction with bottom-up operational reality. The top-down view ensures the model is aligned to the business plan and strategic goals, while the bottom-up view ensures it is validated against real value streams, objectives, and subject-matter expertise. ... Top-down capability modeling needs the right stakeholders and the right strategic inputs. On the stakeholder side, senior leaders are essential because they own direction, priorities, and the definition of “what good looks like.” The EA team, enterprise architects and business architects, translates that direction into a structured capability view. ... Bottom-up capability modeling grounds the model in delivery and operational truth. It relies heavily on middle managers, subject matter experts, and business experts. In other words, people who know how value is produced, where friction exists, and what “enablement” really takes. The EA team remains a key facilitator and modeler, but validation and discovery come from the business.


Secure The Path, Not The Chokepoint

The argument here is simple: baseline security policy should be enforced along the path where packets already travel. Programmable data planes, particularly P4 on programmable switching targets, make it possible to enforce meaningful guardrails at line rate, close to the workload, without redesigning the network into a set of security detours. ... When enforcement is concentrated on a few devices, the architecture depends on traffic detours or assumptions about where traffic flows. That creates three practical problems: First, important east-west traffic may never traverse an inspection point. Second, response actions often depend on where a firewall sits rather than where the attacker is operating. Third, changes become slow and risky because every new workload pattern becomes another exception. ... A fabric-first model succeeds when it focuses on controls that are simple, universal, and high-impact. ... A fabric-first approach does not remove the need for firewalls. Deep application inspection, proxy functions, content controls, and specialized policy workflows still make sense where rich context exists and where inspection overhead is acceptable. The shift is about default placement. Baseline guardrails and rapid containment belong in the fabric. ... A small set of metrics usually tells the story clearly: time from detection to enforced containment, reduction in unintended internal connection attempts, and time to produce a credible incident narrative during review.
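
Real fabric enforcement would be compiled into the switch data plane (for example in P4), but the policy logic itself is simple enough to sketch. The Python below is only a conceptual stand-in: a small allow-list evaluated per flow, default deny, and a counter feeding the "unintended internal connection attempts" metric mentioned above; zones, ports, and entries are illustrative assumptions.

```python
"""Conceptual sketch of a fabric guardrail as a match-action table.

Real enforcement would live in the switch data plane (e.g. P4); this only
models the policy logic: a small allow-list per flow, default deny, plus a
counter that feeds the containment metrics discussed above.
"""
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Flow:
    src_zone: str   # e.g. "app-tier"
    dst_zone: str   # e.g. "db-tier"
    dst_port: int
    proto: str      # "tcp" or "udp"

# Hypothetical baseline policy: which east-west paths are ever allowed.
ALLOWED = {
    ("app-tier", "db-tier", 5432, "tcp"),
    ("app-tier", "cache-tier", 6379, "tcp"),
}

denied = Counter()

def permit(flow: Flow) -> bool:
    key = (flow.src_zone, flow.dst_zone, flow.dst_port, flow.proto)
    if key in ALLOWED:
        return True
    denied[key] += 1          # counts unintended internal connection attempts
    return False

if __name__ == "__main__":
    print(permit(Flow("app-tier", "db-tier", 5432, "tcp")))   # True
    print(permit(Flow("web-tier", "db-tier", 5432, "tcp")))   # False, counted
    print(dict(denied))
```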


Banks Face Dual Authentication Crisis From AI Agents

Traditional authentication relies upon point-in-time verification like MFA and a password, after which access is granted. Over the years, banks have analyzed human spending patterns. But AI agents purchasing around the clock and seeking optimal deals have rendered that model obsolete. "With autonomous agents transacting on behalf of users, the distinction between legitimate and fraudulent activity is blurred, and a single compromised identity could trigger automated losses at scale," said Ajay Patel, head of agentic commerce at Prove. ... But before banks can address the authentication problem, they need to fix their data infrastructure, said Carey Ransom, managing director at BankTech Ventures. AI agents need clean, contextually appropriate data, but banks don't yet have standardized ways to provide it. So, when mistakes occur, who is at fault, and who is liable for making things right? When AI agents can spawn sub-agents that delegate tasks to other AI systems throughout a transaction chain, the liability question gets murky. ... Layered authentication that balances security with speed will reduce agentic AI risks, Ransom said. "Variant transaction requests might require a new layer or type of authentication to ensure it is legitimate and reflecting the desired activity," he said. "Checks and balances will be a prevailing approach to protect both sides, while still enabling the autonomy and efficiency the market desires."
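
A minimal sketch of the checks-and-balances idea Ransom describes might look like the following: routine agent transactions pass on the existing credential, while variant ones trigger a step-up verification. The profile fields and thresholds are assumptions for illustration, not any bank's actual rules.

```python
"""Sketch: step-up authentication for agent-initiated transactions.

Assumed inputs: a per-user profile of typical spend and known merchants.
Variant requests get an extra verification layer instead of outright denial.
"""
from dataclasses import dataclass, field

@dataclass
class Profile:
    typical_max_amount: float
    known_merchants: set[str] = field(default_factory=set)

def decide(profile: Profile, merchant: str, amount: float) -> str:
    """Return 'allow', 'step_up', or 'deny' for an agent transaction."""
    if amount <= profile.typical_max_amount and merchant in profile.known_merchants:
        return "allow"                      # routine: point-in-time credential suffices
    if amount <= profile.typical_max_amount * 5:
        return "step_up"                    # variant: require fresh user confirmation
    return "deny"                           # far outside profile: block and alert

if __name__ == "__main__":
    p = Profile(typical_max_amount=200.0, known_merchants={"grocer", "airline"})
    print(decide(p, "grocer", 80.0))        # allow
    print(decide(p, "new-shop", 150.0))     # step_up
    print(decide(p, "new-shop", 5000.0))    # deny
```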

Daily Tech Digest - January 02, 2026


Quote for the day:

“If your ship doesn’t come in, swim out to meet it!” -- Jonathan Winters



Delivering resilience and continuity for AI

Think of it as technical debt, suggests IDC group VP Daniel Saroff, as most enterprises underestimate the strain AI puts on connectivity and compute. Siloed infrastructure won’t deliver what AI needs, and CIOs need to think about these and other things in a more integrated way to make AI successful. “You have to look at your GPU infrastructure, bandwidth, network availability, and connectivity between respective applications,” he says. “If you have environments not set up for highly transactional, GPU-intensive environments, you’re going to have a problem,” Saroff warns. “And having very fragmented infrastructure means you need to pull data and integrate multiple different systems, especially when you start to look at agentic AI.” ... Making AI scale will almost certainly mean taking a hard look at your data architecture. Every database adds features for AI. And lakehouses promise you can bring operational data and analytics together without affecting the SLAs of production workloads. Or you can go further with data platforms like Azure Fabric that bring in streaming and time series data to use for AI applications. If you’ve already tried different approaches, you likely need to rearchitect your data layer to get away from the operational sprawl of fragmented microservices, where every data hand-off between separate vector stores, graph databases, and document silos introduces latency and governance gaps. Too many points of failure make it hard to deliver high availability guarantees.


Technological Disruption: Strategic Inflection Points From 2026 - 2036

From a defensive standpoint, AI-driven security solutions will provide continuous surveillance, automated remediation, and predictive threat modeling at a scale unattainable by human analysts. Simultaneously, attackers will utilize AI to create polymorphic malware, execute influence operations, and exploit holes at machine speed. The outcome will be an environment where cyber war progresses more rapidly than conventional command-and-control systems can regulate. As we approach 2036, the primary concern will be AI governance rather than AI capacity. ... From 2026 to 2030, enterprises will increasingly recognize that cryptographic agility is vital. The move to post-quantum cryptography standards means that old systems, especially those in critical infrastructure, financial services, and government networks, need to be fully inventoried, evaluated, and upgraded. By the early 2030s, quantum innovation will transcend cryptography, impacting optimization, materials science, logistics, and national security applications. ... In the forthcoming decade, supply chain security will transition from compliance-based evaluations to ongoing risk intelligence. Transparency methods, including software bills of materials, hardware traceability, and real-time vendor risk assessment, will evolve into standard expectations rather than just best practices. Supply chain resilience will strategically impact national competitiveness.


True agentic AI is years away - here's why and how we get there

We're not there yet. We're not even close. Today's bots are limited to chat interactions and often fail outside that narrow operating context. For example, what Microsoft calls an "agent" in the Microsoft 365 productivity suite, probably the best-known instance of an agent, is simply a way to automatically generate a Word document. Market data shows that agents haven't taken off. ... Simple automations can certainly bring about benefits, such as assisting a call center operator or rapidly handling numerous invoices. However, a growing body of scholarly and technical reports has highlighted the limitations of today's agents, which have failed to advance beyond these basic automations. ... Before agents can live up to the "fully autonomous code" hype of Microsoft and others, they must overcome two primary technological shortcomings. Ongoing research across the industry is focused on these two challenges: Developing a reinforcement learning approach to designing agents; and Re-engineering AI's use of memory -- not just memory chips such as DRAM, but the whole phenomenon of storing and retrieving information. Reinforcement learning, which has been around for decades, has demonstrated striking results in enabling AI to carry out tasks over a very long time horizon. ... On the horizon looms a significant shift in reinforcement learning itself, which could be a boon or further complicate matters. Can AI do a better job of designing reinforcement learning than humans?


Why Developer Experience Matters More Than Ever in Banking

Effective AI assistance, in fact, meets developers where they are—or where they work. Some prefer a command-line interface, others live inside an IDE, and still others rely heavily on sample code and language-specific SDKs. A strong DX strategy supports all of these modes, using AI to surface accurate, context-aware guidance without forcing developers into a single workflow. When AI reinforces clarity, it becomes a force multiplier. ... As AI-assisted development becomes more common, the quality of documentation takes on new importance. Because it is no longer read only by humans, documentation increasingly serves as the knowledge base that enables AI agents that help developers search, generate, and validate code. When documentation is vague or poorly structured, it introduces confusion, often in ways that actively undermine developer confidence. ... In highly regulated environments, developers want, and expect, guardrails—but not at the expense of speed and consistency. One of the most effective ways to balance those demands is by codifying business rules and compliance requirements directly into the platform, rather than relying on manual, human-driven review at key milestones. Talluri describes this approach as “policy as code”: embedding rules, validations, and regional requirements into the system so developers receive immediate, actionable prompts and feedback as they work. ... The business case for exceptional developer experience rests on a simple truth: trust drives productivity.
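
The "policy as code" approach Talluri describes can be made concrete with a small sketch: rules encoded as functions that run automatically and return actionable messages, so developers get feedback as they work rather than at a milestone review. The specific rules and manifest fields below are assumptions, not the platform's real policies.

```python
"""Sketch of policy-as-code checks run automatically in a delivery pipeline.

Each rule returns None when satisfied or an actionable message when not,
so developers get immediate feedback instead of a late-stage manual review.
"""
from typing import Callable, Optional

Rule = Callable[[dict], Optional[str]]

def requires_encryption_at_rest(svc: dict) -> Optional[str]:
    if not svc.get("storage", {}).get("encrypted", False):
        return "storage must enable encryption at rest"
    return None

def data_stays_in_region(svc: dict) -> Optional[str]:
    allowed = {"eu-west-1", "eu-central-1"}          # illustrative regional requirement
    bad = [r for r in svc.get("regions", []) if r not in allowed]
    return f"data residency violation: {bad}" if bad else None

RULES: list[Rule] = [requires_encryption_at_rest, data_stays_in_region]

def evaluate(service_manifest: dict) -> list[str]:
    # Collect every failing rule's message for the developer.
    return [msg for rule in RULES if (msg := rule(service_manifest))]

if __name__ == "__main__":
    manifest = {"storage": {"encrypted": False}, "regions": ["eu-west-1", "us-east-1"]}
    for finding in evaluate(manifest):
        print("policy violation:", finding)
```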


AI-powered testing for strategic leadership

“Nearly half of teams still release untested code due to time pressure, creating fragile systems and widening risk exposure. Legacy architectures further compound this, making modernisation difficult and slowing down automated validation,” he said. AI-generated code also introduces new vulnerabilities. Without strong validation pipelines, testing quickly becomes the bottleneck of transformation. Developers often view testing as tedious, and with modern codebases spanning multiple interconnected applications, the challenge intensifies. At the same time, misalignment between leadership and engineering teams leads to unclear priorities and rushed decisions. While the pace of development already feels fast, it is only set to accelerate. To overcome barriers, CIOs can adopt model-based, codeless AI testing that reduces dependence on fragile code-level automation and cuts ongoing maintenance. This approach can reduce manual effort by 80%–90% and enables non-technical experts to participate through natural-language and visual test generation. For Wong, strong governance is vital. This entails domain-trained, testing-specific AI that avoids hallucinations and supports safe, transparent validation. Instead of becoming autonomous, AI can act as a co-pilot working alongside developers. “By aligning teams, modernising toolchains, and embedding guardrails, CIOs can shift from reactive firefighting to proactive, AI-driven quality engineering,” he said.


The Architect’s Dilemma: Choose a Proven Path or Pave Your Own Way?

Platforms and frameworks are like paved roads that may help a team progress faster on their journey, with well-defined "exit ramps" or extension points where a team can extend the platform to meet their needs, but they come with side-effects that may make them undesirable. Teams need to decide when, if ever, they need to leave the path others have paved and find their own way by developing extensions to the platform or framework, or by developing new platforms or frameworks. The challenge teams face when they use platforms or frameworks as the basis for their software architectures is to choose the "paved road" (platform or framework) that gets them closest to their desired destination with minimal diversions or new construction. ... Many platform decisions are innocuous and can be accepted and ignored when they don’t affect the QARs that the team needs to meet. The only way to know whether the decisions are harmful is through experiments that expose when the platform is failing to meet the goals of the system. Since the decisions made by the platform developers are often undocumented and/or unknowable, it’s imperative that teams be able to test their system (including the platforms on which they are built) to make sure that their architectural goals (i.e. QARs) are being met. ... Using the "paved road" metaphor, the LLM provides a proven path but it does not take the team where they need to go. When this happens, they have no choice but to either start extending the platform (if they can), finding a different platform, or building their own platform.
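
One way to run the experiments described above is an automated fitness test that checks a quality attribute requirement (QAR) against the platform as actually deployed. The sketch below assumes a hypothetical local endpoint and an illustrative p95 latency budget; both are placeholders, not values from the article.

```python
"""Sketch: an architectural fitness test that checks a QAR against the
platform as deployed. Endpoint, sample size, and budget are illustrative
assumptions, not figures from the article.
"""
import time
import urllib.request

QAR_P95_BUDGET_MS = 300                       # hypothetical latency QAR
ENDPOINT = "http://localhost:8080/health"     # placeholder service on the chosen platform

def measure_latencies(n: int = 20) -> list[float]:
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(ENDPOINT, timeout=2) as resp:
            resp.read()
        samples.append((time.perf_counter() - start) * 1000)
    return samples

def test_p95_latency_within_budget():
    samples = sorted(measure_latencies())
    p95 = samples[int(0.95 * (len(samples) - 1))]
    assert p95 <= QAR_P95_BUDGET_MS, (
        f"platform misses latency QAR: p95={p95:.1f}ms > {QAR_P95_BUDGET_MS}ms"
    )

if __name__ == "__main__":
    test_p95_latency_within_budget()
    print("latency QAR met")
```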


Supply chains, AI, and the cloud: The biggest failures (and one success) of 2025

By compromising a single target with a large number of downstream users—say a cloud service or maintainers or developers of widely used open source or proprietary software—attackers can infect potentially millions of the target’s downstream users. ... Another significant security story cast both Meta and Yandex as the villains. Both companies were caught exploiting an Android weakness that allowed them to de-anonymize visitors so years of their browsing histories could be tracked. The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allowed Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. ... The outage with the biggest impact came in October, when a single point of failure inside Amazon’s sprawling network took out vital services worldwide. It lasted 15 hours and 32 minutes. The root cause that kicked off a chain of events was a bug in the software that monitors the stability of load balancers by, among other things, periodically creating new DNS configurations for endpoints within the Amazon Web Services network. A race condition—a type of bug that makes a process dependent on the timing or sequence of events that are variable and outside the developers’ control—caused a key component inside the network to experience “unusually high delays needing to retry its update on several of the DNS endpoint,” Amazon said in a post-mortem.
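
For readers unfamiliar with the term, the toy sketch below shows the general bug class: two threads apply updates to shared state without coordination, so which write survives depends on timing outside the developer's control. It illustrates the concept only and is not a model of Amazon's actual DNS management code.

```python
"""Toy illustration of a race condition: two threads apply updates to a
shared record without coordination, so the surviving value depends on
interleaving. This models the bug class, not Amazon's actual system.
"""
import threading
import time
import random

dns_record = {"endpoint": "initial", "generation": 0}

def publish(name: str, generation: int):
    # read-modify-write with a pause in the middle widens the race window
    snapshot = dict(dns_record)
    time.sleep(random.uniform(0, 0.01))
    if generation >= snapshot["generation"]:      # check against a stale snapshot
        dns_record.update(endpoint=name, generation=generation)

threads = [
    threading.Thread(target=publish, args=("plan-A", 1)),
    threading.Thread(target=publish, args=("plan-B", 2)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Sometimes generation 2 wins, sometimes the stale generation-1 write lands last.
print(dns_record)
```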


The Evolving Cybersecurity Challenge for Critical Infrastructure

Convergence between OT, IT and the cloud is providing cybercriminal groups with the opportunity to target critical infrastructure. Operators, and regulators, are wrestling with new technology and new manufacturers, outside the traditional OT/ICS supply chain. “With the geopolitical tensions and the way that the world will look in maybe a few years, they're starting to scratch their heads and think, ‘okay, is it secure? Is it safe? How was it developed? Is there any remote access? How is it being configured?’ There are things that are being done now, that will have an effect in a few years’ time,” cautioned Daniel dos Santos, head of security research at Forescout's Vedere Labs. Given the lifespans of operational technology, installing insecure equipment now can have long-term consequences. Meanwhile, CISOs face dealing with older hardware that was not designed for modern threats. Even where vendors release patches, CNI operators do not always apply them, either because of concerns about business interruption, or a lack of visibility. ... Threats to CNI are not likely to abate in 2026. Legislators are putting more emphasis on cyber resilience and directives, such as the EU’s Cyber Resilience Act, will improve the security of connected devices. But these upgrades take time. “Threats from criminal groups continue to grow exponentially,” said Phil Tonkin, CTO at OT security specialists Dragos.


The changing role of the MSP: What does this mean for security?

MSPs hold a unique position within the IT ecosystem, as they are often responsible for managing and supporting the IT infrastructures, cloud services, and cybersecurity of many different organizations. These trusted partners often have privileged access to the inner workings of the organizations they support, including access to the critical systems, sensitive information, and intellectual property of their clients. ... Research shows that over half of MSP leaders globally believe that their customers are at more risk today than this time last year when it comes to cyber threats, with AI-based attack vectors, ransomware/malware, and insider threats the most commonly faced threats. As a result of this uptick in threats, more organizations than ever are leaning on MSPs for cyber support. In fact, in 2025, 84% of MSPs managed either their clients’ cyber infrastructure or their cyber and IT estates combined. This increased significantly, from 64% the previous year. What this shows is that SMEs are realising that they cannot handle cybersecurity alone, turning to MSPs for additional help. Cybersecurity is no longer an optional extra or add-on; it’s becoming a core, expected service for MSPs. MSP leaders are transitioning from general IT support to becoming essential cybersecurity guardians. ... MSPs that adapt by investing in specialized cybersecurity expertise, advanced technologies, and a proactive security posture will thrive, becoming indispensable partners to businesses navigating the complex world of cyber risk. 


What’s next for Azure containers?

Until now, even though Azure has had deep eBPF support, you’ve had to bring your own eBPF tools and manage them yourself, which does require expertise to run at scale. Not everyone is a Kubernetes platform engineer, and with tools like AKS providing a managed environment for cloud-native applications, having a managed eBPF environment is an important upgrade. The new Azure Managed Cilium tool provides a quick way of getting that benefit in your applications, using it for host routing and significantly reducing the overhead that comes with iptables-based networking. ... Declarative policies let Azure lock down container features to reduce the risk of compromised container images affecting other users. At the same time, it’s working to secure the underlying host OS, which for ACI is Linux. SELinux allows Microsoft to lock that image down, providing an immutable host OS. However, those SELinux policies don’t cross the boundary into containers, leaving their userspace vulnerable. ... Having a policy-driven approach to security helps quickly remediate issues. If, say, a common container layer has a vulnerability, you can build and verify a patch layer and deploy it quickly. There’s no need to patch everything in the container, only the relevant components. Microsoft has been doing this for OS features for some time now as part of its internal Project Copacetic, and it’s extending the process to common runtimes and libraries, building patches with updated packages for tools like Python.

Daily Tech Digest - June 29, 2025


Quote for the day:

“Great minds discuss ideas; average minds discuss events; small minds discuss people.” -- Eleanor Roosevelt


Who Owns End-of-Life Data?

Enterprises have never been more focused on data. What happens at the end of that data's life? Who is responsible when it's no longer needed? Environmental concerns are mounting as well. A Nature study warns that AI alone could generate up to 5 million metric tons of e-waste by 2030. A study from researchers at Cambridge University and the Chinese Academy of Sciences said the top reason enterprises dispose of e-waste rather than recycling computers is cost. E-waste can contain metals, including copper, gold, silver, aluminum and rare earth elements, but proper handling is expensive. Data security is a concern as well, since breach proofing doesn't get better than destroying equipment. ... End-of-life data management may sit squarely in the realm of IT, but it increasingly pulls in compliance, risk and ESG teams, the report said. Driven by rising global regulations and escalating concerns over data leaks and breaches, C-level involvement at every stage signals that end-of-life data decisions are being treated as strategically vital - not simply handed off. Consistent IT participation also suggests organizations are well-positioned to select and deploy solutions that work with their existing tech stack. That said, shared responsibility doesn't guarantee seamless execution. Multiple stakeholders can lead to gaps unless underpinned by strong, well-communicated policies, the report said.


How AI is Disrupting the Data Center Software Stack

Over the years, there have been many major shifts in IT infrastructure – from the mainframe to the minicomputer to distributed Windows boxes to virtualization, the cloud, containers, and now AI and GenAI workloads. Each time, the software stack seems to get torn apart. What can we expect with GenAI? ... Galabov expects severe disruption in the years ahead on a couple of fronts. Take coding, for example. In the past, anyone wanting a new industry-specific application for their business might pay five figures for development, even if they went to a low-cost region like Turkey. For homegrown software development, the price tag would be much higher. Now, an LLM can be used to develop such an application for you. GenAI tools have been designed explicitly to enhance and automate several elements of the software development process. ... Many enterprises will be forced to face the reality that their systems are fundamentally legacy platforms that are unable to keep pace with modern AI demands. Their only course is to commit to modernization efforts. Their speed and degree of investment are likely to determine their relevance and competitive positioning in a rapidly evolving market. Kleyman believes that the most immediate pressure will fall on data-intensive, analytics-driven platforms such as CRM and business intelligence (BI). 


AI Improves at Improving Itself Using an Evolutionary Trick

The best SWE-bench agent was not as good as the best agent designed by expert humans, which currently scores about 70 percent, but it was generated automatically, and maybe with enough time and computation an agent could evolve beyond human expertise. The study is a “big step forward” as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement. Jiang, who was not involved in the study, said the approach could make further progress if it modified the underlying LLM, or even the chip architecture. DGMs can theoretically score agents simultaneously on coding benchmarks and also specific applications, such as drug design, so they’d get better at getting better at designing drugs. Zhang said she’d like to combine a DGM with AlphaEvolve. ... One concern with both evolutionary search and self-improving systems—and especially their combination, as in DGM—is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned.


Data center costs surge up to 18% as enterprises face two-year capacity drought

Smart enterprises are adapting with creative strategies. CBRE’s Magazine emphasizes “aggressive and long-term planning,” suggesting enterprises extend capacity forecasts to five or 10 years, and initiate discussions with providers much earlier than before. Geographic diversification has become essential. While major hubs price out enterprises, smaller markets such as São Paulo saw pricing drops of as much as 20.8%, while prices in Santiago fell 13.7% due to shifting supply dynamics. Magazine recommended “flexibility in location as key, exploring less-constrained Tier 2 or Tier 3 markets or diversifying workloads across multiple regions.” For Gogia, “Tier-2 markets like Des Moines, Columbus, and Richmond are now more than overflow zones, they’re strategic growth anchors.” Three shifts have elevated these markets: maturing fiber grids, direct renewable power access, and hyperscaler-led cluster formation. “AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.


Fintech’s AI Obsession Is Useless Without Culture, Clarity and Control

What does responsible AI actually mean in a fintech context? According to PwC’s 2024 Responsible AI Survey, it encompasses practices that ensure fairness, transparency, accountability and governance throughout the AI lifecycle. It’s not just about reducing model bias — it’s about embedding human oversight, securing data, ensuring explainability and aligning outputs with brand and compliance standards. In financial services, these aren’t "nice-to-haves" — they’re essential for scaling AI safely and effectively. Financial marketing is governed by strict regulations and AI-generated content can create brand and legal risks. ... To move AI adoption forward responsibly, start small. Low-risk, high-reward use cases let teams build confidence and earn trust from compliance and legal stakeholders. Deloitte’s 2024 AI outlook recommends beginning with internal applications that use non-critical data — avoiding sensitive inputs like PII — and maintaining human oversight throughout. ... As BCG highlights, AI leaders devote 70% of their effort to people and process — not just technology. Create a cross-functional AI working group with stakeholders from compliance, legal, IT and data science. This group should define what data AI tools can access, how outputs are reviewed and how risks are assessed.


Is Microsoft’s new Mu for you?

Mu uses a transformer encoder-decoder design, which means it splits the work into two parts. The encoder takes your words and turns them into a compressed form. The decoder takes that form and produces the correct command or answer. This design is more efficient than older models, especially for tasks such as changing settings. Mu has 32 encoder layers and 12 decoder layers, a setup chosen to fit the NPU’s memory and speed limits. The model utilizes rotary positional embeddings to maintain word order, dual-layer normalization to maintain stability, and grouped-query attention to use memory more efficiently. ... Mu is truly groundbreaking because it is the first SLM built to let users control system settings using natural language, running entirely on a mainstream shipping device. Apple’s iPhones, iPads, and Macs all have a Neural Engine NPU and run on-device AI for features like Siri and Apple Intelligence. But Apple does not have a small language model as deeply integrated with system settings as Mu. Siri and Apple Intelligence can change some settings, but not with the same range or flexibility. ... By processing data directly on the device, Mu keeps personal information private and responds instantly. This shift also makes it easier to comply with privacy laws in places like Europe and the US since no data leaves your computer.
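
Going only by the figures reported here (32 encoder layers, 12 decoder layers, rotary positional embeddings, grouped-query attention), the shape of such a model can be sketched as a configuration object. Hidden size, head counts, and vocabulary below are assumptions for illustration, since those details are not given in the article.

```python
"""Sketch of an encoder-decoder configuration shaped like the article
describes Mu: 32 encoder layers, 12 decoder layers, rotary positions, and
grouped-query attention. Hidden size, head counts and vocab are assumptions.
"""
from dataclasses import dataclass

@dataclass
class EncoderDecoderConfig:
    vocab_size: int = 32_000          # assumed
    d_model: int = 1_024              # assumed; sized for an NPU memory budget
    encoder_layers: int = 32          # from the article
    decoder_layers: int = 12          # from the article
    n_query_heads: int = 16           # assumed
    n_kv_heads: int = 4               # grouped-query attention: fewer KV heads shared by queries
    positional_encoding: str = "rotary"
    norm: str = "dual_layernorm"      # dual-layer normalization for stability

    def kv_cache_entries_per_token(self) -> int:
        """GQA shrinks the decode-time KV cache versus full multi-head attention."""
        return self.decoder_layers * self.n_kv_heads * 2   # keys + values

if __name__ == "__main__":
    cfg = EncoderDecoderConfig()
    print(cfg)
    print("KV cache entries per generated token:", cfg.kv_cache_entries_per_token())
```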


Is It a Good Time to Be a Software Engineer?

AI may be rewriting the rules of software development, but it hasn’t erased the thrill of being a programmer. If anything, the machines have revitalised the joy of coding. New tools make it possible to code in natural language, ship prototypes in hours, and bypass tedious setup work. From solo developers to students, the process may feel more immediate or rewarding. Yet, this sense of optimism exists alongside an undercurrent of anxiety. As large language models (LLMs) begin to automate vast swathes of development, some have begun to wonder if software engineering is still a career worth betting on. ... Meanwhile, Logan Thorneloe, a software engineer at Google, sees this as a golden era for developers. “Right now is the absolute best time to be a software engineer,” he wrote on LinkedIn. He points out “development velocity” as the reason. Thorneloe believes AI is accelerating workflows, shrinking prototype cycles from months to days, and giving developers unprecedented speed. Companies that adapt to this shift will win, not by eliminating engineers, but by empowering them. More than speed, there’s also a rediscovered sense of fun. Programmers who once wrestled with broken documentation and endless boilerplate are rediscovering the creative satisfaction that first drew them to the field.


Dumping mainframes for cloud can be a costly mistake

Despite industry hype, mainframes are not going anywhere. They quietly support the backbone of our largest banks, governments, and insurance companies. Their reliability, security, and capacity for massive transactions give mainframes an advantage that most public cloud platforms simply can’t match for certain workloads. ... At the core of this conversation is culture. An innovative IT organization doesn’t pursue technology for its own sake. Instead, it encourages teams to be open-minded, pragmatic, and collaborative. Mainframe engineers have a seat at the architecture table alongside cloud architects, data scientists, and developers. When there’s mutual respect, great ideas flourish. When legacy teams are sidelined, valuable institutional knowledge and operational stability are jeopardized. A cloud-first mantra must be replaced by a philosophy of “we choose the right tool for the job.” The financial institution in our opening story learned this the hard way. They had to overcome their bias and reconnect with their mainframe experts to avoid further costly missteps. It’s time to retire the “legacy versus modern” conflict and recognize that any technology’s true value lies in how effectively it serves business goals. Mainframes are part of a hybrid future, evolving alongside the cloud rather than being replaced by it. 


Why Modern Data Archiving Is Key to a Scalable Data Strategy

Organizations are quickly learning they can’t simply throw all data, new and old, at an AI strategy; instead, it needs to be accurate, accessible, and, of course, cost-effective. Without these requirements in place, it’s far from certain AI-powered tools can deliver the kind of insight and reliability businesses need. As part of the various data management processes involved, archiving has taken on a new level of importance. ... For organizations that need to migrate data, for example, archiving is used to identify which datasets are essential, while enabling users to offload inactive data in the most cost-effective way. This kind of win-win can also be applied to cloud resources, where moving data to the most appropriate service can potentially deliver significant savings. Again, this contrasts with tiering systems and NAS gateways, which rely on global file systems to provide cloud-based access to local files. The challenge here is that access is dependent on the gateway remaining available throughout the data lifecycle because, without it, data recall can be interrupted or cease entirely. ... It then becomes practical to strike a much better balance across the typical enterprise storage technology stack, including long-term data preservation and compliance, where data doesn’t need to be accessed so often, but where reliability and security are crucial.
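
As a minimal sketch of the "identify essential versus inactive data" step, the script below walks a directory tree and flags files not accessed within a cutoff as candidates for a cheaper tier. The cutoff and the reliance on access time are simplifying assumptions a real archiving policy would refine with compliance and ownership rules.

```python
"""Sketch: flag inactive files as archive candidates by last-access time.

Assumption: files untouched for `cutoff_days` can move to a cheaper tier.
Real policies would also weigh compliance holds, ownership and data class.
"""
import time
from pathlib import Path

def archive_candidates(root: Path, cutoff_days: int = 365):
    cutoff = time.time() - cutoff_days * 86_400
    for path in root.rglob("*"):
        if path.is_file():
            stat = path.stat()
            if stat.st_atime < cutoff:      # not accessed within the cutoff window
                yield path, stat.st_size

if __name__ == "__main__":
    total = 0
    for path, size in archive_candidates(Path("."), cutoff_days=365):
        total += size
        print(f"candidate: {path} ({size} bytes)")
    print(f"reclaimable from active tier: {total / 1e9:.2f} GB")
```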


The Impact of Regular Training and Timely Security Policy Changes on Dev Teams

Constructive refresher training drives continuous improvement by reinforcing existing knowledge while introducing new concepts like AI-powered code generation, automated debugging and cross-browser testing in manageable increments. Teams that implement consistent training programs see significant productivity benefits as developers spend less time struggling with unfamiliar tools and more time automating tasks to focus on delivering higher value. ... Security policies that remain static as teams grow create dangerous blind spots, compromising both the team’s performance and the organization’s security posture. Outdated policies fail to address emerging threats like malware infections and often become irrelevant to the team’s current workflow, leading to workarounds and system vulnerabilities. ... Proactive security integration into development workflows represents a fundamental shift from reactive security measures to preventative strategies. This approach enables growing teams to identify and address security concerns early in the development process, reducing the cost and complexity of remediation. Cultivating a security-first culture becomes increasingly important as teams grow. This involves embedding security considerations into various stages of the development life cycle. Early risk identification in cloud infrastructure reduces costly breaches and improves overall team productivity.

Daily Tech Digest - April 30, 2025


Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown


Common Pitfalls and New Challenges in IT Automation

“You don’t know what you don’t know and can’t improve what you can’t see. Without process visibility, automation efforts may lead to automating flawed processes. In effect, accelerating problems while wasting both time and resources and leading to diminished goodwill by skeptics,” says Kerry Brown, transformation evangelist at Celonis, a process mining and process intelligence provider. The aim of automating processes is to improve how the business performs. That means drawing a direct line from the automation effort to a well-defined ROI. ... Data is arguably the most boring issue on IT’s plate. That’s because it requires a ton of effort to update, label, manage and store massive amounts of data and the job is never quite done. It may be boring work, but it is essential and can be fatal if left for later. “One of the most significant mistakes CIOs make when approaching automation is underestimating the importance of data quality. Automation tools are designed to process and analyze data at scale, but they rely entirely on the quality of the input data,” says Shuai Guan, co-founder and CEO at Thunderbit, an AI web scraper tool. ... “CIOs often fall into the trap of thinking automation is just about suppressing noise and reducing ticket volumes. While that’s one fairly common use case, automation can offer much more value when done strategically,” says Erik Gaston.


Outmaneuvering Tariffs: Navigating Disruption with Data-Driven Resilience

The fact that tariffs are coming was expected – President Donald Trump campaigned promising tariffs – but few could have expected their severity (145% on Chinese imports, as of this writing) and their pace of change (prohibitively high “reciprocal” tariffs on 100+ countries, only to be temporarily rescinded days later). Also unpredictable were second-order effects such as stock and bond market reactions, affecting the cost of capital, and the impact on consumer demand, due to the changing expectations of inflation or concerns of job loss. ... Most organizations will have fragmented views of data, including views of all of the components that come from a given supplier or are delivered through a specific transportation provider. They may have a product-centric view that includes all suppliers that contribute all of the components of a given product. But this data often resides in a variety of supplier-management apps, procurement apps, demand forecasting apps, and other types of apps. Some may be consolidated into a data lakehouse or a cloud data warehouse to enable advanced analytics, but the time required by a data engineering team to build the necessary data pipelines from these systems is often multiple days or weeks, and such pipelines will usually only be implemented for scenarios that the business expects will be stable over time.


The state of intrusions: Stolen credentials and perimeter exploits on the rise, as phishing wanes

What’s worrying is that in over half of intrusions (57%) the victim organizations learned about the compromise of their networks and systems from a third-party rather than discovering them through internal means. In 14% of cases, organizations were notified directly by attackers, usually in the form of ransom notes, but 43% of cases involved external entities such as a cybersecurity company or law enforcement agencies. The average time attackers spent inside a network until being discovered last year was 11 days, a one-day increase over 2023, though still a major improvement versus a decade ago when the average discovery time was 205 days. Attacker dwell time, as Mandiant calls it, has steadily decreased over the years, which is a good sign ... In terms of ransomware, the most common infection vector observed by Mandiant last year were brute-force attacks (26%), such as password spraying and use of common default credentials, followed by stolen credentials and exploits (21% each), prior compromises resulting in sold access (15%), and third-party compromises (10%). Cloud accounts and assets were compromised through phishing (39%), stolen credentials (35%), SIM swapping (6%), and voice phishing (6%). Over two-thirds of cloud compromises resulted in data theft and 38% were financially motivated with data extortion, business email compromise, ransomware, and cryptocurrency fraud being leading goals.


Three Ways AI Can Weaken Your Cybersecurity

“Slopsquatting” is a fresh AI take on “typosquatting,” where ne’er-do-wells spread malware to unsuspecting Web travelers who happen to mistype a URL. With slopsquatting, the bad guys are spreading malware through software development libraries that have been hallucinated by GenAI. ... While it is still unclear whether the bad guys have weaponized slopsquatting yet, GenAI’s tendency to hallucinate software libraries is perfectly clear. Last month, researchers published a paper that concluded that GenAI recommends Python and JavaScript libraries that don’t exist about one-fifth of the time. ... Like the SQL injection attacks that plagued early Web 2.0 warriors who didn’t adequately validate database input fields, prompt injections involve the surreptitious injection of a malicious prompt into a GenAI-enabled application to achieve some goal, ranging from information disclosure to code execution. Mitigating these sorts of attacks is difficult because of the nature of GenAI applications. Instead of inspecting code for malicious entities, organizations must investigate the entirety of a model, including all of its weights. ... A form of adversarial AI attacks, data poisoning or data manipulation poses a serious risk to organizations that rely on AI. According to the security firm CrowdStrike, data poisoning is a risk to healthcare, finance, automotive, and HR use cases, and can even potentially be used to create backdoors.
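
A simple guardrail against slopsquatting is to vet AI-suggested dependencies against the registry before installing them. The sketch below queries PyPI's public JSON endpoint for each suggested name and treats a 404 as a likely hallucination; that default is an assumption on the author's part and not a complete defense, since a squatted name would exist and still be malicious.

```python
"""Sketch: vet AI-suggested Python dependencies against PyPI before install.

A 404 from the registry means the name doesn't exist and may be a
hallucination an attacker could squat; existence alone is not proof of safety.
"""
import json
import urllib.error
import urllib.request

def check_package(name: str) -> str:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            info = json.load(resp)["info"]
            return f"exists: {name} (latest {info['version']})"
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return f"NOT FOUND: {name} -- possible hallucinated/slopsquattable name"
        raise

if __name__ == "__main__":
    # Hypothetical list of names emitted by a coding assistant.
    for suggested in ["requests", "definitely-not-a-real-lib-12345"]:
        print(check_package(suggested))
```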


AI Has Moved From Experimentation to Execution in Enterprise IT

According to the SOAS report, 94% of organisations are deploying applications across multiple environments—including public clouds, private clouds, on-premises data centers, edge computing, and colocation facilities—to meet varied scalability, cost, and compliance requirements. Consequently, most decision-makers see hybrid environments as critical to their operational flexibility. 91% cited adaptability to fluctuating business needs as the top benefit of adopting multiple clouds, followed by improved app resiliency (68%) and cost efficiencies (59%). A hybrid approach is also reflected in deployment strategies for AI workloads, with 51% planning to use models across both cloud and on-premises environments for the foreseeable future. Significantly, 79% of organisations recently repatriated at least one application from the public cloud back to an on-premises or co-location environment, citing cost control, security concerns, and predictability. ... “While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay,” said Cindy Borovick, Director of Market and Competitive Intelligence.


Trying to Scale With a Small Team? Here's How to Drive Growth Without Draining Your Resources

To be an effective entrepreneur or leader, communication is key, and being able to prioritize initiatives that directly align with the overall strategic vision ensures that your lean team is working on projects that have the greatest impact. Integrate key frameworks such as Responsible, Accountable, Consulted, and Informed (RACI) and Objectives and Key Results (OKRs) to maintain transparency, focus and measure progress. By focusing efforts on high-impact activities, your lean team can achieve high success and significant results without the unnecessary strain usually attributable to early-stage organizations. ... Many think that agile methodologies are only for the fast-moving software development industry — but in reality, the frameworks are powerful tools for lean teams in any industry. Encouraging the right culture is key where quick pivots, regular genuine feedback loops and leadership that promotes continuous improvement are part of the everyday workflows. This agile mindset, when adopted early, helps teams rapidly respond to market changes and client issues. ... Trusting others builds rapport. Assigning clear ownership of tasks while allowing those team members the autonomy to execute the strategies creatively and efficiently, while also allowing them to fail, is how trust is created.


Effecting Culture Changes in Product Teams

Depending on the organization, the responsibility of successfully leading a culture shift among the product team could fall to various individuals – the CPO, VP of product development, product manager, etc. But regardless of the specific title, to be an effective leader, you can’t assume you know all the answers. Start by having one-to-one conversations with numerous members on the product/engineering team. Ask for their input and understand, from their perspective, what is working, what’s not working, and what ideas they have for how to accelerate product release timelines. After conducting one-to-one discussions, sit down and correlate the information. Where are the common denominators? Did multiple team members make the same suggestions? Identify the roadblocks that are slowing down the product team or standing in the way of delivering incremental value on a more regular basis. In many cases, tech leaders will find that their team already knows how to fix the issue – they just need permission to do things a bit differently and adjust company policies/procedures to better support a more accelerated timeline. Talking one-on-one with team members also helps resolve any misunderstandings around why the pace of work must change as the company scales and accumulates more customers. Product engineers often have a clear vision of what the end product should entail, and they want to be able to deliver on that vision.


Microsoft Confirms Password Spraying Attack — What You Need To Know

The password spraying attack exploited a command line interface tool called AzureChecker to “download AES-encrypted data that when decrypted reveals the list of password spray targets,” the report said. It then, to add salt to the now open wound, accepted an accounts.txt file containing username and password combinations used for the attack, as input. “The threat actor then used the information from both files and posted the credentials to the target tenants for validation,” Microsoft explained. The successful attack enabled the Storm-1977 hackers to then leverage a guest account in order to create a compromised subscription resource group and, ultimately, more than 200 containers that were used for cryptomining. ... Passwords are no longer enough to keep us safe online. That’s the view of Chris Burton, head of professional services at Pentest People, who told me that “where possible, we should be using passkeys, they’re far more secure, even if adoption is still patchy.” Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant is no less adamant when it comes to going passwordless. ... And Brian Pontarelli, CEO of FusionAuth, said that the teams who are building the future of passwords are the same ones that are building and managing the login pages of their apps. “Some of them are getting rid of passwords entirely,” Pontarelli said.
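
Password spraying has a recognizable shape in authentication logs: one source failing logins across many distinct accounts in a short window. The sketch below flags such sources from a generic event list; the field names, window, and threshold are assumptions, not details of the Storm-1977 activity or AzureChecker.

```python
"""Sketch: flag likely password-spraying sources in authentication logs.

Heuristic: within a time window, a single source IP fails logins against
many *distinct* usernames. Field names and thresholds are assumptions.
"""
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)
DISTINCT_USER_THRESHOLD = 20

def spraying_sources(events: list[dict]) -> dict[str, set[str]]:
    """events: [{'ts': datetime, 'ip': str, 'user': str, 'success': bool}, ...]"""
    flagged = {}
    failures = sorted((e for e in events if not e["success"]), key=lambda e: e["ts"])
    by_ip = defaultdict(list)
    for e in failures:
        by_ip[e["ip"]].append(e)
    for ip, evs in by_ip.items():
        start = 0
        for end, e in enumerate(evs):
            while e["ts"] - evs[start]["ts"] > WINDOW:   # slide the window forward
                start += 1
            users = {x["user"] for x in evs[start:end + 1]}
            if len(users) >= DISTINCT_USER_THRESHOLD:
                flagged[ip] = users
    return flagged

if __name__ == "__main__":
    now = datetime.utcnow()
    sample = [{"ts": now + timedelta(seconds=i), "ip": "203.0.113.7",
               "user": f"user{i}", "success": False} for i in range(25)]
    for ip, users in spraying_sources(sample).items():
        print(f"{ip}: {len(users)} distinct accounts targeted within {WINDOW}")
```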


The secret weapon for transformation? Treating it like a merger

Like an IMO, a transformation office serves as the conductor — setting the tempo, aligning initiatives and resolving portfolio-level tensions before they turn into performance issues. It defines the “music” everyone should be playing: a unified vision for experience, business architecture, technology design and most importantly, change management. It also builds connective tissue. It doesn’t just write the blueprint — it stays close to initiative or project leads to ensure adherence, adapts when necessary and surfaces interdependencies that might otherwise go unnoticed. ... What makes the transformation office truly effective isn’t just the caliber of its domain leaders — it’s the steering committee of cross-functional VPs from core business units and corporate functions that provides strategic direction and enterprise-wide accountability. This group sets the course, breaks ties and ensures that transformation efforts reflect shared priorities rather than siloed agendas. Together, they co-develop and maintain a multi-year roadmap that articulates what capabilities the enterprise needs, when and in what sequence. Crucially, they’re empowered to make decisions that span the legacy seams of the organization — the gray areas where most transformations falter. In this way, the transformation office becomes more than connective tissue; it becomes an engine for enterprise decision-making.


Legacy Modernization: Architecting Real-Time Systems Around a Mainframe

When traffic spikes hit our web portal, those requests would flow through to the mainframe. Unlike cloud systems, mainframes can't elastically scale to handle sudden load increases. This created a bottleneck that could overload the mainframe, causing connection timeouts. As timeouts increased, the mainframe would crash, leading to complete service outages with a large blast radius: hundreds of other applications that depend on the mainframe would also be impacted. This is a perfect example of the problem with synchronous connections to the mainframe. When the mainframe can be overwhelmed by a highly elastic source of load like the web, the result can be failures in its datastores, and sometimes every consuming application fails along with it. ... Change Data Capture (CDC) became the foundation of our new architecture. Instead of batch ETLs running a few times daily, CDC streamed data changes from the mainframes in near real time. This created what we called a "system-of-reference" - not the authoritative source of truth (the mainframe remains the "system-of-record"), but a continuously updated reflection of it. The system of reference is not a proxy for the system of record, which is why our website was still live when the mainframe went down.
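
The CDC flow described here can be illustrated with a small consumer sketch. This is a hypothetical illustration, not the team's actual implementation: it assumes change events arrive on a Kafka topic as JSON and uses a local SQLite table as the continuously updated system-of-reference; the topic name, event shape, and schema are invented for the example.

```python
# Hypothetical sketch: a CDC consumer maintaining a read-optimized
# "system-of-reference" from mainframe change events.
# Assumptions (not from the article): events arrive on a Kafka topic as JSON,
# and a local SQLite table acts as the reflection of the system-of-record.
import json
import sqlite3

from kafka import KafkaConsumer  # pip install kafka-python

# The system-of-reference: stays queryable even if the mainframe is down.
ref = sqlite3.connect("system_of_reference.db")
ref.execute(
    "CREATE TABLE IF NOT EXISTS accounts (account_id TEXT PRIMARY KEY, payload TEXT)"
)

consumer = KafkaConsumer(
    "mainframe.account.changes",              # illustrative topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value  # e.g. {"op": "UPSERT", "account_id": "42", "data": {...}}
    if event["op"] == "DELETE":
        ref.execute("DELETE FROM accounts WHERE account_id = ?", (event["account_id"],))
    else:
        # Upsert keeps the reference copy in near real time, replacing batch ETL.
        ref.execute(
            "INSERT INTO accounts (account_id, payload) VALUES (?, ?) "
            "ON CONFLICT(account_id) DO UPDATE SET payload = excluded.payload",
            (event["account_id"], json.dumps(event["data"])),
        )
    ref.commit()
```

The point of the sketch is the decoupling: the web tier reads from the reference store instead of calling the mainframe synchronously, so elastic web traffic no longer translates directly into mainframe load.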

Daily Tech Digest - April 03, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Veterans are an obvious fit for cybersecurity, but tailored support ensures they succeed

Both civilian and military leaders have long seen veterans as strong candidates for cybersecurity roles. The National Initiative for Cybersecurity Careers and Studies, part of the US Cybersecurity and Infrastructure Security Agency (CISA), speaks directly to veterans, saying “Your skills and training from the military translate well to a cyber career.” NICCS continues, “Veterans’ backgrounds in managing high-pressure situations, attention to detail, and understanding of secure communications make them particularly well-suited for this career path.” Gretchen Bliss, director of cybersecurity programs at the University of Colorado at Colorado Springs (UCCS), speaks specifically to security execs on the matter: “If I were talking to a CISO, I’d say get your hands on a veteran. They understand the practical application piece, the operational piece, they have hands-on experience. They think things through, they know how to do diagnostics. They already know how to tackle problems.” ... And for veterans who haven’t yet mastered all that, Andrus advises “networking with people who actually do the job you want.” He also advises veterans to learn about the environment at the organization they seek to join, asking themselves whether they’d fit in. And he recommends connecting with others to ease the transition.


The 6 disciplines of strategic thinking

A strategic thinker is not just a good worker who approaches a challenge with the singular aim of resolving the problem in front of them. Rather, a strategic thinker looks at and elevates their entire ecosystem to achieve a robust solution. ... The first discipline is pattern recognition. A foundation of strategic thinking is the ability to evaluate a system, understand how all its pieces move, and derive the patterns they typically form. ... Watkins’s next discipline, and an extension of pattern recognition, is systems analysis. It is easy to get overwhelmed when breaking down the functional elements of a system. A strategic thinker avoids this by creating simplified models of complex patterns and realities. ... Mental agility is Watkins’s third discipline. Because the systems and patterns of any work environment are so dynamic, leaders must be able to change their perspective quickly to match the role they are examining. Systems evolve, people grow, and the larger picture can change suddenly. ... Structured problem-solving is a discipline you and your team can use to address any issue or challenge. The idea of problem-solving is self-explanatory; the essential element is the structure. Developing and defining a structure will ensure that the correct problem is addressed in the most robust way possible.


Why Vendor Relationships Are More Important Than Ever for CIOs

Trust is the necessary foundation, which is built through open communication, solid performance, relevant experience, and proper security credentials and practices. “People buy from people they trust, no matter how digital everything becomes,” says Thompson. “That human connection remains crucial, especially in tech where you're often making huge investments in mission-critical systems.” ... An executive-level technology governance framework helps ensure effective vendor oversight. According to Malhotra, it should consist of five key components, including business relationship management, enterprise technology investment, transformation governance, value capture and having the right culture and change management in place. Beneath the technology governance framework is active vendor governance, which institutionalizes oversight across ten critical areas including performance management, financial management, relationship management, risk management, and issues and escalations. Other considerations include work order management, resource management, contract and compliance, having a balanced scorecard across vendors and principled spend and innovation.


Shadow Testing Superpowers: Four Ways To Bulletproof APIs

API contract testing is perhaps the most immediately valuable application of shadow testing. Traditional contract testing relies on mock services and schema validation, which can miss subtle compatibility issues. Shadow testing takes contract validation to the next level by comparing actual API responses between versions. ... Performance testing is another area where shadow testing shines. Traditional performance testing usually happens late in the development cycle in dedicated environments with synthetic loads that often don’t reflect real-world usage patterns. ... Log analysis is often overlooked in traditional testing approaches, yet logs contain rich information about application behavior. Shadow testing enables sophisticated log comparisons that can surface subtle issues before they manifest as user-facing problems. ... Perhaps the most innovative application of shadow testing is in the security domain. Traditional security testing often happens too late in the development process, after code has already been deployed. Shadow testing enables a true shift left for security by enabling dynamic analysis against real traffic patterns. ... What makes these shadow testing approaches particularly valuable is their inherently low-maintenance nature. 
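
As an illustration of the contract-comparison idea, the sketch below mirrors the same request to a live endpoint and a shadow (candidate) endpoint and diffs the actual responses. The host names, path, and field-level comparison are assumptions made for the example, not part of any specific shadow-testing product.

```python
# Hypothetical sketch of shadow-testing an API contract: send the same request
# to the live version and a shadow (candidate) version, then diff the real
# responses. Hosts, endpoint, and parameters below are illustrative only.
import requests

LIVE = "https://api.example.com"
SHADOW = "https://api-shadow.example.com"


def shadow_compare(path: str, params: dict) -> list[str]:
    """Return human-readable differences between live and shadow responses."""
    live_resp = requests.get(f"{LIVE}{path}", params=params, timeout=5)
    shadow_resp = requests.get(f"{SHADOW}{path}", params=params, timeout=5)

    diffs = []
    if live_resp.status_code != shadow_resp.status_code:
        diffs.append(f"status: {live_resp.status_code} != {shadow_resp.status_code}")

    # Assumes JSON object bodies; field-level comparison catches contract drift
    # that schema validation alone can miss.
    live_body, shadow_body = live_resp.json(), shadow_resp.json()
    for key in set(live_body) | set(shadow_body):
        if live_body.get(key) != shadow_body.get(key):
            diffs.append(
                f"field '{key}': {live_body.get(key)!r} != {shadow_body.get(key)!r}"
            )
    return diffs


if __name__ == "__main__":
    for diff in shadow_compare("/v1/orders/123", {"expand": "items"}):
        print("DIFF:", diff)
```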


Rethinking technology and IT's role in the era of agentic AI and digital labor

Rethinking technology and the role of IT will drive a shift from the traditional model to a business technology-focused model. One example will be the shift from one large, dedicated IT team that traditionally handles an organization's technology needs, overseen and directed by the CIO, to more focused IT teams that will perform strategic, high-value activities and help drive technology innovation strategy as Gen AI handles many routine IT tasks. Another shift will be spending and budget allocations. Traditionally, CIOs manage the enterprise IT budget and allocation. In the new model, spending on enterprise-wide IT investments continues to be assessed and guided by the CIO, and some enterprise technology investments are now governed and funded by the business units. ... Today, agentic AI is not just answering questions -- it's creating. Agents take action autonomously. And it's changing everything about how technology-led enterprises must design, deploy, and manage new technologies moving forward. We are building self-driving autonomous businesses using agentic AI where humans and machines work together to deliver customer success. However, giving agency to software or machines to act will require a new currency. Trust is the new currency of AI.


From Chaos to Control: Reducing Disruption Time During Cyber Incidents and Breaches

Cyber disruptions are no longer isolated incidents; they have ripple effects that extend across industries and geographic regions. In 2024, two high-profile events underscored the vulnerabilities in interconnected systems. The CrowdStrike IT outage resulted in widespread airline cancellations, impacting financial markets and customer trust, while the Change Healthcare ransomware attack disrupted claims processing nationwide, costing billions in financial damages. These cases emphasize why resilience professionals must proactively integrate automation and intelligence into their incident response strategies. ... Organizations need structured governance models that define clear responsibilities before, during, and after an incident. AI-driven automation enables proactive incident detection and streamlined responses. Automated alerts, digital action boards, and predefined workflows allow teams to act swiftly and decisively, reducing downtime and minimizing operational losses. Data is the foundation of effective risk and resilience management. When organizations ensure their data is reliable and comprehensive, they gain an integrated view that enhances visibility across business continuity, IT, and security teams. 


What does an AI consultant actually do?

AI consulting involves advising on, designing and implementing artificial intelligence solutions. The spectrum is broad, ranging from process automation using machine learning models to setting up chatbots and performing complex analyses using deep learning methods. However, the definition of AI consulting goes beyond the purely technical perspective. It is an interdisciplinary approach that aligns technological innovation with business requirements. AI consultants are able to design technological solutions that are not only efficient but also make strategic sense. ... All in all, both technical and strategic thinking is required: Unlike some other technology professions, AI consulting not only requires in-depth knowledge of algorithms and data processing, but also strategic and communication skills. AI consultants talk to software development and IT departments as well as to management, product management or employees from the relevant field. They have to explain technical interrelations clearly and comprehensibly so that the company can make decisions based on this knowledge. Since AI technologies are developing rapidly, continuous training is important, whether through online courses, boot camps and certificates, or workshops and conferences.


Building a cybersecurity strategy that survives disruption

The best strategies treat resilience as a core part of business operations, not just a security add-on. “The key to managing resilience is to approach it like an onion,” says James Morris, Chief Executive of The CSBR. “The best strategy is to be effective at managing the perimeter. This approach will allow you to get a level of control on internal and external forces which are key to long-term resilience.” That layered thinking should be matched by clearly defined policies and procedures. “Ensure that your ‘resilience’ strategy and policies are documented in detail,” Morris advises. “This is critical for response planning, but also for any legal issues that may arise. If it’s not documented, it doesn’t happen.” ... Move beyond traditional monitoring by implementing advanced, behaviour-based anomaly detection and AI-driven solutions to identify novel threats. Invest in automation to enhance the efficiency of detection, triage, and initial response tasks, while orchestration platforms enable coordinated workflows across security and IT tools, significantly boosting response agility. ... A good strategy starts from the assumption that things will break, so you need segmentation, backups, backup plans for your backup plans, and alternate ways to get back up and running. Fast, reliable recovery is key; just having backups isn’t enough anymore.


3 key features in Kong AI Gateway 3.10

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional; it is essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases. Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade way to scrub sensitive information from prompts before they reach the model and, if needed, to reinsert the sanitized data in the response before it returns to the end user. ... As organizations adopt multiple LLM providers and model types, complexity can grow quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral. Each comes with its own SDKs, APIs, and limitations. Kong AI Gateway 3.10 solves this with universal API support and native SDK integration. Developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need to rewrite app logic when switching models and simplifies centralized governance. This latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing.
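
The general pattern behind prompt-level PII sanitization can be sketched in a few lines. The example below is a simplified, generic illustration of redacting values before a prompt reaches a model and reinserting them afterwards; it is not Kong's plugin or its configuration, and the regex patterns and placeholder scheme are assumptions made for the sketch.

```python
# Simplified illustration of the sanitize-then-restore pattern described above:
# redact sensitive values before the prompt reaches the model, then reinsert
# them in the response. NOT Kong's actual plugin; patterns and the placeholder
# scheme are illustrative only.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def sanitize(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace PII with placeholders; return the scrubbed prompt and a mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        # dict.fromkeys de-duplicates matches while preserving order.
        for i, match in enumerate(dict.fromkeys(pattern.findall(prompt))):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping


def restore(response: str, mapping: dict[str, str]) -> str:
    """Reinsert the original values into the model's response, if present."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response


scrubbed, mapping = sanitize("Email jane.doe@example.com about SSN 123-45-6789")
print(scrubbed)                               # Email <EMAIL_0> about SSN <SSN_0>
print(restore("Sent to <EMAIL_0>", mapping))  # Sent to jane.doe@example.com
```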


The future of IT operations with Dark NOC

From a Managed Service Provider (MSP) perspective, Dark NOC will shift the way IT operates today by making it more efficient, scalable, and cost-effective. It will replace the traditional NOC’s manual-intensive work of continuously monitoring, diagnosing, and resolving issues across multiple customer environments. ... Another key capability Dark NOC gives MSPs is scalability. Its analytics and automation allow it to manage thousands of endpoints effortlessly without a proportional increase in engineering headcount. This enables MSPs to extend their service portfolios, onboard new customers, and increase profit margins while retaining a lean operational model. From a competitive point of view, adopting Dark NOC enables MSPs to differentiate themselves by offering proactive, AI-driven IT services that minimise downtime, enhance security and maximise performance. Dark NOC helps MSPs provide premium service at affordable price points while still making a healthy margin internally. ... Cloud infrastructure monitoring and management provides real-time cloud resource monitoring and predictive insights; examples include AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite.