
Daily Tech Digest - February 23, 2026


Quote for the day:

"Prepare, work smarter, Learn from your Mistakes. These are the secret to success!" -- Elizabeth McCormick



What’s wrong (and right) with AI coding agents

“At the scale AI is generating pull requests today, humans simply can’t keep up. You don’t check the accuracy of Excel with an abacus… and in 2026 we shouldn’t expect maintainers to manually inspect machine-speed code without machine-speed assistance,” said Fox. “AI reviews can go deeper than humans in many cases. They don’t get tired, they can reason across large codebases… and they can spot patterns at a scale no individual reviewer can hold in their head. If AI is generating more code, the only viable answer is to use AI to help review and validate it. You have to fight fire with fire.” ... He reminds us that quantity does not always equal quality – especially in the AI-driven world we now live in. He notes that, at least for now, the reality is that AI development tools and ‘vibe coding’ can generate a lot of code very quickly, but code that’s often slower and more memory‑hungry than what a skilled developer would write. ... Although this discussion centres on the increasingly automated command line, the real concern arguably sits higher up the stack, at the architecture level already mentioned. “We’re entering a world where, with AI, software changes are propagating faster than governance models can track them. That means AI tools are, plain and simple, accelerating systemic complexity. When an AI agent can generate and deploy changes across interconnected enterprise systems, there’s real danger in the invisible dependencies and downstream effects most orgs can’t fully see,” said Ido Gaver.


Identity verification systems are struggling with synthetic fraud

The researchers tied the growth of synthetic identity fraud to the increasing use of AI tools, which can generate convincing fake documents that pass casual inspection. “The biggest risk I see in the next 12 to 18 months is the growing and advancing use of AI. AI is creating fake people, fake voices, and fake documents. Bad actors are using these capabilities to open accounts, take over existing accounts, and impersonate real people in places like bank branches,” Lewis said. ... Financial institutions remain a major target for identity fraud due to access to credit, account funding, and cash movement. A successful fraudster can monetize a single fake or synthetic identity for tens of thousands of dollars before detection, making the sector a frequent target. Online-only retail banks recorded the highest rate of failed identity verification among the financial institution categories in Intellicheck’s dataset. The report also found elevated failure rates across businesses serving underbanked consumers, including check cashing, payday lending, subprime lending, and lease-to-own services. ... AI tools are being used to produce synthetic IDs that are difficult for humans to spot. Lewis said attackers are already using AI and large language models to generate documents that can bypass basic checks. “AI and LLM can create fake ID’s that can easily pass the templating test, old methods don’t work and ID verification service providers can’t rest on their laurels,” Lewis said. 


Neoclouds: Meeting demand for AI acceleration

This surge in demand for AI acceleration has seen a surprising beneficiary. According to Tiger Research, cryptocurrency mining firms, seeking to reduce their exposure to bitcoin’s volatile pricing, are redirecting their graphics processing unit (GPU) farms toward AI acceleration applications. ... Before the emergence of neoclouds a few years ago, if an organisation wanted to work with AI, it had no choice but to go to a hyperscaler like Amazon Web Services (AWS) or Google. While the hyperscalers offer AI infrastructure as part of their vast public cloud services portfolio, Roy Illsley, chief analyst at Omdia, says the hyperscalers tend to be expensive and, as he recalls, a few years ago, there was very little choice other than Google’s AI offerings. ... AI infrastructure strategies are becoming inherently hybrid and multicloud by design – not as a by-product of supplier sprawl, but as a deliberate response to workload reality. The cloud market is fragmenting along functional lines, and neoclouds occupy a clear and growing role within that landscape. “Neoclouds started as GPU as a service. If you needed GPUs, these companies bought or leased GPUs from Nvidia, and then they would slice them and sell them off to people in smaller groups and bundles,” says Omdia’s Illsley. However, over time, neocloud providers have added software stacks and developed other services to meet the demands of IT buyers who need GPU power and the software stack required for AI training or AI inferencing.


Sam Altman just said what everyone is thinking about AI layoffs

This isn’t the first time industry stakeholders have questioned the veracity of AI-related layoffs. A study by Oxford Economics in January this year claimed most layoffs are due to “more traditional drivers” such as overhiring or poor financial performance. ... "While a rising number of firms are pinning job losses on AI, other more traditional drivers of job layoffs are far more commonly cited,” the report said. “What's more, we suspect some firms are trying to dress up layoffs as a good news story rather than bad news, such as past over-hiring." ... “There’s some real displacement by AI of different kinds of jobs,” he said. “We’ll find new kinds of jobs as we do with every tech revolution. I would expect that the real impact of AI doing jobs in the next few years will begin to be palpable.” Altman’s prediction here aligns with research from Gartner and Forrester on the potential impact of AI on the global jobs market. In January, Forrester predicted 10 million jobs could be lost worldwide as enterprise adoption ramps up. ... Despite a string of studies pointing to the contrary, some tech industry figures still believe that AI will eventually render some workers obsolete. In a recent interview with the Financial Times, for example, Microsoft AI CEO Mustafa Suleyman insisted AI will begin replacing “white collar” workers within 18 months. “I think we’re going to have a human-level performance on most if not all professional tasks,” Suleyman told the newspaper.


Jailbreaking the matrix: How researchers are bypassing AI guardrails to make them safer

As AI assistants move from novelty to infrastructure, helping write code, summarizing medical notes and answering customer questions, the biggest question isn't just what these systems can do, but what happens when they are pushed to do what they shouldn't. "By showing exactly how these defenses break, we give AI developers the information they need to build defenses that actually hold up," Jha said. "The public release of powerful AI is only sustainable if the safety measures can withstand real scrutiny, and right now, our work shows that there's still a gap. We want to help close it." ... Focusing on the internal workings of the LLM allows more accurate measurement of failures and encourages the development of more robust defenses for when safety measures break down. According to the researchers, HMNS can help reveal whether specific internal pathways, if exploited, could cause a breakdown. That information can guide stronger training, monitoring and defense strategies. ... Understanding the security shortcomings of LLMs is critical as they become more widespread. Companies like Meta, Alibaba and others have released powerful AI models that are available to anyone. While each platform incorporates safety layers meant to keep it from being misused, the UF team has found that those safety layers can be systematically bypassed.


Plan vs. planning: Why continuous planning must traverse time

The problem is not the plan’s quality. The problem is that a plan freezes a moment in time while the organization continues to move through time. Planning, by contrast, must be a continuous discipline, remaining active as assumptions decay, signals emerge and constraints shift. ... Planning exists to test those assumptions continuously, a distinction long recognized in leadership and management literature that separates planning as an ongoing discipline from the plan as a static artifact. Plans are optimized for agreement and commitment. Planning is optimized for learning, decision-making and managing consequences in the face of uncertainty. In practice, this means consequences must be visible at the moment of decision, not discovered months later through execution. ... Many enterprises optimize for compliance, predictability and approval at the expense of feedback and adaptation. Learning is pushed downstream, arriving only after outcomes are locked in and costs incurred. Systems theorist Russell Ackoff described this dynamic clearly: “Most organizations are not short of information. They are short of the ability to learn from it.” Continuous planning restores learning by design, not as postmortem analysis, but as pre-decision feedback. Feedback that arrives before commitment changes behavior. Feedback that arrives after execution becomes an explanation. In volatile environments, that timing difference is decisive, which is why scenario planning and structured foresight have re-emerged as critical executive tools.


The rise of AI factories: Powering an era of pervasive intelligence

In India alone, Google is building a gigawatt-scale AI hub in Visakhapatnam. Microsoft is expanding its cloud and AI footprint in Pune and Chennai and creating a new “India South Central” region in Hyderabad. In partnership with NVIDIA, Reliance Jio is developing a major AI data center in Jamnagar for nationwide GPU-as-a-service offerings. TCS is planning a 1-gigawatt AI data center, likely in Gujarat or Maharashtra, to support startups, hyperscalers, and government institutions. And as part of its Stargate project, OpenAI is actively scouting locations in India for what could become one of the largest AI data centers in all of Asia. ... The growth of AI represents a fundamental transformation in how the world builds and operates computing infrastructure. While traditional data centers are designed for general-purpose workloads, AI superclusters are purpose-built facilities that function as industrial-scale intelligence production systems. And their output is defined by new metrics — most notably tokens per watt and tokens per dollar — that quantify the efficiency and productivity of intelligence at scale. ... To deliver the performance at scale that AI requires, silicon designers are increasingly turning to multi-die designs, including 3D integrated circuits (3DIC) and chiplet-based architectures. While these chip designs offer gains that traditional monolithic SoCs cannot achieve cost-effectively, they also introduce significant complexity to the design process.
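
To make the new metrics concrete, here is a minimal sketch of how tokens per watt and tokens per dollar might be computed for an inference cluster. All figures are invented for illustration; none are drawn from the article.

```python
# Illustrative sketch: "tokens per watt" and "tokens per dollar" for an
# inference cluster. Every number below is a made-up assumption.

def tokens_per_watt(tokens_per_second: float, power_draw_watts: float) -> float:
    """Sustained token throughput divided by average power draw."""
    return tokens_per_second / power_draw_watts

def tokens_per_dollar(tokens_per_second: float, cost_per_hour_usd: float) -> float:
    """Tokens produced per dollar of operating cost."""
    return tokens_per_second * 3600 / cost_per_hour_usd

# Hypothetical cluster: 50,000 tokens/s at 120 kW, costing $400/hour to run.
print(tokens_per_watt(50_000, 120_000))    # ~0.42 tokens/s per watt
print(tokens_per_dollar(50_000, 400.0))    # 450,000 tokens per dollar
```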


Cognizant CAIO Babak Hodjat explains how Agentic AI will transform enterprises

One of the things that agentic systems do is they allow for a diversity of data sources because you can actually have an agent responsible for a data source talking to other agents responsible for other data sources. Your interface into this system could be a consolidation of information and decisions that come from these disparate sources. It is the first time that we can actually have a mapping between intent and disparate sources of data and applications. I think that will work well. That kind of design can work well in a country like India with such diversity of data. ... Population-based approaches like genetic algorithms are very good at non-linear optimisation, especially if you are looking at multiple outcomes at the same time. Pretty much every problem that we look at is multi-objective. Every problem that we look at involves improving revenue while reducing costs. You look at curing disease while reducing the impact on the economy. It is always more than one outcome that we are looking at. For problems like optimising power grids or managing urban traffic systems, these algorithms are very well suited. ... There are two opposing forces when it comes to AI. Scaling laws mean that building bigger is more powerful, and building bigger typically means using more energy. Many companies are looking at green sources for that additional consumption. On the other hand, companies are optimising models to be smaller and less energy-hungry. For multi-agent systems, smaller models can be more cost-effective and greener.
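
As a rough illustration of the population-based, multi-objective approach Hodjat describes, the toy sketch below (my own, not Cognizant's system; the revenue and cost functions stand in for a real simulator) selects by Pareto dominance so that candidates trading revenue against cost survive side by side:

```python
# Toy multi-objective genetic algorithm: maximize revenue, minimize cost.
# Objective functions are illustrative assumptions, not a real model.
import random

def objectives(x):
    revenue = sum(x)                  # toy: more investment, more revenue
    cost = sum(xi ** 2 for xi in x)   # toy: costs grow quadratically
    return revenue, cost

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and better on one."""
    (ra, ca), (rb, cb) = objectives(a), objectives(b)
    return ra >= rb and ca <= cb and (ra > rb or ca < cb)

def evolve(pop_size=40, genes=5, generations=100):
    pop = [[random.uniform(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # Binary tournament selection by Pareto dominance.
        parents = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            parents.append(a if dominates(a, b) else b)
        # One-point crossover plus occasional Gaussian mutation.
        children = []
        for i in range(0, pop_size, 2):
            cut = random.randrange(1, genes)
            child = parents[i][:cut] + parents[i + 1][cut:]
            if random.random() < 0.2:
                child[random.randrange(genes)] += random.gauss(0, 0.1)
            children.append(child)
        pop = parents[: pop_size - len(children)] + children
    # Return the non-dominated front: the revenue/cost trade-off curve.
    return [p for p in pop if not any(dominates(q, p) for q in pop if q != p)]

for solution in evolve()[:5]:
    print(objectives(solution))
```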


Inference Becomes the Next AI Chip Battleground

Inference has fundamentally different economics and performance requirements than training, said Karl Freund, founder and principal analyst at Cambrian AI Research. Training AI models is a cost center, while inference is a “profit center” that directly generates revenue. Freund and Kimball noted that while GPUs deliver excellent performance, they often carry architectural features optimized for training that don’t always translate to lower latency or higher efficiency in pure inference use cases. Purpose-built inference chips – ASICs and other accelerators – can deliver faster responses, improved energy efficiency, and lower total cost of ownership. ... "As inference workloads exceed the total amount of training workloads in terms of token output, there will be a greater need for diversity because alternative XPU architectures can achieve better efficiency on some specific inferencing tasks,” said Brendan Burke, research director of semiconductors, supply chain, and emerging tech at Futurum Group. ... Inference opportunities span data centers and the edge, and requirements vary widely by workload and deployment. “The inference you do in your autonomous vehicle is far different than the inferencing you do when you’re an online customer service bot,” Kimball said. ... Analysts expect Nvidia to maintain dominance in both training and inference, but diverse requirements create space for specialized solutions to capture share. 


Why the CFO's Playbook Belongs on Every CIO's Desk

Recent research from Gartner on how CFOs are allocating budgets gives CIOs insight into what priorities look like across departments, and where technology and AI can help move the needle. The research firm's CFO Report: Q1 2026 finds that while budgets are shifting and AI ambitions are high, enterprise-wide AI success remains an aspiration rather than a reality. ... AI is also changing the conversation on ROI for both finance and technology leaders. "There's a lot more to evaluating the success of some of this investment in technology than simply just ROI, and AI is definitely helping change that," Abbasi said. "AI isn't your traditional asset." Unlike standard hardware expenditures, AI investments don't have predictable depreciation curves, and the ways in which returns on AI investment may show up across the business can vary. They may manifest in time to market, customer satisfaction or competitive positioning, not just in cost savings, Abbasi said. CIOs should be sure to articulate how AI will generate strategic returns rather than focus on pitching it as a capital project. "It changes the way you measure the effectiveness of AI, as well as how you measure your business more holistically," he said. "It's not like a traditional asset because you don't necessarily know what the outcomes are going to be for some of these AI projects."

Daily Tech Digest - October 22, 2025


Quote for the day:

"Good content isn't about good storytelling. It's about telling a true story well." -- Ann Handley



When yesterday’s code becomes today’s threat

A striking new supply chain attack is sending shockwaves through the developer community: a worm-style campaign dubbed “Shai-Hulud” has compromised at least 187 npm packages, including the tinycolor package, which sees 2 million downloads weekly, and is spreading to other maintainers' packages. The malicious payload modifies package manifests, injects malicious files, repackages, and republishes — thereby infecting downstream projects. This incident underscores a harsh reality: even code released weeks, months, or even years ago can become dangerous once a dependency in its chain has been compromised. ... Sign your code: All packages/releases should use cryptographic signing. This allows users to verify the origin and integrity of what they are installing. Verify signatures before use: When pulling in dependencies, CI/CD pipelines and even local dev setups should include a step to check that the signature matches a trusted publisher and that the code wasn’t tampered with. SBOMs are your map of exposure: If you have a Software Bill of Materials for your project(s), you can query it for compromised packages. Find which versions/packages have been modified — even retroactively — so you can patch, remove, or isolate them. Continuous monitoring of risk posture: It's not enough to secure when you ship. You need alerts when any dependency or component’s risk changes: new vulnerabilities, suspicious behavior, misuse of credentials, or signs that a trusted package may have been modified after release.
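
As an illustration of the SBOM recommendation, a CycloneDX SBOM can be queried for known-compromised package versions with a few lines of Python. The sketch below makes assumptions: the advisory list and file name are hypothetical placeholders, not real advisory data. For the signature-verification step, recent npm versions also offer the built-in `npm audit signatures` command.

```python
# Minimal sketch: scan a CycloneDX SBOM for known-compromised npm releases.
# The COMPROMISED table and file name are hypothetical placeholders.
import json

# Hypothetical advisory data: package name -> set of compromised versions.
COMPROMISED = {
    "tinycolor": {"4.1.1", "4.1.2"},
    "some-other-package": {"2.0.3"},
}

def find_compromised(sbom_path: str) -> list[tuple[str, str]]:
    """Return SBOM components whose name/version match a known-bad release."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    for component in sbom.get("components", []):
        name = component.get("name", "")
        version = component.get("version", "")
        if version in COMPROMISED.get(name, set()):
            hits.append((name, version))
    return hits

for name, version in find_compromised("sbom.cyclonedx.json"):
    print(f"compromised dependency found: {name}@{version}")
```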


Cloud Sovereignty: Feature. Bug. Feature. Repeat!

Cloud sovereignty isn’t just a buzzword anymore, argues Kushwaha. “It’s a real concern for businesses across the world. The pattern is clear. The cloud isn’t a one-size-fits-all solution anymore. Companies are starting to realise that sometimes control, cost, and compliance matter more than convenience.” ... Cloud sovereignty is increasingly critical given the evolving geopolitical scenario, government and industry-specific regulations, and vendor lock-in from heavy reliance on hyperscalers. As Bavishi sees it, the concept has gained momentum and will continue to do so because technology has become pervasive and critical to running a country, and any misuse by foreign actors can cause major repercussions. Prof. Bhatt observes that true digital sovereignty remains a distant dream; achieving it will require building a robust ecosystem over decades. This isn’t counterintuitive; it’s evolution, as Kushwaha epitomises. “The cloud’s original promise was one of freedom. Today, when it comes to the cloud, freedom means more control. Businesses investing heavily in digital futures can’t afford to ignore the fine print in hyperscaler contracts or the reach of foreign laws. Sovereignty is the foundation for building safely in a fragmented world.” ... Organisations have recognised the risks of digital dependencies and are looking for better options. There is no turning back, Karlitschek underlines.


Securing AI to Benefit from AI

As organizations begin to integrate AI into defensive workflows, identity security becomes the foundation for trust. Every model, script, or autonomous agent operating in a production environment now represents a new identity — one capable of accessing data, issuing commands, and influencing defensive outcomes. If those identities aren't properly governed, the tools meant to strengthen security can quietly become sources of risk. The emergence of Agentic AI systems makes this especially important. These systems don't just analyze; they may act without human intervention. They triage alerts, enrich context, or trigger response playbooks under delegated authority from human operators. ... AI systems are capable of assisting human practitioners like an intern that never sleeps. However, it is critical for security teams to differentiate what to automate from what to augment. Some tasks benefit from full automation, especially those that are repeatable, measurable, and low-risk if an error occurs. ... Threat enrichment, log parsing, and alert deduplication are prime candidates for automation. These are data-heavy, pattern-driven processes where consistency outperforms creativity. By contrast, incident scoping, attribution, and response decisions rely on context that AI cannot fully grasp. Here, AI should assist by surfacing indicators, suggesting next steps, or summarizing findings while practitioners retain decision authority. Finding that balance requires maturity in process design.
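
To make the automate-versus-augment split concrete, here is a minimal sketch of alert deduplication, one of the low-risk, pattern-driven tasks the article puts on the automation side. The alert field names and the 10-minute window are my assumptions, not any specific product's schema.

```python
# Minimal sketch of alert deduplication: keep the first alert per
# (rule, source, destination) key within a rolling time window.
from datetime import datetime, timedelta

def dedup_key(alert: dict) -> tuple:
    """Alerts sharing rule, source host, and destination count as duplicates."""
    return (alert["rule_id"], alert["src_host"], alert["dest_ip"])

def deduplicate(alerts: list[dict],
                window: timedelta = timedelta(minutes=10)) -> list[dict]:
    last_seen: dict[tuple, datetime] = {}
    unique = []
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        key = dedup_key(alert)
        ts = alert["timestamp"]
        if key not in last_seen or ts - last_seen[key] > window:
            unique.append(alert)   # first sighting, or the window expired
        last_seen[key] = ts        # every duplicate refreshes the window
    return unique
```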


The Unkillable Threat: How Attackers Turned Blockchain Into Bulletproof Malware Infrastructure

When EtherHiding emerged in September 2023 as part of the CLEARFAKE campaign, it introduced a chilling reality: attackers no longer need vulnerable servers or hackable domains. They’ve found something far better—a global, decentralized infrastructure that literally cannot be shut down. ... When victims visit the infected page, the loader queries a smart contract on Ethereum or BNB Smart Chain using a read-only function call. ... Forget everything you know about disrupting cybercrime infrastructure. There is no command-and-control server to raid. No hosting provider to subpoena. No DNS to poison. The malicious code exists simultaneously everywhere and nowhere, distributed across thousands of blockchain nodes worldwide. As long as Ethereum or BNB Smart Chain operates—and they’re not going anywhere—the malware persists. Traditional law enforcement tactics, honed over decades of fighting cybercrime, suddenly encounter an immovable object. You cannot arrest a blockchain. You cannot seize a smart contract. You cannot compel a decentralized network to comply. ... The read-only nature of payload retrieval is perhaps the most insidious feature. When the loader queries the smart contract, it uses functions that don’t create transactions or blockchain records. 
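
For defenders unfamiliar with the mechanic, the sketch below shows roughly what such a read-only retrieval looks like using web3.py; the RPC endpoint, contract address, and function name are placeholders, not the actual CLEARFAKE infrastructure. Because `.call()` issues an eth_call rather than a transaction, nothing is written to the chain, which is why network-layer monitoring of RPC traffic to known-bad contract addresses matters.

```python
# Sketch of a read-only smart contract query with web3.py.
# Address, ABI, and endpoint are hypothetical placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://bsc-dataseed.binance.org/"))

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
ABI = [{
    "name": "get", "type": "function", "stateMutability": "view",
    "inputs": [], "outputs": [{"name": "", "type": "string"}],
}]

contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)
# .call() performs an eth_call: read-only, no gas spent, and no transaction
# recorded on-chain, so the retrieval leaves nothing to trace there.
payload = contract.functions.get().call()
```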


New 'Markovian Thinking' technique unlocks a path to million-token AI reasoning

Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called Markovian Thinking, the approach allows LLMs to engage in lengthy reasoning without incurring the prohibitive computational costs that currently limit such tasks. The team’s implementation, an environment named Delethink, structures the reasoning chain into fixed-size chunks, breaking the scaling problem that plagues very long LLM responses. Initial estimates show that for a 1.5B parameter model, this method can cut the costs of training by more than two-thirds compared to standard approaches. ... The researchers compared this to models trained with the standard LongCoT-RL method. Their findings indicate that the model trained with Delethink could reason up to 24,000 tokens, and matched or surpassed a LongCoT model trained with the same 24,000-token budget on math benchmarks. On other tasks like coding and PhD-level questions, Delethink also matched or slightly beat its LongCoT counterpart. “Overall, these results indicate that Delethink uses its thinking tokens as effectively as LongCoT-RL with reduced compute,” the researchers write. The benefits become even more pronounced when scaling beyond the training budget. 
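
Conceptually, the chunked loop might look like the following sketch. This is my reading of the idea, not Mila's code: `model.generate`, the chunk and carryover sizes, and the character-based tail are simplifying assumptions.

```python
# Conceptual sketch of Markovian Thinking: reason in fixed-size chunks and
# pass only a short carryover across each boundary, so per-step compute
# stays constant regardless of total reasoning length.
CHUNK_SIZE = 8000   # max tokens generated per chunk (assumed)
CARRYOVER = 512     # tail carried into the next chunk's prompt (assumed)

def markovian_reason(model, question: str, max_chunks: int = 12) -> str:
    state = question
    for _ in range(max_chunks):
        # The model only ever sees the bounded state, never the full trace.
        chunk = model.generate(prompt=state, max_new_tokens=CHUNK_SIZE)
        if "FINAL ANSWER:" in chunk:
            return chunk.split("FINAL ANSWER:", 1)[1].strip()
        # Markovian step: next state = question + tail of the latest chunk
        # (a real implementation would slice tokens, not characters).
        state = question + "\n...\n" + chunk[-CARRYOVER:]
    return "no answer within the chunk budget"
```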


The dazzling appeal of the neoclouds

While their purpose-built design gives them an advantage for AI workloads, neoclouds also bring complexities and trade-offs. Enterprises need to understand where these platforms excel and plan how to integrate them most effectively into broader cloud strategies. Let’s explore why this buzzword demands your attention and how to stay ahead in this new era of cloud computing. ... Neoclouds, unburdened by the need to support everything, are outpacing hyperscalers in areas like agility, pricing, and speed of deployment for AI workloads. A shortage of GPUs and data center capacity also benefits neocloud providers, which are smaller and nimbler, allowing them to scale quickly and meet growing demand more effectively. This agility has made them increasingly attractive to AI researchers, startups, and enterprises transitioning to AI-powered technologies. ... Neoclouds are transforming cloud computing by offering purpose-built, cost-effective infrastructure for AI workloads. Their price advantages will challenge traditional cloud providers’ market share, reshape the industry, and change enterprise perceptions, fueled by their expected rapid growth. As enterprises find themselves at the crossroads of innovation and infrastructure, they must carefully assess how neoclouds can fit into their broader architectural strategies. 


Wi-Fi 8 is coming — and it’s going to make AI a lot faster

Unlike previous generations of Wi-Fi that competed on peak throughput numbers, Wi-Fi 8 prioritizes consistent performance under challenging conditions. The specification introduces coordinated multi-access point features, dynamic spectrum management, and hardware-accelerated telemetry designed for AI workloads at the network edge. ... A core part of the Wi-Fi 8 architecture is an approach known as Ultra High Reliability (UHR). This architectural philosophy targets the 99th percentile user experience rather than best-case scenarios. The innovation addresses AI application requirements that demand symmetric bandwidth, consistent sub-5-millisecond latency and reliable uplink performance. ... Wi-Fi 8 introduces Extended Long Range (ELR) mode specifically for IoT devices. This feature uses lower data rates with more robust coding to extend coverage. The tradeoff accepts reduced throughput for dramatically improved range. ELR operates by increasing symbol duration and using lower-order modulation. This improves the link budget for battery-powered sensors, smart home devices and outdoor IoT deployments. ... Wi-Fi 8 enhances roaming to maintain sub-millisecond handoff latency. The specification includes improved Fast Initial Link Setup (FILS) and introduces coordinated roaming decisions across the infrastructure. Access points share client context information before handoff. 
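
A back-of-the-envelope sketch shows why longer symbols and lower-order modulation improve the link budget: the required SNR drops with modulation order, and a narrower effective bandwidth lowers the noise floor. The bandwidths and SNR figures below are illustrative assumptions, not numbers from the Wi-Fi 8 specification.

```python
# Illustrative link-budget arithmetic; all figures are assumptions.
import math

def noise_floor_dbm(bandwidth_hz: float, noise_figure_db: float = 7.0) -> float:
    """Thermal noise floor: -174 dBm/Hz + 10*log10(BW) + receiver NF."""
    return -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db

# Assumed required SNRs: a 64-QAM-class link needs roughly 20 dB,
# a BPSK-class ELR-style link roughly 4 dB.
normal_sensitivity = noise_floor_dbm(20e6) + 20   # 20 MHz, higher-order link
elr_sensitivity = noise_floor_dbm(2e6) + 4        # narrower, robust coding

print(f"normal mode sensitivity: {normal_sensitivity:.1f} dBm")
print(f"ELR-style sensitivity:   {elr_sensitivity:.1f} dBm")
print(f"link budget gain:        {normal_sensitivity - elr_sensitivity:.1f} dB")
```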


Life, death, and online identity: What happens to your online accounts after death?

Today, we lack the tools (protocols) and the regulations to enable digital estate management at scale. Law and regulation can force a change in behavior by large providers. However, without effective protocols for identifying the individuals a decedent has chosen to manage their digital estate, every service will have to design its own path. This creates an exceptional burden on individuals planning their digital estate, and on individuals who manage the digital estates of the deceased. ... When we set out to write this paper, we wanted to influence the large technology and social media platforms, politicians, regulators, estate planners, and others who can help change the status quo. Further, we hoped to influence standards development organizations, such as the OpenID Foundation and the Internet Engineering Task Force (IETF), and their members. As standards developers in the realm of identity, we have an obligation to the people we serve to consider identity from birth to death and beyond, to ensure every human receives the respect they deserve in life and in death. Additionally, we wrote the planning guide to help individuals plan for their own digital estate. By giving people the tools to help describe, document, and manage their digital estates proactively, we can raise more awareness and provide tools to help protect individuals at one of the most vulnerable moments of their lives.


5 steps to help CIOs land a board seat

Serving on a board isn’t an extension of an operational role. One issue CIOs face is not understanding the difference between executive management and governance, Stadolnik says. “They’re there to advise, not audit or lead the current company’s CIO,” he adds. In the boardroom, the mandate is to provide strategy, governance, and oversight, not execution. That shift, Stadolnik says, can be jarring for tech leaders who’ve spent their careers driving operational results. ... “There were some broad risk areas where having strong technical leadership was valuable, but it was hard for boards to carve out a full seat just for that, which is why having CIO-plus roles was very beneficial,” says Cullivan. The issue of access is another uphill battle for CIOs. As Payne found, the network effect can play a huge role in seeking a board role. But not every IT leader has the right kind of network that can open the door to these opportunities. ... Boards expect directors to bring scope across business disciplines and issues, not just depth in one functional area. Stadolnik encourages CIOs to utilize their strategic orientation, results focus, and collaborative and influence skills to set themselves up for additional responsibilities like procurement, supply chain, shared services, and others. “It’s those executive leadership capabilities that will unlock broader roles,” he says. Experience in those broader roles bolsters a CIO’s board résumé and credibility.


Microservices Without Meltdown: 7 Pragmatic Patterns That Stick

A good sniff test: can we describe the service’s job in one short sentence, and does a single team wake up if it misbehaves? If not, we’ve drawn mural art, not an interface. Start with a small handful of services you can name plainly—orders, payments, catalog—then pressure-test them with real flows. When a request spans three services just to answer a simple question, that’s a hint we’ve sliced too thin or coupled too often. ... Microservices live and die by their contracts. We like contracts that are explicit, versioned, and backwards-friendly. “Backwards-friendly” means old clients keep working for a while when we add fields or new behaviors. For HTTP APIs, OpenAPI plus consistent error formats makes a huge difference. ... We need timeouts and retries that fit our service behavior, or we’ll turn small hiccups into big outages. For east-west traffic, a service mesh or smart gateway helps us nudge traffic safely and set per-route policies. We’re fans of explicit settings instead of magical defaults. ... Each service owns its tables; cross-service read needs go through APIs or asynchronous replication. When a write spans multiple services, aim for a sequence of local commits with compensating actions instead of distributed locks. Yes, we’re describing sagas without the capes: do the smallest thing, record it durably, then trigger the next hop.
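
The closing point about sagas is easy to miss in prose, so here is a minimal sketch of the pattern (my illustration, not the authors' code; the order-flow steps are hypothetical): each step is a local commit paired with a compensating action, executed in sequence with rollback in reverse order on failure.

```python
# Minimal saga sketch: local commits with compensating actions instead of
# distributed locks. Step names and actions are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SagaStep:
    name: str
    action: Callable[[], None]       # local commit in one service
    compensate: Callable[[], None]   # undo if a later step fails

def run_saga(steps: list[SagaStep]) -> bool:
    completed: list[SagaStep] = []
    for step in steps:
        try:
            step.action()            # do the smallest thing, record it durably
            completed.append(step)
        except Exception:
            # Roll back already-committed steps in reverse order.
            for done in reversed(completed):
                done.compensate()
            return False
    return True

# Hypothetical order flow: reserve stock, charge payment, create shipment.
saga = [
    SagaStep("reserve_stock", lambda: print("stock reserved"),
             lambda: print("stock released")),
    SagaStep("charge_payment", lambda: print("payment charged"),
             lambda: print("payment refunded")),
    SagaStep("create_shipment", lambda: print("shipment created"),
             lambda: print("shipment cancelled")),
]
run_saga(saga)
```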