Daily Tech Digest - January 03, 2026


Quote for the day:

“Some people dream of great accomplishments, while others stay awake and do them.” -- Anonymous


Cloud costs now No. 2 expense at midsize IT companies behind labor

The Cloud Capital survey shows midsize IT vendor CFOs and their CIO partners struggling to contain cloud spending, with significant cost volatility from month to month. Three-quarters of IT organization CFOs report cloud spending forecasts varying by between 5% and 10% of company revenues each month, Pingry notes. The costs of AI workloads are harder to predict than traditional SaaS infrastructure, Pingry adds, and organizations running major AI workloads are more likely to report margin declines tied to cloud spending than those with moderate AI exposure. “Training spikes, usage-driven inference, and experimentation noise introduce non-linear patterns that break the forecasting assumptions finance relies on,” says a report from Cloud Capital. “The challenge will intensify as AI’s share of cloud spend continues scaling.” ... Cloud services themselves aren’t inherently too expensive, but many organizations shoot themselves in the foot through unintentional consumption, Clark adds. “Costs rise when the system is built without a clear understanding of the value it is meant to deliver,” he says. ... “No CxO wants to explain to the board why another company used AI to leap ahead,” Clark adds. “This has created a no-holds-barred spending spree on training, inference, and data movement, often layered on top of architectures that were already economically incoherent.”
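To make the volatility concrete, here is a minimal sketch (all figures hypothetical) of the kind of monthly variance check a finance team might run, flagging months where the gap between forecast and actual cloud spend crosses the 5%-of-revenue band the survey describes:

```python
# Illustrative sketch (hypothetical figures): flag months where the gap
# between forecast and actual cloud spend exceeds 5% of monthly revenue.

monthly_revenue = 2_000_000  # hypothetical midsize-vendor revenue, USD

# (forecast, actual) cloud spend per month, USD -- invented for illustration
cloud_spend = {
    "Jan": (180_000, 195_000),
    "Feb": (185_000, 310_000),  # AI training spike breaks the forecast
    "Mar": (190_000, 205_000),
}

for month, (forecast, actual) in cloud_spend.items():
    variance_pct = abs(actual - forecast) / monthly_revenue * 100
    flag = "REVIEW" if variance_pct >= 5 else "ok"
    print(f"{month}: forecast ${forecast:,}, actual ${actual:,}, "
          f"variance {variance_pct:.1f}% of revenue [{flag}]")
```

A single AI training spike is enough to trip the flag, which is exactly the non-linearity the Cloud Capital report points to.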


Securing Integration of AI into OT Technology

For critical infrastructure owners and operators, the goal is to use AI to increase efficiency and productivity, enhance decision-making, save costs, and improve customer experience – much like digitalization. However, despite the many benefits, integrating AI into operational technology (OT) environments that manage essential public services also introduces significant risks – such as OT process models drifting over time or safety-process bypasses – that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure. ... Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle. ... Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration. ... Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance. ... Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans. The agencies said critical infrastructure owners and operators should review this guidance so they can safely and securely integrate AI into OT systems.
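One concrete way to act on the "continuously test and evaluate AI models" guidance above is a drift check that compares model output against trusted sensor readings and alarms before degraded predictions can affect the process. The sketch below is illustrative only; the window size, threshold, and data are invented:

```python
# Minimal drift-check sketch (all thresholds and data hypothetical):
# compare model output against trusted sensor readings over a sliding
# window and alarm when mean error drifts past a tolerance.
from collections import deque

WINDOW = 50          # recent samples to track
DRIFT_LIMIT = 0.05   # mean absolute error tolerated before alarming

errors = deque(maxlen=WINDOW)

def check_sample(model_output: float, sensor_truth: float) -> None:
    errors.append(abs(model_output - sensor_truth))
    if len(errors) == WINDOW and sum(errors) / WINDOW > DRIFT_LIMIT:
        # In practice this would page an operator and fall back to the
        # conventional control loop, not just print.
        print(f"DRIFT ALARM: mean error {sum(errors) / WINDOW:.3f} "
              f"exceeds {DRIFT_LIMIT}")
        errors.clear()

# Hypothetical usage with a slowly degrading model:
for step in range(200):
    check_sample(model_output=1.0 + step * 0.001, sensor_truth=1.0)
```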


Rethinking Risk in a Connected World

As consumer behavior data proliferates and becomes increasingly available, it presents both an opportunity and a challenge for actuaries, Samuell says. Actuaries have the opportunity to better align expected and actual outcomes, while also facing the challenge of accounting for new sources of variability that traditional data does not capture. ... Keep in mind that incorporating behavioral factors into risk models does not guarantee certainty. A customer whom the model predicts to be at high risk of dishonesty may actually act honestly. “Ethical insurers must avoid treating predictive categories as definitive labels,” Samuell says. “Operational guidelines should ensure that all customers are treated with fairness and dignity, even as insurers make better use of available data.” ... Behavioral analytics is also changing how insurers engage with their customers. For example, by understanding how policyholders interact with digital platforms—including how often they log in, which features they use, and where they disengage—insurers can identify friction points and design more intuitive, personalized services. ... Consumer behavior data can also inform communication strategies for insurers. For example, “actuaries often want to be very precise, but data shows that can diminish comprehension of communications,” Stevenson says. ... In addition to data generated by insured individuals through technology, some insurance companies also use data from government and other sources in risk modeling. 
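The point about not treating predictive categories as definitive labels can be made concrete with a small sketch. The weights and signals below are invented; the design choice that matters is carrying a probability through to a routing decision rather than a "dishonest" label:

```python
# Illustrative sketch (invented weights): combine a traditional actuarial
# base rate with behavioral signals into a probability, and keep the
# output probabilistic -- a high score routes the case for review rather
# than labeling the customer dishonest.
import math

def risk_probability(base_log_odds: float,
                     behavioral_signals: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Logistic combination of a base rate and behavioral features."""
    z = base_log_odds + sum(weights[k] * v for k, v in behavioral_signals.items())
    return 1 / (1 + math.exp(-z))

weights = {"login_frequency": -0.3, "claim_edit_count": 0.8}  # hypothetical
p = risk_probability(
    base_log_odds=-2.0,  # roughly a 12% base rate
    behavioral_signals={"login_frequency": 1.2, "claim_edit_count": 2.0},
    weights=weights,
)
action = "manual review" if p > 0.5 else "standard processing"
print(f"risk probability {p:.2f} -> {action}")  # a probability, not a verdict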


Inside the Cyber Extortion Boom: Phishing Gangs and Crime-as-a-Service Trends

Phishing attempts are growing in volume partly because organized crime groups no longer need technical knowledge to launch ransomware or other forms of cyber extortion: they can simply buy in the services they need. This ongoing trend is combined with emerging social engineering techniques, including multi-channel attacks, deepfakes and ClickFix exploits. Cybercriminals are also using AI to fine-tune their operations, with more persuasive personalization, better translation into other languages and easier reconnaissance against high-value targets. It is becoming harder to detect and block attacks, and harder to train workforces to spot suspicious activity. ... “AI has increased the accuracy of a lot of phishing emails. Everybody was familiar with phishing emails; you could spot them by the bad grammar, the poor formatting and stuff like that. Previously, a good attacker could create a good phishing email. All AI has done is allowed the attacker to generate good-quality phishing emails at speed and at scale,” explained Richard Meeus, EMEA director of security strategy and technology at Akamai. ... For CISOs and wider cybersecurity and fraud prevention teams, recent developments in phishing and cyber extortion schemes will pose real challenges in the coming year. “User awareness still matters, but it isn’t enough,” cautioned Forescout’s Ferguson. “In a world of deepfake video, cloned voices and perfect written English, your control point can’t be ‘would our users spot this?’”


AI Fatigue: Is the backlash against AI already here?

The problem of AI fatigue is inevitable, but also to be expected, according to Dr Clare Walsh, director of education at the Institute of Analytics (IoA). “For those working in digital long enough, they know there is always a period after the initial excitement at the launch of a new technology when ordinary users start to see the costs and limitations of the latest technologies,” she says. “After 10 years of non-stop exciting advancements – from the first neural nets in 2016 to RAG solutions today – we may have forgotten this phase of disappointment was coming. It doesn’t negate the potential of AI technology – it is just an inevitable part of the adoption curve.” ... Holding back the tide of AI fatigue is also about not presenting it as the only solution to every problem, warns Claus Jepsen, Unit4’s CTO. “It is absolutely critical the IT team is asking the right questions and thoroughly interrogating the brief from the business,” he explains. “Quite often, AI is not the right answer. If you foist AI onto the business when they don’t want or need it, you’ll get a backlash. You can avoid the threat of AI fatigue if you listen carefully to your team and really appreciate how they want to interact with technology, where its use can be improved, and where it adds absolutely no value.” ... “AI fatigue is not just a productivity issue; it is a board-level risk,” she says. “When workflows are interrupted, or systems overlap, trust in technology erodes, driving disengagement, errors, and higher attrition. ...”


Why Cybersecurity Risk Management Will Continue to Increase in Complexity in 2026

The year 2026 ushers in tougher rules across regions and industries, with compliance pressure building from multiple directions: sector-specific and regional rules are growing tighter, from NIS2 enforcement across Europe to updated PCI DSS controls, while privacy laws continue to tighten and new AI regulations add requirements around algorithmic transparency and data handling. Organizations are now juggling NIST frameworks, ISO 27001 certifications, and sector-specific mandates simultaneously. Each framework arrives with valid intent, yet together they create layers of obligation that rarely align cleanly. This tension surfaced clearly in 2025, when more than forty CISOs from global enterprises urged the G7 and OECD to push for closer regulatory coordination. Their message was simple: fragmented rules drain limited security resources and weaken collective response. ... The majority of organizations no longer run security in isolation. Daily operations depend on cloud providers, managed service partners, niche SaaS tools, and open-source libraries pulled into production without much ceremony. The problem keeps compounding: your vendors have their own vendors, creating chains of dependency that stretch impossibly far. You can secure your own network perfectly and still get breached because a third-party contractor left credentials exposed.
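The vendors-of-vendors problem is easy to see in miniature. The sketch below walks a hypothetical graph of declared supplier relationships to count every party in the extended chain; real dependency data would come from contracts, SBOMs, and vendor questionnaires:

```python
# Sketch (hypothetical vendor graph): a breadth-first walk over declared
# supplier relationships shows how many parties actually sit in the
# supply chain once transitive dependencies are counted.
from collections import deque

# Each key declares its direct suppliers -- invented names for illustration.
vendors = {
    "us": ["cloud_provider", "msp", "saas_crm"],
    "cloud_provider": ["hardware_oem"],
    "msp": ["ticketing_saas", "remote_access_tool"],
    "saas_crm": ["analytics_lib", "cloud_provider"],
    "ticketing_saas": [], "remote_access_tool": [],
    "hardware_oem": [], "analytics_lib": [],
}

def transitive_suppliers(root: str) -> set[str]:
    seen, queue = set(), deque([root])
    while queue:
        node = queue.popleft()
        for dep in vendors.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

chain = transitive_suppliers("us")
print(f"{len(chain)} parties in the extended supply chain: {sorted(chain)}")
```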


Seven steps to AI supply chain visibility — before a breach forces the issue

NIST’s AI Risk Management Framework, released in 2023, explicitly calls for AI-BOMs as part of its “Map” function, acknowledging that traditional software SBOMs don’t capture model-specific risks. But software dependencies resolve at build time and stay fixed, whereas model dependencies resolve at runtime, often fetching weights from HTTP endpoints during initialization, and mutate continuously through retraining, drift correction, and feedback loops. LoRA adapters modify weights without version control, making it impossible to track which model version is actually running in production. ... AI-BOMs are forensics, not firewalls. When ReversingLabs discovered nullifAI-compromised models, documented provenance would have immediately identified which organizations downloaded them. That’s invaluable for incident response, but practically useless for prevention, and budgets for AI-BOM programs need to take that distinction into account. The ML-BOM tooling ecosystem is maturing fast, but it’s not where software SBOMs are yet. Tools like Syft and Trivy generate complete software inventories in minutes; ML-BOM tooling is earlier in that curve. Vendors are shipping solutions, but integration and automation still require additional steps and more effort, so organizations starting now may need manual processes to fill gaps. And AI-BOMs won’t stop model poisoning, as that happens during training, often before an organization ever downloads the model.
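The forensics value described above comes from capturing provenance at download time. A rough sketch follows; the field names loosely echo the CycloneDX ML-BOM idea but are illustrative rather than spec-conformant, and the file path and URL are placeholders:

```python
# Forensics-oriented sketch: record model provenance at download time so
# a later compromise disclosure (e.g. the nullifAI case) maps straight to
# affected deployments. Fields are illustrative, not a conformant BOM.
import hashlib
import json
from datetime import datetime, timezone

def aibom_entry(model_path: str, source_url: str, version: str) -> dict:
    sha256 = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    return {
        "type": "machine-learning-model",
        "name": model_path,
        "version": version,
        "source": source_url,
        "hashes": [{"alg": "SHA-256", "content": sha256.hexdigest()}],
        "downloaded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage (placeholder path and URL):
# print(json.dumps(aibom_entry("weights.safetensors",
#                              "https://example.org/model", "1.0"), indent=2))
```

Matching a recorded hash against a later-published indicator list is exactly the incident-response lookup the excerpt describes.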


Power, compute, and sovereignty: Why India must build its own AI infrastructure in 2026

Digital infrastructure decisions made in 2026 will shape India’s technological posture well into the 2040s. Data centers, power systems, and AI platforms are not short-cycle investments; they are multi-decade commitments. In this context, policy clarity becomes a prerequisite for execution rather than an afterthought. Clear, stable frameworks around data governance, AI regulation, cross-border compute flows, and energy integration reduce long-term risk and enable infrastructure to be designed correctly the first time. Ambiguity forces fragmentation: capital hesitates, architectures become reactive, and systems are retrofitted instead of engineered. As India accelerates its AI ambitions, predictability in policy will be as important as speed in deployment. ... In India’s context, sovereignty does not imply isolation. It implies resilience. Compliance, data residency, and AI governance cannot be retrofitted into infrastructure after it is built. They must be embedded from inception, governing where data resides, how it moves, how workloads are isolated, audited, and secured, and how infrastructure responds to evolving regulatory expectations. Systems designed this way reduce friction for enterprises operating in regulated environments and provide governments with confidence in domestic digital capability. This reality also reframes the role of domestic technology firms.
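"Embedded from inception" can be as simple as a design-time policy gate. The sketch below (hypothetical region codes and workload records) rejects a deployment plan if regulated data would leave the approved jurisdiction:

```python
# Residency policy sketch (hypothetical regions and workloads): check a
# deployment plan at design time rather than retrofitting compliance.
ALLOWED_REGIONS = {"in-mumbai", "in-hyderabad"}  # hypothetical region codes

workloads = [
    {"name": "citizen-records-db", "region": "in-mumbai", "class": "regulated"},
    {"name": "model-training", "region": "us-east", "class": "regulated"},
]

violations = [w for w in workloads
              if w["class"] == "regulated" and w["region"] not in ALLOWED_REGIONS]
for w in violations:
    print(f"BLOCK: {w['name']} places regulated data in {w['region']}")
```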


Why AI Risk Visibility Is the Future of Enterprise Cybersecurity Strategy

Vulnerabilities arise from two sources: internal infrastructure and third-party tools that companies rely on. Organizations typically have stronger control over internally developed systems. The complexity stems from third-party software that introduces new risks whenever a new version or patch is released. A comprehensive asset inventory is essential for documenting the software and hardware resources in use. Once the enterprise knows what it has, it can evaluate which systems pose the highest risk. Asset management, infrastructure, and information security teams, along with audit functions, all contribute to that assessment. Together, they can determine where remediation must occur first. Cloud service providers are responsible for cloud-based Software as a Service (SaaS) applications. It’s vital, however, for the company to take on data governance and service offboarding responsibilities. Contracts must clearly specify how data is handled, transferred, or destroyed at the end of the relationship. ... Alignment between business and IT leadership is essential. The chief information officer (CIO) approves the IT project kickoff and allocates the required budget and other resources. The business analysis team translates those needs into technical requirements. Quarterly scorecards and governance checkpoints create visibility, enabling leaders to make decisions that balance business outcomes and technical realities.
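Once the inventory exists, the prioritization step the excerpt describes can start very simply. The sketch below uses invented likelihood and impact scores to produce the shared remediation order those teams would work from:

```python
# Sketch (invented scores): a simple likelihood x impact ranking over the
# asset inventory gives the contributing teams a shared remediation order.
assets = [
    {"name": "erp-server", "third_party": True,  "likelihood": 4, "impact": 5},
    {"name": "hr-saas",    "third_party": True,  "likelihood": 3, "impact": 4},
    {"name": "intranet",   "third_party": False, "likelihood": 2, "impact": 2},
]

for a in sorted(assets, key=lambda a: a["likelihood"] * a["impact"],
                reverse=True):
    tag = "(third-party)" if a["third_party"] else ""
    print(f"{a['name']:<12} risk={a['likelihood'] * a['impact']:>2} {tag}")
```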


Why are IT leaders optimistic about future AI governance?

IT leaders are optimistic about AI’s transformative potential. This optimism extends to AI governance, where the strategic integration of non-human identity (NHI) management enhances security and enables organizations to confidently pursue AI initiatives. It’s essential to ensure that security measures evolve alongside technological advancements, safeguarding AI systems without stifling innovation. ... Can robust security and innovation coexist harmoniously? The answer lies in striking a balance between rigorous security measures and fostering an environment conducive to innovation. Properly managing NHIs equips organizations with the flexibility to innovate while maintaining a fortified security posture. As artificial intelligence and automation advance, machine identities play an increasingly pivotal role in enabling these technologies. By ensuring that machine interactions are secure and transparent, businesses can confidently explore the transformative potential of AI without compromising on security. Herein lies the essence of responsible AI governance: leveraging data-driven insights to enable ethical and sustainable technological growth while safeguarding against inherent risks. ... What can organizations do to harness the collective expertise of stakeholders? With cyber threats increasingly sophisticated, collaboration becomes the cornerstone of a resilient cybersecurity framework.
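One concrete piece of the NHI management the excerpt describes is tracking every machine identity and flagging credentials that have outlived rotation policy. The sketch below is illustrative; the identities, dates, and 90-day policy are all hypothetical:

```python
# NHI hygiene sketch (hypothetical inventory): flag machine-identity
# credentials that have not rotated within policy, a check that supports
# AI pipelines and agents without slowing them down.
from datetime import date

MAX_AGE_DAYS = 90  # hypothetical rotation policy

machine_identities = [
    {"id": "svc-training-pipeline", "last_rotated": date(2025, 12, 20)},
    {"id": "svc-inference-gateway", "last_rotated": date(2025, 8, 1)},
]

today = date(2026, 1, 3)  # pinned to the digest's date for reproducibility
for nhi in machine_identities:
    age = (today - nhi["last_rotated"]).days
    if age > MAX_AGE_DAYS:
        print(f"ROTATE: {nhi['id']} credential is {age} days old")
```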
