
Daily Tech Digest - March 13, 2026


Quote for the day:

“Too many of us are not living our dreams because we are living our fears.” -- Les Brown





Agile Without The Chaos: A DevOps Manager’s Playbook

In this article, DevOps Oasis presents a pragmatic strategy for moving beyond "agile theatre" to build sustainable, high-velocity teams. The author contends that true agility is a promise to learn fast and deliver in small slices, rather than a rigid adherence to ceremonies. The playbook details several critical pillars for success: honest planning, refined backlogs, and the integration of operational reality. Instead of over-committing, managers are urged to leave capacity for inevitable interrupts and maintain two distinct horizons—short-term committed work and mid-term shaped bets. A healthy backlog is characterized by a "production-ready" Definition of Done, ensuring code is observable and safe before it is considered finished. Crucially, the guide argues for making on-call duties and incident responses a formal part of the agile lifecycle rather than treating them as disruptive outliers. Performance measurement is also reimagined, shifting from vanity story points to high-trust metrics like lead time, change failure rate, and SLO compliance. By fostering a blameless culture and leveraging automated delivery pipelines as the backbone of agility, DevOps leaders can replace systemic chaos with a calm, outcome-driven environment that prioritizes user value and team well-being.
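
The metrics shift is easy to operationalize. Below is a minimal sketch, using made-up deployment records rather than anything from the article, of how a team might compute two of the high-trust metrics it names, lead time and change failure rate:

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: (commit_time, deploy_time, caused_incident)
deployments = [
    (datetime(2026, 3, 1, 9), datetime(2026, 3, 2, 14), False),
    (datetime(2026, 3, 3, 10), datetime(2026, 3, 3, 16), True),
    (datetime(2026, 3, 5, 11), datetime(2026, 3, 6, 9), False),
]

# Lead time: elapsed time from commit to production deploy.
lead_times_hours = [(d - c).total_seconds() / 3600 for c, d, _ in deployments]

# Change failure rate: share of deploys that triggered an incident.
failure_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)

print(f"Median lead time: {median(lead_times_hours):.1f} h")
print(f"Change failure rate: {failure_rate:.0%}")
```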


Engineering Reliability for Compliance-Bound AI Systems

In this article published on the Communications of the ACM (CACM) blog, Alex Vakulov argues that regulated industries require a fundamental shift in AI development, moving from model-centric optimization to system-centric reliability. In sectors like finance, law, and healthcare, statistical accuracy is insufficient because "mostly right" outputs can lead to legal and professional catastrophe. Instead of focusing solely on reducing hallucinations through model tweaks, Vakulov advocates for architectural constraints that bake domain-specific doctrine directly into the software pipeline. This strategy addresses critical failure modes—such as material omission and relevance indiscrimination—by ensuring essential information is prioritized and all assertions remain grounded in traceable sources. By structuring AI systems as constrained pipelines, engineers can enforce non-negotiable requirements like data isolation and regulatory compliance at the retrieval, filtering, and generation layers. This approach treats reliability as a property of bounded behavior rather than just a cognitive feat, ensuring that AI operates within strict legal and safety limits regardless of model variability. Ultimately, the piece calls for an interdisciplinary collaboration to translate professional standards into executable technical constraints, transforming AI from a probabilistic tool into a dependable asset for high-assurance environments.
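
To make the idea of a constrained pipeline concrete, here is an illustrative sketch, with invented function and source names not taken from Vakulov's piece, of enforcing data isolation at the retrieval layer, doctrine at the filtering layer, and grounded-or-refuse behavior at the generation layer:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str
    tenant: str
    text: str

# Only vetted source systems may ground an answer (illustrative names).
ALLOWED_SOURCES = {"regulatory_db", "case_law"}

def retrieve(query: str, tenant: str, corpus: list[Passage]) -> list[Passage]:
    # Data isolation enforced at retrieval: only the caller's tenant partition.
    return [p for p in corpus if p.tenant == tenant and query.lower() in p.text.lower()]

def filter_passages(passages: list[Passage]) -> list[Passage]:
    # Doctrine enforced at filtering: drop anything from unvetted sources.
    return [p for p in passages if p.source_id in ALLOWED_SOURCES]

def generate(passages: list[Passage]) -> str:
    # Bounded behavior: refuse rather than emit an ungrounded assertion.
    if not passages:
        return "No grounded answer available; escalate to a human reviewer."
    citations = ", ".join(p.source_id for p in passages)
    return f"Answer drawn from traceable sources: {citations}"

corpus = [Passage("regulatory_db", "bank_a", "Liquidity coverage ratio rules..."),
          Passage("web_scrape", "bank_a", "Liquidity hot takes from a blog...")]
print(generate(filter_passages(retrieve("liquidity", "bank_a", corpus))))
```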


The Legal and Policy Fallout from Data Center Strikes in the Middle East War

This article by Mahmoud Abuwasel examines the unprecedented military targeting of hyperscale cloud infrastructure, specifically focusing on drone strikes against AWS facilities in the UAE and Bahrain. These strikes mark a watershed moment in which data centers, traditionally viewed as civilian property, are reclassified as legitimate military targets due to their dual-use nature in hosting both commercial and defense workloads. The author explores a century-old legal precedent, notably the 1923 Cuba Submarine Telegraph Company case, which suggests that private sector entities have little recourse for compensation when their infrastructure is utilized for state military purposes. Furthermore, the piece highlights a "liability trap" for service providers; regional courts often reject force majeure defenses in war zones, placing the financial burden of outages and data loss entirely on the tech companies. As governments enforce strict data localization mandates, they inadvertently concentrate sensitive assets into high-value strike zones, complicating digital sovereignty and disaster recovery. Ultimately, the article warns that this militarization of civilian technology will likely extend into space-based assets, necessitating an urgent overhaul of international policy, insurance frameworks, and geopolitical risk assessments to protect the global digital backbone during times of conflict.

In this article on CIO.com, author Richard Ewing explores the persistent friction between the iterative nature of Agile development and the rigid requirements of traditional corporate finance. The primary conflict stems from a significant "language barrier": while engineering teams prioritize velocity and story points, CFOs focus on capitalization, amortization, and earnings per share. This misalignment often leads to R&D budget cuts because Agile’s continuous delivery model frequently translates to Operating Expenditure (OpEx), which immediately impacts a company's profit and loss statement, rather than Capital Expenditure (CapEx), which can be depreciated over several years. To address this, Ewing suggests that CIOs must move beyond a "trust me" model and instead implement a "capitalization matrix" to translate technical tasks into economic terms. By using "narrative tags" in tools like Jira to explain how refactoring work enhances long-term assets, engineering teams can provide the financial transparency necessary for CFO support. Ultimately, the article argues that for Agile transformations to succeed in an efficiency-driven economy, technical leaders must develop financial fluency, reframing Agile as a predictable driver of sustainable business value rather than an opaque operational cost.
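
As a rough illustration of what a "capitalization matrix" could look like in practice, the sketch below classifies labeled work items into CapEx and OpEx buckets; the labels, rules, and costs are hypothetical, not Ewing's:

```python
# Illustrative capitalization matrix: map work-item labels to accounting
# treatment so engineering output can be reported in the CFO's terms.
CAPITALIZABLE = {"new-feature", "platform-buildout"}   # enhances a long-term asset
EXPENSED = {"bugfix", "maintenance", "support"}        # period cost, hits P&L now

tickets = [
    {"key": "PAY-101", "labels": {"new-feature"}, "cost": 40_000},
    {"key": "PAY-102", "labels": {"bugfix"}, "cost": 5_000},
    {"key": "PAY-103", "labels": {"platform-buildout"}, "cost": 60_000},
]

capex = sum(t["cost"] for t in tickets if t["labels"] & CAPITALIZABLE)
opex = sum(t["cost"] for t in tickets if t["labels"] & EXPENSED)
print(f"CapEx (depreciable): ${capex:,}  |  OpEx (immediate P&L impact): ${opex:,}")
```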


AI agents are the perfect insider

In this article on Techzine, author Berry Zwets highlights a critical emerging threat in cybersecurity: the rise of agentic AI as an autonomous, 24/7 "insider." Unlike human employees, AI agents have persistent access to sensitive corporate data and never sleep, creating a significant blind spot for security teams who fail to specifically monitor them. Helmut Reisinger, CEO EMEA of Palo Alto Networks, warns that the window between a breach and data theft has plummeted from nine days to just over an hour. This acceleration is driven by the speed, scale, and sophistication of "production AI" used by malicious actors. Despite the rapid adoption of AI, only about 6% of global deployments currently include appropriate security measures, leaving many organizations vulnerable to insider risks. To counter this, industry leaders are shifting toward "platformization"—integrating AI runtime security, identity management, and real-time observability to bridge the gaps between fragmented legacy tools. By treating AI agents as privileged machine identities that require continuous inspection and zero-trust verification, enterprises can secure their digital environments against these tireless, high-speed threats. Ultimately, the piece argues that securing the AI runtime is no longer optional but a strategic imperative for the modern, agentic era.


UK Fraud Strategy considers business digital identity and IDV

In a comprehensive new fraud strategy for 2026–2029, the UK government has pledged a substantial investment of over £250 million to combat the evolving landscape of cyber-enabled crime and identity fraud. Recognizing that fraud is now the single largest crime type in the UK, the strategy prioritizes the integration of advanced identity verification (IDV) and digital identity frameworks for both individuals and businesses. Central to this initiative is a "Call for Evidence" regarding the communications sector to reduce anonymity and strengthen "Know Your Customer" protocols, alongside the creation of a secure central database for telephone numbers to block fraudulent activity. Furthermore, the government is exploring digital company identities to secure supply chains and will mandate electronic VAT invoicing by 2029 to prevent document interception. To counter the rising threat of AI-generated deepfakes and synthetic media, the Home Office is collaborating with tech departments to develop detection frameworks. By shifting toward an outcomes-based authentication approach and promoting the adoption of passkeys through the UK Digital Identity and Attributes Trust Framework, the strategy aims to align public and private sectors in building a resilient digital environment that protects the economy while fostering trust in modern corporate structures.


How to Scale Phishing Detection in Your SOC: 3 Steps for CISOs

This article on The Hacker News highlights the evolving complexity of modern phishing attacks, which now leverage legitimate infrastructure and encrypted traffic to bypass traditional security layers. To combat these sophisticated threats, Chief Information Security Officers (CISOs) are encouraged to adopt a proactive three-step model focused on speed and behavioral visibility. First, the article emphasizes the importance of safe interaction through interactive sandboxing, allowing analysts to explore malicious redirect chains and credential harvesting pages without risking corporate assets. Second, it advocates for intelligent automation that combines automated execution with human-like interactivity to navigate complex obstacles such as CAPTCHAs and QR codes, significantly increasing investigation throughput. Finally, the piece underscores the necessity of SSL decryption to unmask threats hidden within encrypted HTTPS sessions by extracting encryption keys directly from memory. By implementing these strategies—specifically leveraging tools like ANY.RUN—organizations can achieve up to a threefold increase in SOC efficiency, reduce analyst burnout, and cut Mean Time to Repair (MTTR) by over twenty minutes per case. Ultimately, scaling phishing detection requires moving beyond static indicators to a dynamic, evidence-based approach that uncovers the full attack lifecycle before business impact occurs.


CISO Conversations: Aimee Cardwell

In this SecurityWeek feature, Aimee Cardwell shares her unconventional path from a product management and engineering background into elite cybersecurity leadership. Currently serving as CISO in Residence at Transcend after high-profile roles at UnitedHealth Group and American Express, Cardwell advocates for a leadership style rooted in low ego, deep curiosity, and radical empowerment. She rejects the traditional "general" model of leadership, instead fostering a cohesive team environment where strategy is defined collectively and credit is consistently redirected to individual contributors. A central theme of her philosophy is "customer-obsessed" security, emphasizing that practitioners must act as business enablers who understand the strategic "forest" while managing the tactical "trees." Cardwell also highlights the critical issue of burnout, implementing innovative solutions like "half-day Fridays" to recognize the immense pressure on security teams. Furthermore, she stresses the importance of interdepartmental partnerships with privacy and audit teams to pool resources and align goals. Looking ahead, she identifies AI-generated social engineering as a looming threat, noting that hyper-personalized attacks require a new level of vigilance. By blending technical expertise with human-centric empathy, Cardwell illustrates how contemporary CISOs can protect organizational assets while simultaneously driving a culture of innovation and resilience.


Skills-based cyber talent practices boost retention

This article, published by SecurityBrief, highlights groundbreaking research from Women in CyberSecurity (WiCyS) and FourOne Insights. The study, titled The ROI of Resilience, demonstrates that shifting toward skills-based talent management—such as mentorship, personalized learning, and objective skills-based promotions—can save organizations over $125,000 per employee. These practices significantly improve the bottom line by reducing hiring friction and increasing retention by up to 18%. Furthermore, the research reveals that skills-based promotion panels and formal development pathways are linked to a 10% to 20% increase in female representation within cybersecurity leadership roles. Despite these clear financial and operational advantages, the adoption of such methods remains low, with no top-performing practice used by more than 55% of organizations. The report emphasizes that external partnerships with professional organizations can speed up the hiring process by 16% and prevent $70,000 in lost productivity per employee. As AI and automation continue to transform the cybersecurity landscape, the findings argue that workforce resilience is a measurable business advantage rather than a simple HR initiative. Ultimately, the piece calls for a shift away from traditional degree-based filters toward a more agile, skills-informed workforce strategy.


Self-Healing and Intelligent Data Delivery at Scale

In this TDWI article, Dr. Prashanth H. Southekal discusses the limitations of traditional data pipelines in the face of modern data demands characterized by high volume, velocity, and variety. As organizations transition to real-time, distributed architectures, conventional batch-oriented systems often fail, leading to eroded data quality and business trust. To address these challenges, the author introduces self-healing systems as a critical evolution in data management. These systems are designed to continuously observe, detect, and remediate data quality incidents—such as schema drift or missing records—with minimal human intervention. By integrating machine learning and generative AI, self-healing architectures can correlate signals across diverse datasets to identify root causes and proactively anticipate failures before they impact downstream applications. This approach shifts the human role from reactive firefighting to strategic oversight and policy definition. Ultimately, a self-healing framework minimizes data downtime and business risk, transforming data quality from a manual burden into an automated, first-class signal. This paradigm shift ensures that data integrity remains robust even as complexity scales, allowing enterprises to maintain high confidence in their analytical insights and automated workflows.
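
A minimal sketch of the detect-and-remediate loop might look like the following, where an incoming record is checked against an expected schema contract, missing fields are defaulted, and drifted columns are flagged rather than crashing the pipeline (field names are invented for illustration):

```python
# Expected schema contract and safe defaults (illustrative fields only).
EXPECTED_SCHEMA = {"order_id": str, "amount": float, "region": str}
DEFAULTS = {"region": "UNKNOWN"}

def heal_record(record: dict) -> tuple[dict, list[str]]:
    issues, healed = [], {}
    for field in EXPECTED_SCHEMA:
        if field not in record:                      # missing field: remediate
            healed[field] = DEFAULTS.get(field)
            issues.append(f"missing:{field}")
        else:
            healed[field] = record[field]
    for extra in record.keys() - EXPECTED_SCHEMA.keys():
        issues.append(f"drift:{extra}")              # new column: flag, don't crash
    return healed, issues

record, issues = heal_record({"order_id": "A1", "amount": 19.9, "channel": "web"})
print(record, issues)  # 'region' defaulted, 'channel' flagged as schema drift
```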

Daily Tech Digest - February 23, 2026


Quote for the day:

"Prepare, work smarter, Learn from your Mistakes. These are the secret to success!" -- Elizabeth McCormick



What’s wrong (and right) with AI coding agents

“At the scale AI is generating pull requests today, humans simply can’t keep up. You don’t check the accuracy of Excel with an abacus… and in 2026 we shouldn’t expect maintainers to manually inspect machine-speed code without machine-speed assistance,” said Fox. “AI reviews can go deeper than humans in many cases. They don’t get tired, they can reason across large codebases… and they can spot patterns at a scale no individual reviewer can hold in their head. If AI is generating more code, the only viable answer is to use AI to help review and validate it. You have to fight fire with fire.” ... He reminds us that quantity does not always equal quality – especially in the AI-driven world we now live in. He notes that, at least for now, the reality is that AI development tools and ‘vibe coding’ can generate a lot of code very quickly, but code that’s often slower and more memory‑hungry than what a skilled developer would write. ... Although this entire discussion is focused on the now-increasingly-automated command line, it feels like the real focus should sit higher up the stack, at the level of architecture, which has been mentioned already. “We’re entering a world where, with AI, software changes are propagating faster than governance models can track them. That means AI tools are, plain and simple, accelerating systemic complexity. When an AI agent can generate and deploy changes across interconnected enterprise systems, there’s real danger in the invisible dependencies and downstream effects most orgs can’t fully see,” said Ido Gaver.


Identity verification systems are struggling with synthetic fraud

The researchers tied the growth of synthetic identity fraud to the increasing use of AI tools, which can generate convincing fake documents that pass casual inspection. “The biggest risk I see in the next 12 to 18 months is the growing and advancing use of AI. AI is creating fake people, fake voices, and fake documents. Bad actors are using these capabilities to open accounts, take over existing accounts, and impersonate real people in places like bank branches,” Lewis said. ... Financial institutions remain a major target for identity fraud due to access to credit, account funding, and cash movement. A successful fraudster can monetize a single fake or synthetic identity for tens of thousands of dollars before detection, making the sector a frequent target. Online-only retail banks recorded the highest rate of failed identity verification among the financial institution categories in Intellicheck’s dataset. The report also found elevated failure rates across businesses serving underbanked consumers, including check cashing, payday lending, subprime lending, and lease-to-own services. ... AI tools are being used to produce synthetic IDs that are difficult for humans to spot. Lewis said attackers are already using AI and large language models to generate documents that can bypass basic checks. “AI and LLM can create fake ID’s that can easily pass the templating test, old methods don’t work and ID verification service providers can’t rest on their laurels,” Lewis said. 


Neoclouds: Meeting demand for AI acceleration

This surge in demand for AI acceleration has seen a surprising beneficiary. According to Tiger Research, cryptocurrency mining firms, seeking to reduce their exposure to bitcoin’s volatile pricing, are redirecting their graphics processing unit (GPU) farms toward AI acceleration applications. ... Before the emergence of neoclouds a few years ago, if an organisation wanted to work with AI, it had no choice but to go to a hyperscaler like Amazon Web Services (AWS) or Google. While the hyperscalers offer AI infrastructure as part of their vast public cloud services portfolio, Roy Illsley, chief analyst at Omdia, says the hyperscalers tend to be expensive and, as he recalls, a few years ago, there was very little choice other than Google’s AI offerings. ... AI infrastructure strategies are becoming inherently hybrid and multicloud by design – not as a by-product of supplier sprawl, but as a deliberate response to workload reality. The cloud market is fragmenting along functional lines, and neoclouds occupy a clear and growing role within that landscape. “Neoclouds started as GPU as a service. If you needed GPUs, these companies bought or leased GPUs from Nvidia, and then they would slice them and sell them off to people in smaller groups and bundles,” says Omdia’s Illsley. However, over time, neocloud providers have added software stacks and developed other services to meet the demand of IT buyers who need GPU power and the software stack required for AI training or AI inferencing.


Sam Altman just said what everyone is thinking about AI layoffs

This isn’t the first time industry stakeholders have questioned the veracity of AI-related layoffs. A study by Oxford Economics in January this year claimed most layoffs are due to “more traditional drivers” such as overhiring or poor financial performance. ... "While a rising number of firms are pinning job losses on AI, other more traditional drivers of job layoffs are far more commonly cited,” the report said. “What's more, we suspect some firms are trying to dress up layoffs as a good news story rather than bad news, such as past over-hiring." ... “There’s some real displacement by AI of different kinds of jobs,” he said. “We’ll find new kinds of jobs as we do with every tech revolution. I would expect that the real impact of AI doing jobs in the next few years will begin to be palpable.” Altman’s prediction here aligns with research from Gartner and Forrester on the potential impact of AI on the global jobs market. In January, Forrester predicted 10 million jobs could be lost worldwide as enterprise adoption ramps up. ... Despite a string of studies pointing to the contrary, some tech industry figures still believe that AI will eventually render some workers obsolete. In a recent interview with the Financial Times, for example, Microsoft AI CEO Mustafa Suleyman insisted AI will begin replacing “white collar” workers within 18 months. “I think we’re going to have a human-level performance on most if not all professional tasks,” Suleyman told the Financial Times.


Jailbreaking the matrix: How researchers are bypassing AI guardrails to make them safer

As AI assistants move from novelty to infrastructure, helping write code, summarizing medical notes and answering customer questions, the biggest question isn't just what these systems can do, but what happens when they are pushed to do what they shouldn't. "By showing exactly how these defenses break, we give AI developers the information they need to build defenses that actually hold up," Jha said. "The public release of powerful AI is only sustainable if the safety measures can withstand real scrutiny, and right now, our work shows that there's still a gap. We want to help close it." ... Focusing on the internal workings of the LLM allows more accurate measurements of failures while encouraging the development of more robust defenses against the failure of safety measures. According to the researchers, HMNS can help reveal whether specific internal pathways, if exploited, could cause a breakdown. That information can guide stronger training, monitoring and defense strategies. ... Understanding the security shortcomings of LLMs is critical as they become more widespread. Companies like Meta, Alibaba and others have released powerful AI models that are available to anyone. While each platform incorporates safety layers meant to keep it from being misused, the UF team has found that those safety layers can be systematically bypassed.


Plan vs. planning: Why continuous planning must traverse time

The problem is not the plan’s quality. The problem is that a plan freezes a moment in time while the organization continues to move through time. Planning, by contrast, must be a continuous discipline, remaining active as assumptions decay, signals emerge and constraints shift. ... Planning exists to test those assumptions continuously, a distinction long recognized in leadership and management literature that separates planning as an ongoing discipline from the plan as a static artifact. Plans are optimized for agreement and commitment. Planning is optimized for learning, decision-making and managing consequences in the face of uncertainty. In practice, this means consequences must be visible at the moment of decision, not discovered months later through execution. ... Many enterprises optimize for compliance, predictability and approval at the expense of feedback and adaptation. Learning is pushed downstream, arriving only after outcomes are locked in and costs incurred. Systems theorist Russell Ackoff described this dynamic clearly: “Most organizations are not short of information. They are short of the ability to learn from it.” Continuous planning restores learning by design, not as postmortem analysis, but as pre-decision feedback. Feedback that arrives before commitment changes behavior. Feedback that arrives after execution becomes an explanation. In volatile environments, that timing difference is decisive, which is why scenario planning and structured foresight have re-emerged as critical executive tools.


The rise of AI factories: Powering an era of pervasive intelligence

In India alone, Google is building a gigawatt-scale AI hub in Visakhapatnam. Microsoft is expanding its cloud and AI footprint in Pune and Chennai and creating a new “India South Central” region in Hyderabad. In partnership with NVIDIA, Reliance Jio is developing a major AI data center in Jamnagar for nationwide GPU-as-a-service offerings. TCS is planning a 1-gigawatt AI data center, likely in Gujarat or Maharashtra, to support startups, hyperscalers, and government institutions. And as part of its Stargate project, OpenAI is actively scouting locations in India for what could become one of the largest AI data centers in all of Asia. ... The growth of AI represents a fundamental transformation in how the world builds and operates computing infrastructure. While traditional data centers are designed for general-purpose workloads, AI superclusters are purpose-built facilities that function as industrial-scale intelligence production systems. And their output is defined by new metrics — most notably tokens per watt and tokens per dollar — that quantify the efficiency and productivity of intelligence at scale. ... To deliver the performance at scale that AI requires, silicon designers are increasingly turning to multi-die designs, including 3D integrated circuits (3DIC) and chiplet-based architectures. While these chip designs offer gains that traditional monolithic SoCs cannot achieve cost-effectively, they also introduce significant complexity to the design process.
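
For a sense of how those metrics work, here is a back-of-envelope calculation with entirely made-up inputs; none of the figures come from the article:

```python
# Toy AI-factory efficiency metrics over one hour of operation.
# All values below are invented placeholders for illustration.
tokens_served = 5_000_000_000        # tokens generated in one hour
power_draw_watts = 2_000_000         # 2 MW average facility draw
hourly_cost_usd = 900                # power + amortized hardware per hour

tokens_per_watt = tokens_served / power_draw_watts
tokens_per_dollar = tokens_served / hourly_cost_usd

print(f"{tokens_per_watt:,.0f} tokens/W  |  {tokens_per_dollar:,.0f} tokens/$")
```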


Cognizant CAIO Babak Hodjat explains how Agentic AI will transform enterprises

One of the things that agentic systems do is they allow for a diversity of data sources because you can actually have an agent responsible for a data source talking to other agents responsible for other data sources. Your interface into this system could be a consolidation of information and decisions that come from these disparate sources. It is the first time that we can actually have a mapping between intent and disparate sources of data and applications. I think that will work well. That kind of design can work well in a country like India with such diversity of data. ... Population-based approaches like genetic algorithms are very good at non-linear optimisation, especially if you are looking at multiple outcomes at the same time. Pretty much every problem that we look at is multi-objective. Every problem that we look at has improved revenue but reduced costs. You look at curing disease but reduce impact on the economy. It is always more than one outcome that we are looking at. In problems like optimisation of power grids or managing urban traffic systems, these are very well-suited algorithms. ... There are two opposing forces when it comes to AI. Scaling laws mean that building bigger is more powerful, and building bigger typically means using more energy. Many companies are looking at green sources for that additional consumption. On the other hand, companies are optimising models to be smaller and less energy-hungry. For multi-agent systems, smaller models can be more cost-effective and greener.
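
As a toy illustration of the population-based, multi-objective search Hodjat describes, the sketch below evolves candidates scored on two competing objectives at once; the objective functions and weights are invented for demonstration:

```python
import random

random.seed(0)

def fitness(x: float) -> tuple[float, float]:
    revenue = -(x - 3) ** 2 + 9     # revenue proxy, peaks at x = 3
    cost = abs(x - 1)               # cost proxy, cheapest at x = 1
    return revenue, cost

def score(x: float) -> float:
    r, c = fitness(x)
    return r - 0.5 * c              # simple scalarization of the two objectives

# Evolve a population by selection plus Gaussian mutation (no crossover,
# for brevity); real multi-objective GAs would track a Pareto front.
population = [random.uniform(0, 6) for _ in range(20)]
for _ in range(50):
    population.sort(key=score, reverse=True)
    parents = population[:10]
    population = parents + [p + random.gauss(0, 0.3) for p in parents]

best = max(population, key=score)
print(f"best x = {best:.2f}, (revenue, cost) = {fitness(best)}")
```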


Inference Becomes the Next AI Chip Battleground

Inference has fundamentally different economics and performance requirements than training, said Karl Freund, founder and principal analyst at Cambrian AI Research. Training AI models is a cost center, while inference is a “profit center” that directly generates revenue. Freund and Kimball noted that while GPUs deliver excellent performance, they often carry architectural features optimized for training that don’t always translate to lower latency or higher efficiency in pure inference use cases. Purpose-built inference chips – ASICs and other accelerators – can deliver faster responses, improved energy efficiency, and lower total cost of ownership. ... "As inference workloads exceed the total amount of training workloads in terms of token output, there will be a greater need for diversity because alternative XPU architectures can achieve better efficiency on some specific inferencing tasks,” said Brendan Burke, research director of semiconductors, supply chain, and emerging tech at Futurum Group. ... Inference opportunities span data centers and the edge, and requirements vary widely by workload and deployment. “The inference you do in your autonomous vehicle is far different than the inferencing you do when you’re an online customer service bot,” Kimball said. ... Analysts expect Nvidia to maintain dominance in both training and inference, but diverse requirements create space for specialized solutions to capture share. 


Why the CFO's Playbook Belongs on Every CIO's Desk

Recent research from Gartner on how CFOs are allocating budgets gives CIOs insight into what priorities look like across departments, and where technology and AI can help move the needle. The research firm's CFO Report: Q1 2026 finds that while budgets are shifting and AI ambitions are high, enterprise-wide AI success remains an aspiration rather than a reality. ... AI is also changing the conversation on ROI for both finance and technology leaders. "There's a lot more to evaluating the success of some of this investment in technology than simply just ROI, and AI is definitely helping change that," Abbasi said. "AI isn't your traditional asset." Unlike standard hardware expenditures, AI investments don't have predictable depreciation curves, and the ways in which returns on AI investment may show up across the business can vary. They may manifest in time to market, customer satisfaction or competitive positioning, not just in cost savings, Abbasi said. CIOs should be sure to articulate how AI will generate strategic returns rather than focus on pitching it as a capital project. "It changes the way you measure the effectiveness of AI, as well as how you measure your business more holistically," he said. "It's not like a traditional asset because you don't necessarily know what the outcomes are going to be for some of these AI projects."

Daily Tech Digest - January 26, 2026


Quote for the day:

"When I finally got a management position, I found out how hard it is to lead and manage people." -- Guy Kawasaki



Stop Choosing Between Speed and Stability: The Art of Architectural Diplomacy

In contemporary business environments, Enterprise Architecture (EA) is frequently misunderstood as a static framework—merely a collection of diagrams stored digitally. In fact, EA functions as an evolving discipline focused on effective conflict management. It serves as the vital link between the immediate demands of the present and the long-term, sustainable objectives of the organization. To address these challenges, experienced architects employ a dual-framework approach, incorporating both W.A.R. and P.E.A.C.E. methodologies. At any given moment, an organization is a house divided. On one side, you have the product owners, sales teams, and innovators who are in a state of perpetual W.A.R. (Workarounds, Agility, Reactivity). They are facing the external pressures of a volatile market, where speed is the only currency and being "first" often trumps being "perfect." To them, architecture can feel like a roadblock—a series of bureaucratic "No’s" that stifle the ability to pivot. On the other side, you have the operations, security, and finance teams who crave P.E.A.C.E. (Principles, Efficiency, Alignment, Consistency, Evolution). They see the long-term devastation caused by unchecked "cowboy coding" and fragmented systems. They know that without a foundation of structural integrity, the enterprise will eventually collapse under the weight of its own complexity, turning a fast-moving startup into a sluggish, expensive legacy giant.


Why Identity Will Become the Ultimate Control Point for an Autonomous World in 2026

The law of unintended consequences will dominate organisational cybersecurity in 2026. As enterprises increase their reliance on autonomous AI agents with minimal human oversight, and as machine identities multiply, accountability will blur. The constant tension between efficiency and security will fuel uncontrolled privilege sprawl, forcing organisations to innovate not only in technology, but in governance. ... Attackers will exploit this shift, embedding malicious prompts and compromising automated pipelines to trigger actions that bypass traditional controls. Conventional privileged access management and identity access management will no longer be sufficient. Continuous monitoring, adaptive risk frameworks, and real-time credential revocation will become essential to manage the full lifecycle of AI agents. At the same time, innovation in governance and regulation will be critical to prevent a future defined by “runaway” automation. Two years after NIST released its first AI Risk Management Framework, the framework remains voluntary globally, and adoption has been inconsistent since no jurisdiction mandates it. Unless governance becomes a requirement, not just a guideline, organisations will continue to treat it as a cost rather than a safeguard. Regulatory frameworks that once focused on data privacy will expand to cover AI identity governance and cyber resilience, mandating cross-region redundancy and responsible agent oversight.


The human paradox at the center of modern cyber resilience

The problem for security leaders is that social engineering is still the most effective way to bypass otherwise robust technical controls. The problem is becoming more acute as threat actors increasingly use AI to deliver compelling, personalized, and scalable phishing attacks. While many such incidents never reach public attention, an attempt last year to defraud WPP used AI-generated video and voice cloning to impersonate senior executives in a highly convincing deepfake meeting. Unfortunately, the risks don’t end there. Even with strong technical controls and a workforce alert to social engineering tactics, risk also comes from employees who introduce tools, devices or processes that fall outside formal IT governance. ... What’s needed instead is a shift in both mindset and culture, where employees understand not just what not to do, but why their day-to-day decisions genuinely matter: which tools they trust, how they handle unexpected requests, and when they choose to slow down and double-check something rather than act on instinct. From a leadership perspective, it’s much better to foster a culture in which people feel comfortable reporting suspicious activity without fear of blame, rather than an environment where taking the risk feels like the easier option. ... Instead of acting quickly to avoid delaying work, the employee pauses because the culture has normalized slowing down when something seems unusual. They also know exactly how to report or verify because the processes are familiar and straightforward, with no confusion about who to contact or whether they’ll be blamed for raising a false alarm.


Is cloud backup repatriation right for your organization?

Cost is, without a doubt, one of the major reasons for repatriation. Cloud providers have touted the affordability of the cloud over physical data storage, but getting the most bang for your buck from using the cloud requires due diligence to keep costs down. Even major corporations struggle with this issue. The bigger the environment, the more complex it is to accurately model and cost, particularly with multi-cloud environments. And as we know, cloud is incredibly easy to scale up. Keeping with our data theme, understanding the costing model of data backup and bringing back data from deep storage is extremely expensive when done in bulk. Software must be expertly tuned to use the provider storage tier stack efficiently, or massive costs can be incurred. On-premises, the storage costs are already sunk. The data is also local (assuming local backup with remote replication for offsite backup), so restoring data and services happens more quickly. ... Straight-up backup to the cloud can be cheaper and more effective than on-site backups. It also passes a good portion of the management overhead to the cloud provider, such as hardware support, general maintenance and backup security. As we discussed, however, putting backups in another provider's hands might mean longer response and recovery times. Smaller businesses often have an immature environment and cloud backup can be a boon, but larger businesses might consider repatriation if the infrastructure for on-site is available.
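
The bulk-restore economics are worth sketching with rough numbers. The rates below are placeholder assumptions, not any provider's actual pricing:

```python
# Rough sketch of bulk-restore cost from cloud deep storage.
# Per-GB rates are invented placeholders, not real provider pricing.
restore_tb = 200                      # data to pull back from deep archive
retrieval_per_gb = 0.02               # archive retrieval fee ($/GB), assumed
egress_per_gb = 0.09                  # network egress fee ($/GB), assumed

gb = restore_tb * 1024
cloud_restore_cost = gb * (retrieval_per_gb + egress_per_gb)
print(f"One bulk restore of {restore_tb} TB ≈ ${cloud_restore_cost:,.0f}")
# On-premises, the same restore runs over the LAN: the storage spend is
# already sunk and there is no per-GB retrieval or egress charge.
```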


Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents

AI agents are different. They operate with delegated authority and can act on behalf of multiple users or teams without requiring ongoing human involvement. Once authorized, they are autonomous, persistent, and often act across systems, moving between various systems and data sources to complete tasks end-to-end. In this model, delegated access doesn’t just automate user actions, it expands them. Human users are constrained by the permissions they are explicitly granted, but AI agents are often given broader, more powerful access to operate effectively. As a result, the agent can perform actions that the user themselves was never authorized to take. ... It’s no wonder existing IAM assumptions break down. IAM assumes a clear identity, a defined owner, static roles, and periodic reviews that map to human behavior. AI agents don’t follow those patterns. They don’t fit neatly into user or service account categories, they operate continuously, and their effective access is defined by how they are used, not how they were originally approved. Without rethinking these assumptions, IAM becomes blind to the real risk AI agents introduce. ... When agents operate on behalf of individual users, they can provide the user access and capabilities beyond the user’s approved permissions. A user who cannot directly access certain data or perform specific actions may still trigger an agent that can. The agent becomes a proxy, enabling actions the user could never execute on their own. These actions are technically authorized - the agent has valid access. However, they are contextually unsafe.
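
One way to picture the proxy problem, and a possible guardrail, is to intersect the agent's standing permissions with the invoking user's at request time. The permission names below are hypothetical:

```python
# An agent's standing permissions are broader than any one user's.
AGENT_PERMS = {"read:crm", "read:payroll", "write:tickets"}

def agent_can(action: str, user_perms: set[str], *, scope_to_user: bool) -> bool:
    # scope_to_user=True caps delegated access at the invoking user's rights.
    effective = AGENT_PERMS & user_perms if scope_to_user else AGENT_PERMS
    return action in effective

user_perms = {"read:crm", "write:tickets"}          # user lacks payroll access
print(agent_can("read:payroll", user_perms, scope_to_user=False))  # True: proxy escalation
print(agent_can("read:payroll", user_perms, scope_to_user=True))   # False: delegation capped
```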


The CISO’s Recovery-First Game Plan

CISOs must be on top of their game to protect an organization’s data. Lapses in cybersecurity around the data infrastructure can be devastating. Therefore, securing infrastructure needs to be air-tight. The “game plan” that leads a CISO to success must have the following elements: Immutable snapshots; Logical air-gapping; Fenced forensic environment;  Automated cyber protection; Cyber detection; and Near-instantaneous recovery. These six elements constitute the new wave in protecting data: next-generation data protection. There has already been a shift from modern data protection to this substantially higher level of next-gen data protection. A smart CISO would not knowingly leave their enterprise weaker. This is why adoption of automated cyber protection and cyber detection, built right into enterprise storage infrastructure, is increasing, as part of this move to next-gen data protection. Automated cyber protection and cyber detection are becoming a basic requirement for all enterprises that want to eliminate the impact of cyberattacks. All of this is vital for the rapid recovery of data within an enterprise after a cyberattack. ... But what would be smart for CISOs to do is to make adjustments based on what they currently have protecting their storage infrastructure. For example, even in a mixed storage environment, you can deploy automated cyber protection through software. You don’t need to rip and replace the cybersecurity systems and applications that you already have in place. 


ICE’s expanding use of FRT on minors collides with DHS policy, oversight warnings, law

At the center of the case is DHS’s use of Mobile Fortify, a field-deployed application that scans fingerprints and performs facial recognition, then compares collected data against multiple DHS databases, including CBP’s Traveler Verification Service, Border Patrol systems, and Office of Biometric Identity Management’s Automated Biometric Identification System. The complaint alleges DHS launched Mobile Fortify around June 2025 and has used it in the field more than 100,000 times since launch. Unlike CBP’s traveler entry-exit facial recognition program in which U.S. citizens can decline participation and consenting citizens’ photos are retained only until identity verification, Mobile Fortify is not restricted to ports of entry and is not meaningfully limited as to when, where, or from whom biometrics may be taken. The lawsuit cites a DHS Privacy Threshold Analysis stating that ICE agents may use Mobile Fortify when they “encounter an individual or associates of that individual,” and that agents “do not know an individual’s citizenship at the time of initial encounter” and use Mobile Fortify to determine or verify identity. The same passage, as quoted in the complaint, authorizes collection in identifiable form “regardless of citizenship or immigration status,” acknowledging that a photo captured could be of a U.S. citizen or lawful permanent resident.


From Incident to Insight: How Forensic Recovery Drives Adaptive Cyber Resilience

The biggest flaw is that traditional forensics is almost always reactive, and once complete, it ultimately fails to deliver timely insights that are vital to an organization. For example, analysts often begin gathering logs, memory dumps, and disk images only after a breach has been detected, by which point crucial evidence may be gone. Further compounding matters is the fact that the process is typically fragmented, with separate tools for endpoint detection, SIEM, and memory analysis that make it harder to piece together a coherent narrative. ... Modern forensic approaches capture evidence at the first sign of suspicious activity — preserving memory, process data, file paths, and network activity before attackers can destroy them. The key is storing artifacts securely outside the compromised environment, which ensures their integrity and maintains the chain of custody. The most effective strategies operate on parallel tracks. The first is dedicated to restoring operations and delivering forensic artifacts, while the other begins immediate investigations. By integrating forensic, endpoint, and network evidence collection together, silos and blind spots are replaced with a comprehensive and cohesive picture of the incident. ... When integrated into the incident response process, forensic recovery investigations begin earlier, compliance reporting is backed by verifiable facts, and legal defenses are equipped with the necessary evidence. 


Memgraph founder: Don’t get too loose with your use of MCP

“It is becoming almost universally accepted that without strong curation and contextual grounding, LLMs can misfire, misuse tools, or behave unpredictably. Let me clarify what I mean by ‘tool’ i.e. external capabilities provided to the LLM, ranging from search, calculations and database queries to communication, transaction execution and more, with each exposed as an action or API endpoint through MCP.” ... “But security isn’t actually the main possible MCP stumbling block. Perversely enough, by giving the LLM more capabilities, it might just get confused and end up charging too confidently down a completely wrong path,” said Tomicevic. “This problem mirrors context-window overload: too much information increases error rates. Developers still need to carefully curate the tools their LLMs can access, with best practice being to provide only a minimal, essential set. For more complex tasks, the most effective approach is to break them into smaller subtasks, often leveraging a graph-based strategy.” ... The truth that’s coming out of this discussion might lead us to understand that the best of today’s general-purpose models, like those from OpenAI, are trained to use built-in tools effectively. But even with a focused set of tools, organisations are not entirely out of the woods. Context remains a major challenge. Give an LLM a query tool and it runs queries; but without understanding the schema or what the data represents, it won’t generate accurate or meaningful queries.
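
The curation practice Tomicevic recommends can be as simple as exposing a per-task subset of a larger tool catalog. The tool names and task mapping below are illustrative only:

```python
# Full tool catalog the organisation could expose over MCP (invented names).
TOOL_CATALOG = {
    "search_docs": "full-text search over internal docs",
    "run_query": "read-only database query",
    "send_email": "outbound mail",
    "execute_payment": "move money",
}

# Minimal, essential toolset per task, rather than everything at once.
TASK_TOOLSETS = {
    "answer_question": {"search_docs", "run_query"},   # no side-effecting tools
    "billing_followup": {"run_query", "send_email"},
}

def tools_for(task: str) -> dict[str, str]:
    allowed = TASK_TOOLSETS.get(task, set())
    return {name: desc for name, desc in TOOL_CATALOG.items() if name in allowed}

print(tools_for("answer_question"))  # only the two low-risk tools are exposed
```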


Speaking the Same Language: Decoding the CISO-CFO Disconnect

On the surface, things look good: 88% of security leaders believe their priorities match business goals, and 55% of finance leaders view cybersecurity as a core strategic driver. However, the conviction is shallow. ... For CISOs, the report is a wake-up call regarding their perceived business acumen. While security leaders feel they are working hard to protect the organization, finance remains skeptical of their execution. The translation gap: Only 52% of finance leaders are "very confident" that their security team can communicate business impact clearly. Prioritization doubts: Just 43% of finance leaders feel very confident that security can prioritize investments based on actual risk. Strategy versus operations: Only 40% express full confidence in security's ability to align with business strategy. ... Chief Financial Officers are increasingly taking responsibility for enterprise risk management and cyber insurance, yet they feel they are operating with incomplete data. Efficiency concerns: Only 46% of finance leaders are very confident that security can deliver cost-efficient solutions. Perception of value: CFOs are split, with 38% viewing cybersecurity as a strategic enabler, while another 38% still view it as a cost center. ... "When security is done right, it doesn't slow the business down—it gives leadership the confidence to move faster. And to do that, you have to be able to connect with your CFO and COO through stories. Dashboards full of red, yellow, and green don't help a CFO," said Krista Arndt.

Daily Tech Digest - January 22, 2026


Quote for the day:

"Lost money can be found. Lost time is lost forever. Protect what matters most." -- @ValaAfshar



PTP is the New NTP: How Data Centers Are Achieving Real-Time Precision

Precision Time Protocol (PTP) is an approach that is more complex to implement than NTP but worth the extra effort, enabling a whole new level of timing synchronization accuracy. ... Keeping network time in sync is important on any network. But it’s especially critical in data centers, which are typically home to large numbers of network-connected devices, and where small inconsistencies in network timing could snowball into major network synchronization problems. ... NTP works very well in situations where networks can tolerate timing inconsistencies of up to a few milliseconds (meaning thousandths of a second). But beyond this, NTP-based time syncing is less reliable due to limitations ... Unlike NTP, PTP doesn’t rely solely on a server-client model for syncing time across networked devices. Instead, it uses time servers in conjunction with a method called hardware timestamping on client devices. Hardware timestamping involves specialized hardware components, usually embedded in network interface cards (NICs), to track time. Central time servers still exist under PTP. But rather than having software on servers connect to the time servers, hardware devices optimized for the task do this work. The devices also include built-in clocks, allowing them to record time data faster than they could if they had to forward it to the generic clock on a server.
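
The arithmetic underneath both protocols is the same two-way exchange; PTP's gain comes from taking the client-side timestamps in NIC hardware rather than in software. A small sketch with made-up nanosecond values:

```python
# Two-way time transfer: estimate clock offset and path delay from four
# timestamps. In PTP, t1/t4 are stamped by the time server and t2/t3 by the
# client NIC's hardware clock, which removes software-stack jitter.
t1 = 1_000_000_000   # server sends Sync message
t2 = 1_000_000_450   # client NIC stamps Sync arrival
t3 = 1_000_001_000   # client NIC stamps Delay_Req departure
t4 = 1_000_001_430   # server stamps Delay_Req arrival

offset = ((t2 - t1) - (t4 - t3)) / 2   # client clock error vs. server
delay = ((t2 - t1) + (t4 - t3)) / 2    # estimated one-way path delay
print(f"offset = {offset:.0f} ns, path delay = {delay:.0f} ns")
```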


Why AI adoption requires a dedicated approach to cyber governance

Today, enterprises are facing unprecedented internal pressure to adopt AI tools at speed. Business units are demanding AI solutions to remain competitive, drive efficiency, and innovate faster. But existing cyber governance and third-party risk management processes were never designed to operate at this pace. ... Without modernized cyber governance and AI-ready risk management capabilities, organizations are forced to choose between speed and safety. To truly enable the business, governance frameworks must evolve to match the speed, scale, and dynamism of AI adoption – transforming security from a gatekeeper into a business enabler. ... What’s more, compliance doesn’t guarantee security. DORA, NIS2, and other regulatory frameworks set only minimum requirements and rely on reporting at specific points in time. While these reports are accurate when submitted, they capture only a snapshot of the organization’s security posture, so gaps such as human errors, legacy system weaknesses, or risks from fourth- and Nth-party vendors can still emerge afterward. Beyond that, human weakness is always present, and legacy systems can fail at crucial moments. ... While there’s no magic wand, there are tried-and-tested approaches that resolve and mitigate the risks of AI vendors and solutions. Mapping the flow of data around the organization helps reveal how it’s used and resolve blind spots. Requiring AI tools to include references for their outputs ensures that risk decisions are trustworthy and reliable.


What CIOs get wrong about integration strategy and how to fix it

As Gartner advises, business and IT should be equal partners in the definition of integration strategy, representing a radical departure from the traditional IT delivery and business “project sponsorship” model. This close collaboration and shared accountability result in dramatically higher success rates ... A successful integration strategy starts by aligning with the organization’s business drivers and strategic objectives while identifying the integration capabilities that need to be developed. Clearly defining the goals of technology implementation, establishing governance frameworks and decision-making authority and setting standards and principles to guide integration choices are essential. Success metrics should be tied to business outcomes, and the integration approach should support broader digital transformation initiatives. ... Create cross-functional data stewardship teams with authority to make binding decisions about data standards and quality requirements. Document what data needs to be shared between systems, which applications are the “source of truth.” Define and document any regulatory or performance requirements to guide your technical planning. ... Integrations that succeed in production are designed with clear system-of-record rules, traceable transactions, explicit recovery paths and well-defined operational ownership. Preemptive integration is not about reacting faster — it’s about ensuring failures never reach the business.


CFOs are now getting their own 'vibe coding' moment thanks to Datarails

For the modern CFO, the hardest part of the job often isn't the math—it's the storytelling. After the books are closed and the variances calculated, finance teams spend days, sometimes weeks, manually copy-pasting charts into PowerPoint slides to explain why the numbers moved. ... Datarails’ new agents sit on top of a unified data layer that connects these disparate systems. Because the AI is grounded in the company’s own unified internal data, it avoids the hallucinations common in generic LLMs while offering a level of privacy required for sensitive financial data. "If the CFO wants to leverage AI on the CFO level or the organization data, they need to consolidate the data," explained Datarails CEO and co-founder Didi Gurfinkel in an interview with VentureBeat. By solving that consolidation problem first, Datarails can now offer agents that understand the context of the business. "Now the CFO can use our agents to run analysis, get insights, create reports... because now the data is ready," Gurfinkel said. ... "Very soon, the CFO and the financial team themselves will be able to develop applications," Gurfinkel predicted. "The LLMs become so strong that in one prompt, they can replace full product runs." He described a workflow where a user could simply prompt: "That was my budget and my actual of the past year. Now build me the budget for the next year."


The internet’s oldest trust mechanism is still one of its weakest links

Attackers continue to rely on domain names as an entry point into enterprise systems. A CSC domain security study finds that large organizations leave this part of their attack surface underprotected, even as attacks become more frequent. ... Large companies continue to add baseline protections, though adoption remains uneven. Email authentication shows the most consistent improvement, driven by phishing activity and regulatory pressure. Organizations still leave email domains partially protected, which allows spoofing to persist. Other protections see much slower uptake. ... Consumer-oriented registrars tend to emphasize simplicity and cost. Organizations that rely on them often lack access to protections that limit the impact of account compromise or social engineering. Risk increases as domain portfolios grow and change. ... Brand impersonation through domain spoofing remains widespread. Lookalike domains tied to major brands are often owned by third parties. Some appear inactive while still supporting email activity. Inactive domains with mail records allow attackers to send phishing messages that appear associated with trusted brands. Others are parked with advertising networks or held for later use. A smaller portion hosts malicious content, though dormant domains can be activated quickly. ... Gaps appear in infrastructure-related areas. DNS redundancy and registry lock adoption lag, and many unicorns rely on consumer-grade registrars. These limitations become more pronounced as operations scale.
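
Checking the baseline email protections the study tracks is straightforward to automate. The sketch below, which assumes the third-party dnspython package and uses example.com as a placeholder domain, looks for published SPF and DMARC records:

```python
import dns.resolver  # third-party dnspython package

def txt_records(name: str) -> list[str]:
    # Return TXT records for a name, or an empty list if none exist.
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"  # placeholder; substitute a domain you own
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
print(f"SPF: {'present' if spf else 'MISSING'} | DMARC: {'present' if dmarc else 'MISSING'}")
```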


Misconfigured demo environments are turning into cloud backdoors to the enterprise

Internal testing, product demonstrations, and security training are critical practices in cybersecurity, giving defenders and everyday users the tools and wherewithal to prevent and respond to enterprise threats. However, according to new research from Pentera Labs, when left in default or misconfigured states, these “test” and “demo” environments are yet another entry point for attackers — and the issue even affects leading security companies and Fortune 500 companies that should know better. ... After identifying an exposed instance of Hackazon, a free, intentionally vulnerable test site developed by Deloitte, during a routine cloud security assessment for a client, Yaffe performed a five-step hunt for exposed apps. His team uncovered 1,926 “verified, live, and vulnerable applications,” more than half of which were running on enterprise-owned infrastructure on AWS, Azure, and Google Cloud platforms. They then discovered 109 exposed credential sets, many accessible via a low-priority lab environment, tied to overly privileged identity access management (IAM) roles. These often granted “far more access” than a ‘training’ app should, Yaffe explained, and provided attackers: Administrator-level access to cloud accounts, as well as full access to S3 buckets, GCS, and Azure Blob Storage; The ability to launch and destroy compute resources and read and write to secrets managers; Permissions to interact with container registries where images are stored, shared, and deployed.


Cyber Insights 2026: API Security – Harder to Secure, Impossible to Ignore

“We’re now entering a new API boom. The previous wave was driven by cloud adoption, mobile apps, and microservices. Now, the rise of AI agents is fueling a rapid proliferation of APIs, as these systems generate massive, dynamic, and unpredictable requests across enterprise applications and cloud services,” comments Jacob Ideskog ... The growing use of agentic AI systems and the way they act autonomously, making decisions and triggering workflows, is ballooning the number of APIs in play. “It isn’t just ‘I expose one billing API’,” he continues, “now there are dozens of APIs that feed data to LLMs or AI agents, accept decisions from AI agents, facilitate orchestration between services and micro-apps, and potentially expose ‘agentic’ endpoints ... APIs have been a major attack surface for years – the problem is ongoing. Starting in 2025 and accelerating through 2026 and beyond, the rapid escalation of enterprise agentic AI deployments will multiply the number of APIs and increase the attack surface. That alone suggests that attacks against APIs will grow in 2026. But the attacks themselves will scale and be more effective through adversaries’ use of their own agentic AI. Barr explains: “Agentic AI means that bad actors can automate reconnaissance, probe API endpoints, chain API calls, test business-logic abuse, and execute campaigns at machine scale. Possession of an API endpoint, particularly a self-service, unconstrained one, becomes a lucrative target. And AI can generate payloads, iterate quickly, bypass simple heuristics, and map dependencies between APIs.”


Complex VoidLink Linux Malware Created by AI

An advanced cloud-first malware framework targeting Linux systems was created almost entirely by artificial intelligence (AI), a move that signals significant evolution in the use of the technology to develop advanced malware. VoidLink — composed of various cloud-focused capabilities and modules and designed to maintain long-term persistent access to Linux systems — is the first case of wholly original malware being developed by AI, according to Check Point Research, which discovered and detailed the malware framework last week. While other AI-generated malware exists, it's typically "been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open-source malware tools," ... The malware framework, linked to a suspected, unspecified Chinese actor, includes custom loaders, implants, rootkits, and modular plug-ins. It also automates evasion as much as possible by profiling a Linux environment and intelligently choosing the best strategy for operating without detection. Indeed, as Check Point researchers tracked VoidLink in real time, they watched it transform quickly from what appeared to be a functional development build into a comprehensive, modular framework that became fully operational in a short timeframe. However, while the malware itself was high-functioning out of the gate, VoidLink's creator proved to be somewhat sloppy in their execution.


What’s causing the memory shortage?

Right now, the industry is suffering the worst memory shortage in history, and that’s with three core suppliers: Micron Technology, SK Hynix, and Samsung. TrendForce, a Taipei-based market researcher that specializes in the memory market, recently said it expects average DRAM memory prices to rise between 50% and 55% this quarter compared to the fourth quarter of 2025. Samsung recently issued a similar warning. So what caused this? Two letters: AI. The rush to build AI-oriented data centers has resulted in virtually all of the memory supply being consumed by data centers. AI requires massive amounts of memory to process its gigantic data sets. A traditional server would usually come with 32 GB to 64 GB of memory, while AI servers have 128 GB or more. ... There are other factors at play here, too, of course. The industry is in a transition period between DDR4 and DDR5, as DDR5 comes online and DDR4 fades away. These transitions to a new memory format are never quick or easy, and it usually takes years to make a full shift. There has also been increased demand from both client and server sides. With Microsoft ending support for Windows 10, a whole lot of laptops are being replaced with Windows 11 systems, and new laptops come with DDR5 memory — the same memory used in an AI server. ... “What’s likely to happen, from a market perspective, is we’ll see the market grow less in ’26 than we had anticipated, but ASPs are likely to stay or increase. ...” he said.


OpenAI CFO Comments Signal End of AI Hype Cycle

By focusing on “practical adoption,” OpenAI can close the gap between what AI now makes possible and how people, companies, and countries are using it day to day. “The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes,” the OpenAI CFO noted. “Infrastructure expands what we can deliver,” she continued. “Innovation expands what intelligence can do. Adoption expands who can use it. Revenue funds the next leap. This is how intelligence scales and becomes a foundation for the global economy.” The framing reflects a shift from big-picture AI promise to day-to-day deployment and measurable results. ... There’s also a gap between what AI can do and how people are actually using it in daily life, noted Natasha August, founder of RM11, a content monetization platform for creators in Carrollton, Texas. “AI tools are incredibly powerful, but for many people and businesses, it’s still unclear how to turn that power into something practical like saving time, making money, or improving how they work,” she told TechNewsWorld. In business, the gap lies between AI’s raw analytical capabilities and its ability to drive tangible, repeatable business outcomes, maintained Nithin Mummaneni ... “The winning play is less ‘AI that answers’ and more ‘AI that completes tasks safely and predictably,'” he continued. “Adoption happens when AI becomes part of the workflow, not a separate destination.”

Daily Tech Digest - October 14, 2025


Quote for the day:

"What you get by achieving your goals is not as important as what you become by achieving your goals." -- Zig Ziglar


Know your ops: Why all ops lead back to devops

When you see more terms that include the “ops” suffix, you should understand them as ideas that, as Graham Krizek, CEO of Voltage, puts it, “represent different layers of the same overarching goal. These concepts are not isolated silos but overlapping practices that support automation, collaboration, and scalability.” ... While site reliability engineering (SRE) and infrastructure as code (IaC) don’t have “ops” attached to their names, they can be seen in many ways as offshoots of the devops movement. SRE applies software engineering techniques to operations problems, with an emphasis on service-level objectives and error budgets. IaC shops manage and provision infrastructure using machine-readable definition files and scripts that can be version-controlled, automated, and tested just like application code. IaC underpins devops, gitops, and many specialized ops practices. ... “While it is not necessary for every IT professional to master each one individually, understanding the principles behind them is essential for navigating modern infrastructure,” he says. “The focus should remain on creating reliable systems and delivering value, not simply keeping up with new terminology.” In other words: you don’t need to collect ops like trading cards. You need to understand the fundamentals, specialize where it makes sense, and ignore the rest. Start with devops, add security if your compliance requirements demand it, and adopt cloudops practices if you’re heavily in the cloud. 
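
As a minimal illustration of what a version-controlled infrastructure definition looks like, the sketch below uses Pulumi's Python SDK, one IaC flavor among many (Terraform and CloudFormation express the same idea declaratively). The resource name and tags are placeholders, and the pulumi and pulumi_aws packages plus cloud credentials are assumed.

```python
# Minimal IaC sketch using Pulumi's Python SDK (one option among many).
# The bucket name and tags are illustrative; assumes the pulumi and
# pulumi_aws packages and configured AWS credentials.
import pulumi
import pulumi_aws as aws

# A declarative resource definition: version-controlled, reviewable, testable.
logs_bucket = aws.s3.Bucket(
    "app-logs",
    tags={"team": "platform", "managed-by": "pulumi"},
)

pulumi.export("logs_bucket_name", logs_bucket.id)
```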


Digital Trust as a Strategic Asset: Why CISOs Must Think Like CFOs

CFOs are great at framing problems in terms of money. CISOs must also be able to figure out what risks cost, what not taking action costs, how much revenue loss comes from median dwell time, and how much it will cost to recover. Boards want the truth, not spin. Translate technical metrics into business impact (e.g., how detection/response times and dwell time drive incident scope and recovery costs). Recent threat reports show global median dwell time has fallen to ~10 days, but impact still depends on speed of containment. ... Stop talking about technology. Start describing cybersecurity as keeping your business running, protecting your reputation and building consumer trust – not simply operational disruption, but also how risk scenarios affect P&Ls. ... CISOs need to know how to read trust balance sheets, not simply logs. This entails being able to understand risk economics, insurance models and how to allocate resources strategically. ... We are entering a new era in which CFOs and CISOs are both responsible for keeping the business running: earnings calls that include integrated trust measures; cyber insurance coverage that is in line with active threat modeling; cyber posture reports that meet regulatory standards, like financial audits; and shared leadership on risk and value initiatives at the board level. CISOs who understand trust economics will impact the futures of businesses by making security a part of strategy as well as operations.
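
As a back-of-envelope illustration of translating dwell time and containment speed into money, the sketch below compares two incident scenarios. Every figure is a placeholder to be replaced with the organization's own loss data.

```python
# Back-of-envelope sketch: turning dwell time and containment speed into a
# cost estimate a CFO can react to. All numbers are placeholders.
def incident_cost(dwell_days: float,
                  daily_exposure_cost: float,
                  containment_hours: float,
                  hourly_downtime_cost: float,
                  recovery_fixed_cost: float) -> float:
    exposure = dwell_days * daily_exposure_cost          # data at risk while undetected
    downtime = containment_hours * hourly_downtime_cost  # revenue lost during response
    return exposure + downtime + recovery_fixed_cost

baseline = incident_cost(dwell_days=10, daily_exposure_cost=40_000,
                         containment_hours=48, hourly_downtime_cost=15_000,
                         recovery_fixed_cost=500_000)
improved = incident_cost(dwell_days=3, daily_exposure_cost=40_000,
                         containment_hours=12, hourly_downtime_cost=15_000,
                         recovery_fixed_cost=500_000)
print(f"Baseline estimate: ${baseline:,.0f}")              # $1,620,000
print(f"Faster detection/containment: ${improved:,.0f}")   # $800,000
```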


Five actions for CISOs to manage cloud concentration risks

To effectively mitigate concentration risks, CISOs should start by identifying and documenting both third-party and fourth-party risks, with a focus on the most critical cloud providers. It is important to recognize that some non-cloud products may also have cloud dependencies, such as management consoles or reporting engines. Collaborating closely with strategic procurement and vendor management (SPVM) leaders ensures that each cloud provider has a clearly documented owner who understands their responsibilities. ... CISOs should not rely solely on service level agreements (SLAs) to mitigate financial losses from outages, as SLA payouts are often insufficient. Instead, focus on designing applications to gracefully manage limited failures. In IaaS and PaaS, plan first for short-term failures of individual cloud services rather than the catastrophic failure of a large provider, and build cloud-native resilience patterns into your architecture. In addition, special attention should be given to cloud identity providers due to their position as a large single point of failure. ... To reduce the risk associated with single-vendor dependency, organizations should intentionally distribute applications and workloads across at least two cloud providers. While single-vendor solutions can simplify integration and sourcing, a multi-cloud approach limits the potential impact of an issue affecting any one provider.
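
The sketch below illustrates one such cloud-native resilience pattern: retry with exponential backoff, then failover to a second provider. fetch_primary and fetch_secondary are hypothetical stand-ins for calls to two different clouds.

```python
# Minimal sketch: retry with exponential backoff, then fail over to a second
# provider, instead of relying on SLA payouts. fetch_primary/fetch_secondary
# are hypothetical stand-ins for calls to two different cloud providers.
import random
import time

def with_backoff(call, attempts: int = 3, base_delay: float = 0.5):
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # Exponential backoff with jitter avoids thundering-herd retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

def fetch_report(fetch_primary, fetch_secondary):
    try:
        return with_backoff(fetch_primary)
    except ConnectionError:
        # Graceful degradation: serve from the second provider, possibly stale.
        return with_backoff(fetch_secondary)
```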


Your cyber risk problem isn’t tech — it’s architecture

The development of a risk culture — including appetite, tolerance and profile — within the scope of the management program is essential to provide real visibility into ongoing risks, how they are being perceived and mitigated, and to leverage the organization’s ability to improve its security posture. Consequently, the company begins to deliver reliable products to customers, secure its reputation and build a secure image to achieve a competitive advantage and brand recognition. ... Another important factor to be developed in parallel with raising risk culture is a continuous information security awareness process. This action should include all employees, especially those involved in incident management and cyber resilience. ... From a technical standpoint, it is important to select and implement appropriate controls from the NIST CSF functions: Identify, Protect, Detect, Respond and Recover. However, the selection of each control for building guardrails will depend on the overall cybersecurity big picture and market best practices. For each identified issue, the corresponding control must be determined, each monitored by the three lines of defense ... Finally, the cyber management program must also consider legal, regulatory and regional requirements, including privacy and cybersecurity laws. This covers LGPD, CCPA, GDPR, FFIEC, Central Bank regulations, etc., to understand the consequences of non-compliance, which can pose serious issues for the organization.


Even the best AI agents are thwarted by this protocol - what can be done

An emerging category of artificial intelligence middleware known as Model Context Protocol is meant to make generative AI programs such as chatbots more powerful by letting them connect with various resources, including packaged software such as databases. Multiple studies, however, reveal that even the best AI models struggle to use Model Context Protocol. ... Having a standard does not mean that an AI model, whose functionality includes a heavy dose of chance ("probability" in technical terms), will faithfully implement MCP. An AI model plugged into MCP has to generate output that achieves several things, such as formulating a plan to answer a query by choosing which external resources to access, in what order to contact the MCP servers that lead to those external applications, and then structuring several requests for information to produce a final output to answer the query. ... The immediate takeaway from the various benchmarks is that AI models need to adapt to a new epoch in which using MCP is a challenge. AI models may have to evolve in new directions to fulfill the challenge. All three studies identify a problem: Performance degrades as the AI models have to access more MCP servers. The complexity of multiple resources starts to overwhelm even the models that can best plan what steps to take at the outset. As Wu and team put it in their MCPMark paper, the complexity of all those MCP servers strains any AI model's ability to keep track of it all.
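
For context, the sketch below shows roughly what a minimal MCP server exposing a single tool looks like, assuming the reference mcp Python package; the tool name and lookup logic are invented. The hard part the benchmarks measure sits on the model side: deciding which of many such tools to call, in what order, and with what arguments.

```python
# Minimal sketch of an MCP server exposing one tool, assuming the reference
# `mcp` Python SDK. The tool name and the lookup data are illustrative only.
from mcp.server.fastmcp import FastMCP

server = FastMCP("inventory")

@server.tool()
def lookup_stock(sku: str) -> dict:
    """Return the current stock level for a SKU from a hypothetical database."""
    fake_db = {"ABC-123": 42, "XYZ-999": 0}
    return {"sku": sku, "on_hand": fake_db.get(sku, 0)}

if __name__ == "__main__":
    server.run()  # serves the tool over stdio for an MCP-capable client
```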


Chaos engineering on Google Cloud: Principles, practices, and getting started

A common misconception is that cloud environments automatically provide application resiliency, eliminating the need for testing. Although cloud providers do offer various levels of resiliency and SLAs for their cloud products, these alone do not guarantee that your business applications are protected. If applications are not designed to be fault-tolerant or if they assume constant availability of cloud services, they will fail when a particular cloud service they depend on is not available. ... As a proactive discipline, chaos engineering enables organizations to identify weaknesses in their systems before they lead to significant outages or failures, where a system includes not only the technology components but also the people and processes of an organization. By introducing controlled, real-world disruptions, chaos engineering helps test a system's robustness, recoverability, and fault tolerance. This approach allows teams to uncover potential vulnerabilities, so that systems are better equipped to handle unexpected events and continue functioning smoothly under stress. ... Chaos Toolkit is an open-source framework written in Python that provides a modular architecture where you can plug in other libraries (also known as ‘drivers’) to extend your chaos engineering experiments. ... to enable Google Cloud customers and engineers to introduce chaos testing in their applications, we’ve created a series of Google Cloud-specific chaos engineering recipes. Each recipe covers a specific scenario to introduce chaos in a particular Google Cloud service.
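
Independent of any particular toolkit, a chaos experiment has a recognizable shape: verify steady state, inject a controlled fault, re-verify, and always roll back. The sketch below captures that shape in plain Python; check_health, inject_latency, and remove_latency are hypothetical hooks into your own system.

```python
# Conceptual sketch of a chaos experiment's shape: steady-state check,
# controlled fault injection, re-check, rollback. The three callables are
# hypothetical hooks into your own system, not any toolkit's API.
def run_experiment(check_health, inject_latency, remove_latency) -> bool:
    assert check_health(), "System not in steady state; aborting experiment."
    try:
        inject_latency(target="checkout-service", delay_ms=300)
        # Hypothesis: the system still meets its SLO under the injected fault.
        survived = check_health()
    finally:
        remove_latency(target="checkout-service")  # always roll back the fault
    return survived
```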


The attack surface you can’t see: Securing your autonomous AI and agentic systems

The deep, non-deterministic nature of the underlying Large Language Models (LLMs) and the complex, multi-step reasoning they perform create systems where key decisions are often unexplainable. When an AI agent performs an unauthorized or destructive action, auditing it becomes nearly impossible. ... When you give an AI agent autonomy and tool access, you create a new class of trusted digital insider. If that agent is compromised, the attacker inherits all its permissions. An autonomous agent, which often has persistent access to critical systems, can be compromised and used to move laterally across the network and escalate privileges. The consequences of this over-permissioning are already being felt. ... The sheer speed and scale of agent autonomy demand a shift from traditional perimeter defense to a Zero Trust model specifically engineered for AI. This is no longer an optional security project; it is an organizational mandate for any leader deploying AI agents at scale. ... Securing Agentic AI is not just about extending your traditional security tools. It requires a new governance framework built for autonomy, not just execution. The complexity of these systems demands a new security playbook focused on control and transparency ... The future of enterprise efficiency is agentic, but the future of enterprise security must be built around controlling that agency. 
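
A small piece of that new playbook can be as simple as an explicit allowlist between each agent and the tools it may call, so a compromised agent does not inherit everything. The sketch below is a minimal illustration; the agent identifiers, tool names, and policy table are invented.

```python
# Minimal sketch: a deny-by-default allowlist between agents and tools, with
# an audit trail for every call. Agent IDs, tool names, and the policy table
# are illustrative placeholders.
ALLOWED_TOOLS = {
    "support-agent": {"search_kb", "create_ticket"},
    "reporting-agent": {"read_metrics"},
}

def invoke_tool(agent_id: str, tool_name: str, tools: dict, **kwargs):
    permitted = ALLOWED_TOOLS.get(agent_id, set())
    if tool_name not in permitted:
        # Deny by default and record the attempt for later review.
        raise PermissionError(f"{agent_id} is not authorized to call {tool_name}")
    print(f"AUDIT: {agent_id} -> {tool_name}({kwargs})")
    return tools[tool_name](**kwargs)
```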


Systems that Sustain: Lessons that Nature Never Forgot but We Did

In practice, a major flaw in many technology projects is that existing multi-level approval systems are simply digitalised, leading to only marginal improvements. The process becomes a digital twin of the old: while processing speeds increase, the workflow itself remains long, redundant, and often cumbersome. The introduction of a new digital interface adds to the woes rather than simplifies them. Had processes been genuinely reengineered, digitisation could have saved time by simplifying steps, reducing the training load, improving efficiency, cutting costs, and enabling quicker adaptation in response to change. Another persistent pitfall in public sector digital transformation is misunderstanding the promise of analytics, and more crucially, confusing outputs with outcomes. ... Humans, as players in nature’s game, are unique. Evolution gifted us consciousness, language, memory, and complex social bonds—traits that allowed the creation of technology, law, storytelling, and culture. Yet these very blessings seeded traits antithetical to nature’s raw logic ... Artificial intelligence presents a tantalising prospect. Unlike its human creators, a well-designed AI can, under ideal circumstances, create technologies based on the same bias-free principles that drive nature: redesign for purpose, learn and adapt from data, and commit to real, measurable outcomes. 


California introduces new child safety law aimed at AI chatbots

The law is set to come into effect on Jan. 1, 2026, and requires chatbot operators to implement age verification and warn users of the risks of companion chatbots. The bill implements harsher penalties for anyone profiting from illegal deepfakes, with fines of up to $250,000 per offense. In addition, technology companies must establish protocols that seek to prevent self-harm and suicide. These protocols will have to be shared with the California Department of Health to ensure they’re suitable. Companies will also be required to share statistics on how often their services issue crisis center prevention alerts to their users. Some AI companies have already taken steps to protect children, with OpenAI recently introducing parental controls and content safeguards in ChatGPT, along with a self-harm detection feature. Meanwhile, Character AI has added a disclaimer to its chatbot that reminds users that all chats are generated by AI and fictional. Newsom is no stranger to AI legislation. In September, he signed into law another bill called SB 53, which mandates greater transparency from AI companies. More specifically, it requires AI firms to be fully transparent about the safety protocols they implement, while providing protections for whistleblower employees. The bill means that California is the first U.S. state to require AI chatbots to implement safety protocols, but other states have previously introduced more limited legislation. 


Embedding Security into Enterprise Architecture: A TOGAF-Based Approach to Risk-Aligned Design

Treating security as a separate discipline leads to inefficiencies, redundancies, and vulnerabilities. Bolting on security after systems are designed often results in costly retrofits, fragmented controls, and misaligned priorities. It also creates friction between teams — where security is seen as a blocker rather than a partner. Integrating ESA into EA from the outset changes the dynamic. It ensures that security is considered in every architectural decision — from business processes to data flows, from application design to infrastructure deployment. It aligns security with business goals, reduces risk exposure, and accelerates delivery. ... ISM brings operational rigor to ESA. It defines how security is implemented, monitored, and improved. ISM includes identity and access management, continuity planning, compliance management, and security awareness. When ISM is integrated into EA, security becomes part of the enterprise fabric. It’s not just a set of policies — it’s a way of working. ... This integration is not a technical adjustment — it’s a strategic evolution. It requires collaboration, shared language, and a commitment to embedding security into every architectural decision. When done right, it reduces risk, accelerates delivery, and builds confidence across the enterprise. Security by design is not a luxury — it’s a necessity. And EA Capability is how we make it real.