Showing posts with label AI Factories. Show all posts

Daily Tech Digest - March 22, 2026


Quote for the day:

“Success does not consist in never making mistakes but in never making the same one a second time.” -- George Bernard Shaw


🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 22 mins • Perfect for listening on the go.


Data Readiness as a Product

In "Data Readiness as a Product," Gordon Deudney argues that preparing data for AI agents is not a one-time project but a continuous product capability requiring dedicated ownership, strict SLAs, and rigorous quality gates. He highlights that most AI failures are operational, rooted in "data debt" and a fundamental "semantic gap" where literal-minded agents misinterpret contextually noisy information. A critical distinction is made between static "Knowledge" (best handled via RAG) and dynamic "State" (requiring real-time APIs); confusing the two often leads to costly, inaccurate outputs. Deudney advocates for "Field-Level Truth Cataloging" to resolve systemic ownership conflicts and stresses the importance of codifying specific tie-breaking rules, as agents cannot inherently recognize when they are guessing between conflicting sources. Robust metadata—including provenance, versioning, and time-to-live (TTL) tags—is presented as essential for maintaining an auditable, trustworthy system. Ultimately, the piece asserts that because data quality directly dictates agent behavior, organizations must prioritize resolving their underlying data architecture before deployment. By treating data readiness as a living, evolving product rather than a static foundation, businesses can avoid the "zombie data" and semantic ambiguities that typically derail complex automation efforts.


The inference lattice: One option for how the AI factory model will evolve

The article "The Inference Lattice: One option for how the AI factory model will evolve" explores the necessary architectural shift in data centers as they transition from general-purpose facilities into specialized "AI factories." Currently, the industry relies on a centralized model dominated by massive training clusters; however, the author argues that the future of AI scalability lies in the "Inference Lattice." This concept envisions a distributed, interconnected network of smaller, highly efficient inference nodes that move computation closer to the end-user and data sources. By deconstructing monolithic data center designs into a more fluid and resilient lattice, providers can better manage the extreme power demands and heat densities associated with next-generation GPUs. The piece highlights that while training remains computationally intensive, the vast majority of future AI workloads will be dedicated to inference. To support this, the lattice model offers a way to scale horizontally, reducing latency and improving cost-effectiveness. Ultimately, the article suggests that the evolution of the AI factory will be defined by this move toward decentralized, purpose-built infrastructure that prioritizes the continuous, real-time delivery of "intelligence" over the raw batch processing of the past.


App Modernization in Regulated Industries: Audit Trails, Approvals, and Release Control

Application modernization within regulated sectors like healthcare and finance transcends mere aesthetic updates, prioritizing robust audit trails, orderly approvals, and verifiable release controls. As legacy systems often persist due to familiar manual compliance habits, modernizing these platforms requires a shift from feature-focused development to mapping "regulatory promises." This ensures that record retention, separation of duties, and data access remain provable throughout the transition. Effective modernization replaces fragmented manual processes with integrated digital narratives that capture the "who, what, when, and why" of every action in searchable, tamper-proof logs. Furthermore, the article emphasizes that approval workflows should be risk-stratified—automating low-risk updates while maintaining rigorous sign-offs for high-impact changes—to prevent compliance from becoming a bottleneck. By treating logging and release management as foundational components rather than afterthoughts, organizations can achieve greater agility without compromising safety or regulatory standing. Ultimately, a successful modernization strategy builds a transparent, connected ecosystem where every software version is linked to its specific approvals and intent. This holistic approach allows regulated firms to ship updates confidently, maintain continuous audit readiness, and eliminate the frantic scramble typically associated with formal inspections and technical oversight.


Agentic Architecture Maturity Model (AAMM): How AI Agents Are Redefining Architectural Intelligence

The "Agentic Architecture Maturity Model (AAMM): How AI Agents Are Redefining Architectural Intelligence" article explores a transformative framework designed to modernize enterprise architecture through the integration of autonomous AI agents. The AAMM identifies five levels of maturity, progressing from unmanaged, tribal knowledge to a state of autonomous architecture intelligence where AI systems continuously simulate and optimize the organizational landscape. By moving through stages of formal documentation and structured traceability, enterprises can reach level four, where AI agents actively participate in design reviews and governance, and level five, where they orchestrate complex architectural decisions autonomously. The article highlights critical structural gaps that hinder this evolution, such as documentation drift and the "impact analysis bottleneck," emphasizing that traditional manual governance cannot scale with modern delivery speeds. To bridge these gaps, the author advocates for leveraging emerging technologies like large language models, graph-native enterprise architecture platforms, and architecture-as-code. Ultimately, the AAMM serves as a strategic roadmap for leaders to transition architecture from a passive record-keeping function into a high-leverage, intelligent capability that drives faster transformations, reduces technical debt, and ensures long-term organizational resilience in an increasingly complex digital era.


The Gap Between Buying Security and Actually Having It

The TechSpective article explores the critical discrepancy between investing in cybersecurity tools and achieving genuine protection, often termed the "capability gap." Despite eighty percent of organizations increasing their security budgets for 2026, research from Kroll indicates that a staggering seventy-two percent still face misalignment between security priorities and actual business operations. This disconnect stems from a "know-what-you-have" problem, where organizations purchase high-end technology but fail to configure it according to best practices or account for "security drift" as environments evolve. While executives often favor new technology investments for their optics in board presentations, they frequently deprioritize essential validation activities like red and purple teaming. Consequently, while many firms believe they can respond to incidents within twenty-four hours, actual attacker breakout times are often under thirty minutes. The article highlights that high-maturity organizations—comprising only ten percent of those surveyed—distinguish themselves not by higher spending, but by allocating significant resources toward testing and confirming that their existing controls actually work. Ultimately, the piece warns that without bridging the gap between deployment and validation, especially as AI accelerates emerging threats, the multi-million dollar potential of security tools remains largely unfulfilled and organizations remain vulnerable.


The AI Dilemma: Leadership in the Age of Intelligent Threats

The article "The AI Dilemma: Leadership in the Age of Intelligent Threats" highlights the critical shift of artificial intelligence from an experimental tool to a central executive priority by 2026. While AI offers transformative benefits for cybersecurity, such as automated security operations centers and accelerated threat detection, it simultaneously empowers adversaries through deepfake-enabled fraud, adaptive malware, and automated vulnerability scanning. This "double-edged sword" necessitates a leadership evolution that matches machine speed with governance maturity. Internally, the rise of "vibe coding" and unsanctioned "shadow AI" usage creates significant risks, requiring organizations to implement structured oversight and clear data-sharing practices. To navigate this landscape, leaders must adopt a "human-in-the-loop" model, ensuring that machine pattern recognition is always augmented by human context and ethical judgment. Strategic imperatives include embracing AI for defense responsibly, enhancing continuous monitoring through zero-trust architectures, and updating corporate policies to address AI-specific threats. Ultimately, the article argues that while the future of cybersecurity may resemble an AI-versus-AI contest, organizational success will depend on balancing rapid innovation with disciplined governance. Human oversight remains the foundational element for maintaining security and resilience in an increasingly automated and intelligent threat environment.


Why Agentic AI Demands Intent-Based Chaos Engineering

The DZone article "Why Agentic AI Demands Intent-Based Chaos Engineering" explores the evolution of system resilience in the era of autonomous software. Traditional chaos engineering, which relies on static fault injection like latency or server shutdowns, proves inadequate for AI-driven environments where failures often manifest as subtle quality degradations rather than visible outages. To address this, the author introduces Intent-Based Chaos Engineering, a framework where failure magnitude is derived from environmental risk and business sensitivity. This approach evaluates three critical dimensions: intent parameters (such as SLA thresholds and business criticality), topology data (mapping service dependencies), and a sensitivity index (measuring how components influence inference quality). As AI systems transition toward agentic autonomy—where agents independently trigger remediation, scale infrastructure, and rebalance traffic—the risk of minor disturbances spiraling into systemic instability through automated decision loops increases significantly. By shifting from reactive experimentation to a closed-loop, predictive modeling system, Intent-Based Chaos provides the calibrated stress needed to validate these autonomous agents. Ultimately, this methodology ensures that as AI systems become more complex and independent, their resilience remains grounded in controlled, goal-oriented experimentation, protecting enterprise-scale operations from the unpredictable nature of silent AI degradation.


Cloud at 20: Cost, complexity, and control

As cloud computing reaches its twentieth anniversary, the initial promise of seamless, cost-effective IT has evolved into a sobering landscape of managed complexity. Originally envisioned as a way to reduce overhead through simple pay-as-you-go models, the reality for modern enterprises involves spiraling costs that often eclipse the traditional infrastructure they were meant to replace. This financial strain is compounded by "cloud sprawl," where thousands of workloads across multiple regions create a lack of transparency and unpredictable billing. Beyond economics, the technical promise of outsourcing security and operations has shifted into a new paradigm of operational difficulty. Instead of eliminating IT headaches, the cloud has introduced a "multicloud reality" requiring specialized skills to manage intricate permissions, encryption keys, and interoperability issues across diverse platforms. Consequently, the next era of cloud computing will focus less on the fantasy of total outsourcing and more on rigorous FinOps discipline, continuous security investment, and the strategic orchestration of complex environments. Ultimately, the journey has transformed from a sprint toward simplicity into a marathon of governance, where the goal is no longer to eliminate complexity but to master it through automation and expert oversight.


Digital Banking Experience: A Good Fit for Techfin Firms

The appointment of Nitin Chugh, former digital banking head at State Bank of India, as CEO of Perfios underscores a significant leadership shift within the financial services sector. As digital banking platforms like SBI’s YONO evolve into multifaceted ecosystems encompassing payments, lending, and commerce, the executives behind them are increasingly sought after by TechFin firms. These leaders possess a unique blend of product strategy, platform governance, and regulatory expertise, which is essential for companies providing critical financial infrastructure. TechFin organizations, such as Perfios, are transitioning from being mere tool providers to becoming embedded operational layers for banks and insurers. Their focus areas—including financial data aggregation, credit decisioning, and fraud intelligence—require a deep understanding of how to operationalize technology at scale within strictly regulated environments. Furthermore, the integration of artificial intelligence is revolutionizing these services by enhancing the speed and quality of financial decision-making. This convergence of banking and technology reflects a broader trend where technology leadership is no longer just about execution but about driving digital business growth and ecosystem partnerships. Consequently, the demand for CEOs who can navigate the intersection of traditional finance and enterprise software continues to rise.


AI Governance Moves From Boardrooms To Business Strategy

The Inc42 report, "AI Governance Moves from Boardrooms to Business Strategy," explores a fundamental shift in how Indian enterprises and startups perceive artificial intelligence oversight. Historically treated as a passive compliance matter for boardrooms, AI governance has now transitioned into a pivotal pillar of core business strategy. This evolution is fueled by the realization that trust, transparency, and accountability serve as critical "moats" for companies looking to scale AI beyond initial pilot phases into high-impact, enterprise-wide workflows. The report highlights how robust governance frameworks are being integrated directly into operational roadmaps to mitigate risks such as algorithmic bias and data privacy breaches while simultaneously driving long-term ROI. As India transitions into an AI-first economy, the discourse is moving toward the "monetization depth" of AI, where reliable and explainable models are essential for customer retention and market differentiation. By embedding safety and ethical considerations from the outset, businesses are not only complying with emerging national guidelines but are also positioning themselves as resilient leaders in a globally competitive landscape. Ultimately, the report emphasizes that mature AI governance is no longer a professional development goal but a strategic prerequisite for sustainable growth in the modern corporate ecosystem.

Daily Tech Digest - February 23, 2026


Quote for the day:

"Prepare, work smarter, Learn from your Mistakes. These are the secret to success!" -- Elizabeth McCormick



What’s wrong (and right) with AI coding agents

“At the scale AI is generating pull requests today, humans simply can’t keep up. You don’t check the accuracy of Excel with an abacus… and in 2026 we shouldn’t expect maintainers to manually inspect machine-speed code without machine-speed assistance,” said Fox. “AI reviews can go deeper than humans in many cases. They don’t get tired, they can reason across large codebases… and they can spot patterns at a scale no individual reviewer can hold in their head. If AI is generating more code, the only viable answer is to use AI to help review and validate it. You have to fight fire with fire.” ... He reminds us that quantity does not always equal quality – especially in the AI-driven world we now live in. He notes that, at least for now, the reality is that AI development tools and ‘vibe coding’ can generate a lot of code very quickly, but code that’s often slower and more memory‑hungry than what a skilled developer would write. ... Although this entire discussion is focused on the now-increasingly-automated command line, the real concern arguably sits higher up the stack, at the level of architecture, which has already been mentioned. “We’re entering a world where, with AI, software changes are propagating faster than governance models can track them. That means AI tools are, plain and simple, accelerating systemic complexity. When an AI agent can generate and deploy changes across interconnected enterprise systems, there’s real danger in the invisible dependencies and downstream effects most orgs can’t fully see,” said Ido Gaver.


Identity verification systems are struggling with synthetic fraud

The researchers tied the growth of synthetic identity fraud to the increasing use of AI tools, which can generate convincing fake documents that pass casual inspection. “The biggest risk I see in the next 12 to 18 months is the growing and advancing use of AI. AI is creating fake people, fake voices, and fake documents. Bad actors are using these capabilities to open accounts, take over existing accounts, and impersonate real people in places like bank branches,” Lewis said. ... Financial institutions remain a major target for identity fraud due to access to credit, account funding, and cash movement. A successful fraudster can monetize a single fake or synthetic identity for tens of thousands of dollars before detection, making the sector a frequent target. Online-only retail banks recorded the highest rate of failed identity verification among the financial institution categories in Intellicheck’s dataset. The report also found elevated failure rates across businesses serving underbanked consumers, including check cashing, payday lending, subprime lending, and lease-to-own services. ... AI tools are being used to produce synthetic IDs that are difficult for humans to spot. Lewis said attackers are already using AI and large language models to generate documents that can bypass basic checks. “AI and LLM can create fake ID’s that can easily pass the templating test, old methods don’t work and ID verification service providers can’t rest on their laurels,” Lewis said. 


Neoclouds: Meeting demand for AI acceleration

This surge in demand for AI acceleration has seen a surprising beneficiary. According to Tiger Research, cryptocurrency mining firms, seeking to reduce their exposure to bitcoin’s volatile pricing, are redirecting their graphics processing unit (GPU) farms toward AI acceleration applications. ... Before the emergence of neoclouds a few years ago, if an organisation wanted to work with AI, it had no choice but to go to a hyperscaler like Amazon Web Services (AWS) or Google. While the hyperscalers offer AI infrastructure as part of their vast public cloud services portfolio, Roy Illsley, chief analyst at Omdia, says the hyperscalers tend to be expensive and, as he recalls, a few years ago there was very little choice other than Google’s AI offerings. ... AI infrastructure strategies are becoming inherently hybrid and multicloud by design – not as a by-product of supplier sprawl, but as a deliberate response to workload reality. The cloud market is fragmenting along functional lines, and neoclouds occupy a clear and growing role within that landscape. “Neoclouds started as GPU as a service. If you needed GPUs, these companies bought or leased GPUs from Nvidia, and then they would slice them and sell them off to people in smaller groups and bundles,” says Omdia’s Illsley. However, over time, neocloud providers have added software stacks and developed other services to meet the demand of IT buyers who need GPU power and the software stack required for AI training or AI inferencing.


Sam Altman just said what everyone is thinking about AI layoffs

This isn’t the first time industry stakeholders questioned the veracity of AI-related layoffs. A study by Oxford Economics in January this year claimed most layoffs are due to “more traditional drivers” such as overhiring or poor financial performance. ... "While a rising number of firms are pinning job losses on AI, other more traditional drivers of job layoffs are far more commonly cited,” the report said. “What's more, we suspect some firms are trying to dress up layoffs as a good news story rather than bad news, such as past over-hiring." ... “There’s some real displacement by AI of different kinds of jobs,” he said. “We’ll find new kinds of jobs as we do with every tech revolution. I would expect that the real impact of AI doing jobs in the next few years will begin to be palpable.” Altman’s prediction here aligns with research from Gartner and Forrester on the potential impact of AI on the global jobs market. In January, Forrester predicted 10 million jobs could be lost worldwide as enterprise adoption ramps up. ... Despite a string of studies pointing to the contrary, some tech industry figures still believe that AI will eventually render some workers obsolete. In a recent interview with the Financial Times, for example, Microsoft AI CEO Mustafa Suleyman insisted AI will begin replacing “white collar” workers within 18 months. “I think we’re going to have a human-level performance on most if not all professional tasks,” Suleyman told the Financial Times.


Jailbreaking the matrix: How researchers are bypassing AI guardrails to make them safer

As AI assistants move from novelty to infrastructure, helping write code, summarizing medical notes and answering customer questions, the biggest question isn't just what these systems can do, but what happens when they are pushed to do what they shouldn't. "By showing exactly how these defenses break, we give AI developers the information they need to build defenses that actually hold up," Jha said. "The public release of powerful AI is only sustainable if the safety measures can withstand real scrutiny, and right now, our work shows that there's still a gap. We want to help close it." ... Focusing on the internal workings of the LLM allows more accurate measurements of failures while encouraging the development of more robust defenses against the failure of safety measures. According to the researchers, HMNS can help reveal whether specific internal pathways, if exploited, could cause a breakdown. That information can guide stronger training, monitoring and defense strategies. ... Understanding the security shortcomings of LLMs is critical as they become more widespread. Companies like Meta, Alibaba and others have released powerful AI models that are available to anyone. While each platform incorporates safety layers meant to keep it from being misused, the UF team has found that those safety layers can be systematically bypassed.


Plan vs. planning: Why continuous planning must traverse time

The problem is not the plan’s quality. The problem is that a plan freezes a moment in time while the organization continues to move through time. Planning, by contrast, must be a continuous discipline, remaining active as assumptions decay, signals emerge and constraints shift. ... Planning exists to test those assumptions continuously, a distinction long recognized in leadership and management literature that separates planning as an ongoing discipline from planning as a static artifact. Plans are optimized for agreement and commitment. Planning is optimized for learning, decision-making and managing consequences in the face of uncertainty. In practice, this means consequences must be visible at the moment of decision, not discovered months later through execution. ... Many enterprises optimize for compliance, predictability and approval at the expense of feedback and adaptation. Learning is pushed downstream, arriving only after outcomes are locked in and costs incurred. Systems theorist Russell Ackoff described this dynamic clearly: “Most organizations are not short of information. They are short of the ability to learn from it.” Continuous planning restores learning by design, not as postmortem analysis, but as pre-decision feedback. Feedback that arrives before commitment changes behavior. Feedback that arrives after execution becomes an explanation. In volatile environments, that timing difference is decisive, which is why scenario planning and structured foresight have re-emerged as critical executive tools.


The rise of AI factories: Powering an era of pervasive intelligence

In India alone, Google is building a gigawatt-scale AI hub in Visakhapatnam. Microsoft is expanding its cloud and AI footprint in Pune and Chennai and creating a new “India South Central” region in Hyderabad. In partnership with NVIDIA, Reliance Jio is developing a major AI data center in Jamnagar for nationwide GPU-as-a-service offerings. TCS is planning a 1-gigawatt AI data center, likely in Gujarat or Maharashtra, to support startups, hyperscalers, and government institutions. And as part of its Stargate project, OpenAI is actively scouting locations in India for what could become one of the largest AI data centers in all of Asia. ... The growth of AI represents a fundamental transformation in how the world builds and operates computing infrastructure. While traditional data centers are designed for general-purpose workloads, AI superclusters are purpose-built facilities that function as industrial-scale intelligence production systems. And their output is defined by new metrics — most notably tokens per watt and tokens per dollar — that quantify the efficiency and productivity of intelligence at scale. ... To deliver the performance at scale that AI requires, silicon designers are increasingly turning to multi-die designs, including 3D integrated circuits (3DIC) and chiplet-based architectures. While these chip designs offer gains that traditional monolithic SoCs cannot achieve cost-effectively, they also introduce significant complexity to the design process.


Cognizant CAIO Babak Hodjat explains how Agentic AI will transform enterprises

One of the things that agentic systems do is they allow for a diversity of data sources because you can actually have an agent responsible for a data source talking to other agents responsible for other data sources. Your interface into this system could be a consolidation of information and decisions that come from these disparate sources. It is the first time that we can actually have a mapping between intent and disparate sources of data and applications. I think that will work well. That kind of design can work well in a country like India with such diversity of data. ... Population-based approaches like genetic algorithms are very good at non-linear optimisation, especially if you are looking at multiple outcomes at the same time. Pretty much every problem that we look at is multi-objective. Every problem that we look at has improved revenue but reduced costs. You look at curing disease but reduce impact on the economy. It is always more than one outcome that we are looking at. In problems like optimisation of power grids or managing urban traffic systems, these are very well-suited algorithms. ... There are two opposing forces when it comes to AI. Scaling laws mean that building bigger is more powerful, and building bigger typically means using more energy. Many companies are looking at green sources for that additional consumption. On the other hand, companies are optimising models to be smaller and less energy-hungry. For multi-agent systems, smaller models can be more cost-effective and greener.


Inference Becomes the Next AI Chip Battleground

Inference has fundamentally different economics and performance requirements than training, said Karl Freund, founder and principal analyst at Cambrian AI Research. Training AI models is a cost center, while inference is a “profit center” that directly generates revenue. Freund and Kimball noted that while GPUs deliver excellent performance, they often carry architectural features optimized for training that don’t always translate to lower latency or higher efficiency in pure inference use cases. Purpose-built inference chips – ASICs and other accelerators – can deliver faster responses, improved energy efficiency, and lower total cost of ownership. ... "As inference workloads exceed the total amount of training workloads in terms of token output, there will be a greater need for diversity because alternative XPU architectures can achieve better efficiency on some specific inferencing tasks,” said Brendan Burke, research director of semiconductors, supply chain, and emerging tech at Futurum Group. ... Inference opportunities span data centers and the edge, and requirements vary widely by workload and deployment. “The inference you do in your autonomous vehicle is far different than the inferencing you do when you’re an online customer service bot,” Kimball said. ... Analysts expect Nvidia to maintain dominance in both training and inference, but diverse requirements create space for specialized solutions to capture share. 


Why the CFO's Playbook Belongs on Every CIO's Desk

Recent research from Gartner on how CFOs are allocating budgets gives CIOs insight into what priorities look like across departments, and where technology and AI can help move the needle. The research firm's CFO Report: Q1 2026 finds that while budgets are shifting and AI ambitions are high, enterprise-wide AI success remains an aspiration rather than a reality. ... AI is also changing the conversation on ROI for both finance and technology leaders. "There's a lot more to evaluating the success of some of this investment in technology than simply just ROI, and AI is definitely helping change that," Abbasi said. "AI isn't your traditional asset." Unlike standard hardware expenditures, AI investments don't have predictable depreciation curves, and the ways in which returns on AI investment may show up across the business can vary. They may manifest in time to market, customer satisfaction or competitive positioning, not just in cost savings, Abbasi said. CIOs should be sure to articulate how AI will generate strategic returns rather than focus on pitching it as a capital project. "It changes the way you measure the effectiveness of AI, as well as how you measure your business more holistically," he said. "It's not like a traditional asset because you don't necessarily know what the outcomes are going to be for some of these AI projects."

Daily Tech Digest - February 26, 2025


Quote for the day:

“Happiness is a butterfly, which when pursued, is always beyond your grasp, but which, if you will sit down quietly, may alight upon you.” -- Nathaniel Hawthorne


Deep dive into Agentic AI stack

The Tool / Retrieval Layer forms the backbone of an intelligent agent’s ability to gather, process, and apply knowledge. It enables the agent to retrieve relevant information from diverse data sources, ensuring it has the necessary context to make informed decisions and execute tasks effectively. By integrating various databases, APIs, and knowledge structures, this layer acts as a bridge between raw data and actionable intelligence, equipping the agent with a robust understanding of its environment. ... The Action / Orchestration Layer is a critical component in an intelligent agent’s architecture, responsible for transforming insights and understanding into concrete, executable actions. It serves as the bridge between perception and execution, ensuring that workflows are effectively managed, tasks are executed efficiently, and system interactions remain seamless. This layer must handle the complexity of decision-making, automation, and resource coordination while maintaining adaptability to dynamic conditions. ... The Reasoning Layer is where the agent’s cognitive processes take place, enabling it to analyse data, understand context, draw inferences, and make informed decisions. This layer bridges raw data retrieval and actionable execution by leveraging advanced AI models and structured reasoning techniques. 


AI Hijacked: New Jailbreak Exploits Chain-of-Thought

Several current AI models use chain-of-thought reasoning, an AI technique that helps large language models solve problems by breaking them down into a series of logical steps. The process aims to improve performance and safety by enabling the AI to verify its outputs. But "reasoning" also exposes a new attack surface, allowing adversaries to manipulate the AI's safety mechanisms. A research team comprising experts from Duke University, Accenture and Taiwan's National Tsing Hua University, found a vulnerability in how the models processed and displayed their reasoning. They developed a dataset called Malicious-Educator to test the vulnerability, designing prompts that tricked the models into overriding their built-in safety checks. These adversarial prompts exploited the AI's intermediate reasoning process, which is often displayed in user interfaces. ... The researchers acknowledged that they could be facilitating further jailbreaking attacks by publishing the Malicious-Educator dataset but argued that studying these vulnerabilities openly is necessary to develop stronger AI safety measures. A key distinction in this research is its focus on cloud-based models. AI models running in the cloud often include hidden safety filters that block harmful input prompts and moderate output in real-time. Local models lack these automatic safeguards unless users implement them manually. 


What CISOs need from the board: Mutual respect on expectations

The CISO requires specific and sustained support from the board to effectively protect the organization from cyber threats. A strong partnership between the CISO and board is essential for establishing and maintaining robust cybersecurity practices. My favourite saying is one that CISO Robert Veres relayed to me: The board should support the “Red” and challenge the “Green.” This support is exactly what the CISO requires as a foundation. The board must help set the overall strategic direction that aligns with the organization’s risk appetite. This high-level guidance provides the framework within which the CISO can develop and implement security programs. While the CISO establishes the cyber risk culture, they need the board to reinforce this by setting the appropriate tone from the top and ensuring cybersecurity compliance is prioritized across all levels of management and business units. This is a difficult task for some boards, as they may lack a solid understanding of the business and of how the technology strategy integrates with it. A critical requirement is for the CISO to have a strong mandate to operate with clear accountability. They need the authority to act and defend the enterprise without excessive interference, allowing them to respond quickly and effectively to emerging threats.


AI-Powered Ransomware Attacks

Integrating artificial intelligence (AI) into cyberattacks is fundamentally changing the threat landscape, creating challenges for individuals and organizations alike. Historically, cyber threats have been largely manual, relying on the creativity and adaptability of the attacker. The nature of these threats has evolved as AI has become more automated, scalable, and practical. AI-based attacks can analyze vast amounts of data to identify vulnerabilities and launch highly targeted phishing campaigns that spread the latest malware with minimal human intervention. The speed and precision of AI-powered attacks mean that threats can emerge more suddenly than ever before. For instance, AI can automate the reconnaissance and surveillance stages, mapping targets quickly and accurately. This rapid vulnerability identification allows attackers to exploit weaknesses before they are patched, giving organizations less time to respond. Additionally, AI can create customized malware that constantly evolves to evade detection by traditional security frameworks, making it more difficult to defend against.


AI Factories: Separating Hype From Reality

While the concept is compelling, will we see this wave of AI factories that Jensen Huang is promising? Probably not at scale. AI hardware is not only costly to acquire and operate, but it also doesn’t run continuously like a database server. Once a model is trained, it may not need updates for months, leaving this expensive infrastructure sitting idle. For that reason, Alan Howard, senior analyst at Omdia specializing in infrastructure and data centers, believes most AI hardware deployments will occur in multipurpose data centers. ... AI tech advances rapidly, and keeping up with the competition is prohibitively expensive, Palaniappan added. “When you start looking at how much each of these GPUs cost, and it gets outdated pretty quickly, that becomes a bottleneck,” he said. “If you are trying to leverage a data center, you’re always looking for the latest chip in the facility, so many of these data centers are losing money because of these efforts.” ... In addition to the cost of the GPUs, significant investment is required for networking hardware, as all the GPUs need to communicate with each other efficiently. Tom Traugott, senior vice president of strategy at EdgeCore Digital Infrastructure, explains that in a typical eight-GPU Nvidia DGX system, the GPUs communicate via NVLink.


Overcoming Challenges of IT Integration in Cross-Border M&As

When companies agree to combine, things get complicated, particularly when blending their IT and digital operations. To that end, organizations must carefully outline how they plan to merge their IT departments to overcome associated challenges and avoid expensive disruptions. ... IT is the cornerstone of most multinational corporations. Determining how each merger participant will mesh its systems with the other is significant, particularly because 47% of M&A deals fail because of IT problems. IT due diligence is paramount. Not only does the process help identify priorities and risks beforehand, but it also lets the acquiring company properly evaluate the technical capabilities of the firm it intends to purchase. ... Cross-border M&As are subject to data privacy and compliance regulations that vary significantly across jurisdictions. When assessing an international merger, ensure there aren't any non-compliance risks and that the firm being acquired operates legitimately. Be aware of complex international data and privacy laws. Address any irregularities with a strong compliance strategy and retain expert legal counsel before signing the deal. ... In fact, cultural mismatch is one of the top reasons why M&As fail. 


10 machine learning mistakes and how to avoid them

Addressing biases is crucial to success in the modern AI landscape, Swita says. “Best practices include implementing continuous surveillance, alerting mechanisms, and content filtering to help proactively identify and rectify biased content,” he says. “Through these methodologies, organizations can develop AI frameworks that prioritize validated content.” To resolve bias, organizations need to embrace a dynamic approach that includes continually refining systems to keep pace with rapidly evolving models, Swita says. “Strategies need to be meticulously tailored for combating bias,” he says. ... Machine learning comes with certain legal and ethical risks. Legal risks include discrimination due to model bias, data privacy violations, security leaks, and intellectual property violations. These and other risks can have repercussions for developers and users of machine learning systems. Ethical risks include the potential for harm or exploitation, misuse of data, lack of transparency, and lack of accountability. Decisions based on machine learning algorithms can negatively affect individuals, even if that was not the intent. Swita reiterates the need to anchor models and output on trusted, validated, and regulated data. “By adhering to regulations and standards governing data usage and privacy, organizations can reduce the legal and ethical risks associated with machine learning,” he says.
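One concrete way to implement the "continuous surveillance and alerting" the article recommends is to track a simple fairness metric on live predictions and alert when it drifts. The sketch below uses demographic parity difference; that metric choice, the threshold, and the group labels are my illustrative assumptions, not the article's prescription.

```python
# Sketch of one simple bias check for continuous model monitoring:
# demographic parity difference across groups. Values near 0 suggest
# similar positive-prediction rates; a large gap should trigger an alert.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_diff(preds_by_group):
    """Max gap in positive-prediction rate across groups (0.0 = parity)."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

def bias_alert(preds_by_group, threshold=0.1):
    """Return (alert_flag, gap); the 0.1 threshold is an arbitrary example."""
    gap = demographic_parity_diff(preds_by_group)
    return gap > threshold, gap

# Example: group A is approved 80% of the time, group B only 40%.
flagged, gap = bias_alert({"A": [1, 1, 1, 1, 0], "B": [1, 1, 0, 0, 0]})
# gap is about 0.4 (0.8 - 0.4), so flagged is True
```

In production this check would run on a schedule over a sliding window of predictions, with the alert feeding the same monitoring pipeline as latency and accuracy metrics; libraries such as Fairlearn provide hardened versions of these metrics.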


Beyond the Buzz: What 2025's Tech Trends Mean for CIOs

Stemming from the large-scale deployment of AI is the issue of governance. Organizations need to use AI securely, responsibly and with accountability. A DLA Piper survey showed that 96% of firms using AI find governing AI systems a challenge. Some companies are already at the forefront of providing AI governance solutions. For instance, IBM Watsonx provides AI life cycle governance, risk management and regulatory compliance. Cisco AI Defense offers AI visibility, automated vulnerability scanning and real-time protections for AI assets. ... The rise of deepfakes and countless AI-generated misinformation campaigns have made disinformation security a crucial, non-negotiable imperative for enterprises. Although AI-based detection systems and blockchain-backed verification systems are evolving, they still lag behind the sophistication of adversarial tactics, pushing organizations toward adopting robust detection mechanisms and resilience strategies. ... Application of ambient intelligence in healthcare monitoring and for improving customer experience is already in the works. For instance, in early 2024, Texas-based Houston Methodist forged a partnership with Apella, a startup that uses ambient sensor technology and AI to improve surgical processes in operating rooms. 


AI, automation spur efforts to upskill network pros

By developing skills in network monitoring, performance management, and cost optimization through automation and AI-powered tools, networking pros can become more adept at troubleshooting while offloading repetitive tasks such as copy-pasting configurations. Over time, they can gain the skills to better understand which behaviors and patterns to automate. According to Skillsoft’s Stanger, networking professionals can be challenged in finding the appropriate tasks and workflows to automate. ... “The continuous growth in cloud technologies ensures that cloud computing skills remain in high demand. This includes a thorough understanding of cloud infrastructure and services, which is becoming crucial,” Randstad’s Heins says. “Particularly challenging for companies to find are skills related to cloud service management, especially when combined with AI competencies.” Designing the appropriate network infrastructure, especially for cloud-first and hybrid environments, will be critical for networking pros looking to support sophisticated cloud environments. According to Greg Fuller, chief evangelist and vice president of Skillsoft/Codecademy, cloud computing, in some cases, can lead to complacency in networking as it allows more flexibility to spin up networks quickly.
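The "copy-pasting configurations" chore mentioned above is a natural first automation target: render per-device configs from a single template instead of hand-editing each one. The sketch below uses only the Python standard library; the hostnames, IPs, and CLI syntax are illustrative, and real deployments would typically use a framework such as Ansible or Nornir to push the rendered output.

```python
# Sketch of automating repetitive config edits: render one interface-config
# per device from a shared template. Device names, IPs, and the CLI syntax
# are illustrative only.

from string import Template

CONFIG_TEMPLATE = Template(
    "hostname $hostname\n"
    "interface $interface\n"
    " ip address $ip 255.255.255.0\n"
    " no shutdown\n"
)

DEVICES = [
    {"hostname": "edge-sw-01", "interface": "Gi0/1", "ip": "10.0.1.1"},
    {"hostname": "edge-sw-02", "interface": "Gi0/1", "ip": "10.0.2.1"},
]

def render_configs(devices):
    """Return {hostname: rendered_config} for every device in the inventory."""
    return {d["hostname"]: CONFIG_TEMPLATE.substitute(d) for d in devices}

configs = render_configs(DEVICES)
# configs["edge-sw-01"] begins with "hostname edge-sw-01"
```

Even this small step changes the failure mode: a typo lives in one template rather than being copy-pasted into dozens of devices, which is exactly the kind of pattern the article suggests networking pros learn to recognize and automate.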


The future of data security and governance: Why organizations must rethink their strategy

AI is transforming industries, but it’s also introducing new risks. Businesses are racing to integrate AI-powered applications, often without fully understanding the implications for data security and compliance. AI models require vast amounts of data, much of it sensitive, and without proper governance, they can become a significant liability. ... Regulatory bodies worldwide are tightening their grip on data privacy and security. From GDPR and CCPA to emerging AI regulations, the compliance landscape is becoming increasingly complex. Businesses can no longer afford to treat compliance as an afterthought — it must be embedded into every aspect of data management. ... Enterprises now manage data across multiple cloud environments, SaaS applications and third-party vendors. The result? A complex web of data assets, many of which are unprotected and difficult to track. Security teams struggle with: Lack of visibility into where sensitive data resides; Data access governance challenges; Increased vulnerability to cyber threats and insider risks ... The time to act is now. Security, AI, risk and governance leaders must take a proactive approach to data security and governance, ensuring that their organizations are not just reacting to threats but staying ahead of them.