
Daily Tech Digest - December 26, 2025


Quote for the day:

“Rarely have I seen a situation where doing less than the other guy is a good strategy.” -- Jimmy Spithill



Is Your Enterprise Architecture Ready for AI?

The old model of building, deploying, and governing apps is being reshaped into a composable enterprise blueprint. By abstracting complexity through visual models and machine intelligence, businesses are creating systems that are faster to adapt yet demand stronger governance, interoperability, and security. What emerges is not just acceleration but transformation at the foundation. ... With AI copilots spitting out code at scale, the traditional software development life cycle faces an existential test. Developers may not fully understand every line of AI-generated code, making manual reviews insufficient. The solution: automate aggressively. ... This new era also demands AI observability in SDLC, tracking provenance, explainability, and liability. Provenance shows the chain of prompts and responses. Explainability clarifies decisions. Bias and drift monitoring ensure AI systems don’t quietly shift into harmful or unreliable patterns. Without these, enterprises risk blind trust in black-box code. ... The destination for enterprises is clear: AI-native enterprise architecture and composable enterprise blueprint strategies, where every capability is exposed as an API and orchestrated by LCNC and AI. The road, however, is slowed by legacy monoliths in industries like banking and healthcare. These systems won’t vanish overnight. Instead, strategies like wrapping monoliths with APIs and gradually replacing components will define the journey. 
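The strangler-style approach sketched above, wrapping legacy monoliths with APIs and gradually replacing components, can be illustrated with a minimal routing facade. The class, capability names, and handlers below are hypothetical, invented purely for illustration:

```python
class MonolithFacade:
    """Route each capability either to a modern replacement or back to
    the legacy monolith, so components can be peeled out gradually."""

    def __init__(self, legacy_handler):
        self.legacy_handler = legacy_handler
        self.replacements = {}  # capability name -> modern handler

    def replace(self, capability, handler):
        """Swap one capability out of the monolith behind the same API."""
        self.replacements[capability] = handler

    def handle(self, capability, request):
        handler = self.replacements.get(capability, self.legacy_handler)
        return handler(request)


# Illustrative wiring: "payments" has been modernized, everything else
# still falls through to the monolith.
facade = MonolithFacade(legacy_handler=lambda req: f"legacy:{req}")
facade.replace("payments", lambda req: f"modern:{req}")
```

Callers see one stable API throughout; only the routing table changes as migration proceeds.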


After LLMs and agents, the next AI frontier: video language models

World models — which some refer to as video language models — are the new frontier in AI, following in the footsteps of the iconic ChatGPT and more recently, AI agents. Current AI tech largely affects digital outcomes, but world models will allow AI to improve physical outcomes. World models are designed to help robots understand the physical world around them, allowing them to track, identify and memorize objects. On top of that, just like humans planning their future, world models allow robots to determine what comes next — and plan their actions accordingly. ... Beyond robotics, world models simulate real-world scenarios. They could be used to improve safety features for autonomous cars or simulate a factory floor to train employees. World models pair human experiences with AI in the real world, said Deepak Seth, director analyst at Gartner. “This human experience and what we see around us, what’s going on around us, is part of that world model, which language models are currently lacking,” Seth said. ... World models are one of several tools that will be used to deploy robots in the real world, and they will continue to improve, said Kenny Siebert, AI research engineer at Standard Bots. But the models suffer from similar problems — the hallucinations and degradation — that affect the likes of ChatGPT and video-generators. Moving hallucinations into the physical world could cause harm, so researchers are trying to solve those kinds of issues.


Hub & Spoke: The Operating System for AI-Enabled Enterprise Architecture

Today most enterprises still run on heroics, emails, slide decks, and 200-person conference calls. Even when a good repository and healthy collaboration culture exist, nothing “sticks” without a mechanism that relentlessly harvests reality, unifies understanding, and broadcasts the right truth to the right person at the right moment. That mechanism is a new application of hub-and-spoke – not just for data integration, but for architecture governance itself. We call it simply Hub & Spoke. ... At the centre runs a continuous cycle of three actions: Harvest – Ingest everything that matters: scanner output, CI/CD metadata, application inventories, risk registers, process models, meeting outcomes, human feedback, and (increasingly) agentic AI crawls; Unify – Connect the dots. Establish relationships, resolve duplicates, detect patterns and anti-patterns, and maintain one coherent model of the enterprise; and Broadcast – Push the right view, in the right language, through the right channel, at the right time. A CIO sees strategic heatmaps; a developer receives contextual architecture guardrails inside the IDE; a regulator gets a compliance report on demand. ... To fully leverage the H.U.B. actions, we apply them to five fundamental capabilities that drive any organisation, encapsulated in S.P.O.K.E.: Stakeholders – who cares and who decides; Processes – sequences that deliver value; Outcome – the why (always placed in the centre of the model); Knowledge – codified artefacts (models, policies, decisions, blueprints); and Enterprise Assets – systems, data, infrastructure, contracts 
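A minimal sketch of the Harvest/Unify/Broadcast cycle described above, assuming illustrative source and audience callables; none of these names or data come from the article:

```python
from dataclasses import dataclass, field


@dataclass
class EnterpriseModel:
    """One coherent model of the enterprise, maintained by the Unify step."""
    facts: dict = field(default_factory=dict)


def harvest(sources):
    """Ingest everything that matters: each source yields (key, value) records."""
    records = []
    for source in sources:
        records.extend(source())
    return records


def unify(model, records):
    """Connect the dots; here, a simple last-write-wins merge stands in for
    duplicate resolution and pattern detection."""
    for key, value in records:
        model.facts[key] = value
    return model


def broadcast(model, audiences):
    """Push the right view to the right role: a CIO heatmap, dev guardrails."""
    return {role: view(model) for role, view in audiences.items()}


# One illustrative cycle with two fake sources and two fake audiences.
sources = [lambda: [("app-count", 214)], lambda: [("open-risks", 3)]]
audiences = {
    "cio": lambda m: f"{m.facts['open-risks']} open risks",
    "developer": lambda m: f"{m.facts['app-count']} apps in inventory",
}
model = unify(EnterpriseModel(), harvest(sources))
views = broadcast(model, audiences)
```

The real mechanism would run this loop continuously against scanners, CI/CD metadata, and agentic crawls; the shape of the cycle is the point here.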


Orchestrating value: The new discipline of continuous digital transformation

The most important principle for any CIO today is deceptively simple: every transformation must begin with value and be engineered for agility. In a volatile and fast-moving environment, success depends not on how much technology you deploy, but on how effectively you align it to outcomes that matter. Every initiative should begin with clarity of purpose. What is the value hypothesis? What problem are we solving? Who owns the outcome, and when will impact be visible? ... Architecture then becomes the critical enabler. Agility must be built into the design, through modular platforms, adaptable processes, and feedback-driven operating models that allow business change, talent movement, and technological evolution to coexist seamlessly. Measurement turns agility from theory into discipline. Continuous value reviews, architectural checkpoints, and strategy resets ensure transformation remains evidence-led rather than aspirational. Every initiative must answer three questions: Why value? Why now? Why this architecture? In a world defined by velocity and volatility, transformation isn’t about doing more – it’s about doing what matters, faster, smarter, and with enduring value. ... Today’s CIOs also demand composable, interoperable platforms that integrate seamlessly into existing ecosystems, avoiding vendor lock-in while accelerating scale through APIs, microservices, and modular architectures. Partners must bring both agility and discipline – speed balanced with governance.


Why Integration Debt Threatens Enterprise AI and Modernization

AI agents rely on fast, trusted data exchanges across applications. However, point-to-point connectors often break under new query loads. Matt McLarty of MuleSoft states that integration challenges slow digital transformation. Integration Debt surfaces here as latent System Friction that derails AI pilots. Furthermore, developers spend 39% of their time writing custom glue code. Consequently, innovation budgets shrink while maintenance backlogs grow. Such opportunity cost defines Integration Debt in real dollars and morale. Disconnected integrations throttle AI benefits and drain talent. In contrast, scale introduces additional complexity exposed next. ... Effective governance establishes shared schemas, versioning, and certification for every API. Nevertheless, shadow IT and citizen developers complicate enforcement. Therefore, leading CIOs create integration review boards with quarterly scorecards. Accenture and Deloitte embed such controls in Modernization playbooks to prevent relapse. Additionally, companies publish portal dashboards that display live Integration Debt metrics to executives. ... The evidence is clear: disconnected architectures tax innovation, security, and profits. Ramsey Theory Group reminds leaders that random complexity often concentrates risk in surprising places. Similarly, unchecked System Friction erodes developer morale and board confidence. However, organizations that quantify debt, enforce governance, and adopt reusable APIs accelerate Modernization success. 


The Widening AI Value Gap: Strategic Imperatives for Business Leaders

AI value creation in business settings extends far beyond narrow efficiency gains or cost reductions. Contemporary frameworks increasingly distinguish between three fundamental pathways through which AI generates economic returns: deploying efficiency-enhancing tools, reshaping existing workflows, and inventing entirely new business models ... Reshaping represents a more ambitious approach, targeting core business workflows for end-to-end transformation. Rather than automating existing steps in isolation, reshaping asks: How would we design this workflow from scratch if AI capabilities were available from the outset? This might involve redesigning marketing campaign development to leverage AI-driven personalization at scale, restructuring supply chain management around predictive demand algorithms, or reimagining customer service through intelligent agent orchestration. ... Value measurement frameworks must capture both tangible and strategic dimensions. Tangible metrics include revenue increases (projected at 14.2% for future-built companies in areas where AI applies by 2028), cost reductions (9.6% for leaders), and measurable improvements in key performance indicators such as time-to-hire, customer satisfaction scores, and defect rates ... The strategic implications extend beyond near-term financial performance. Organizations trailing in AI maturity face deteriorating competitive positions as digital-native competitors and AI-advanced incumbents reshape industry economics.


4 mandates for CIOs to bridge the AI trust gap

As a CIO, you must recognize that low trust in public AI eventually seeps into the enterprise. If your customers or employees see AI being used unethically in media scenarios through misinformation and bias, or in personal scenarios like cybercrime, their skepticism will bleed into your enterprise-grade CRM or HR systems. The recommendation is to build on the existing trust in the workplace. Use the enterprise as a model for responsible deployment. Document and communicate your AI internal usage policies with exceptional clarity, and allow this transparency to be your market differentiator. Show your customers and partners the standards you hold your internal AI to, and then extrapolate those standards to your external products. ... For CIOs in highly regulated industries such as finance and healthcare, the mandate is to not just maintain but elevate the current level of rigor. The existing regulatory compliance is the baseline, not the ceiling, and the market will punish the first major breach or bias incident, undoing years of consumer confidence. ... We must stop telling end users AI is trustworthy and start showing them through tangible experience. Trust is a feature that must be designed from the start, not something patched in later. The first step is to involve the customer. Implement co-design programs where the end-users and customers, not just product managers, are involved in the design and testing phases of new AI applications. 


The Enterprise “Anti-Cloud” Thesis: Repatriation of AI Workloads to On-Premises Infrastructure

Today, a new inflection point has arrived: the dawn of artificial intelligence and large-scale model training. Running in parallel is an observable and rapidly growing trend in which companies are repatriating AI workloads from the public cloud to on-premises environments. This “anti-cloud” thesis represents a readjustment, rather than a backlash, mirroring other historical shifts in leadership in which prescience reordered entire industries. As Gartner has remarked, “By 2025, 60% of organizations will use sovereignty requirements as a primary factor in selecting cloud providers.” ... Navigating this transition requires fundamentally different abilities, integrating deep technical fluency with disciplined strategic thinking. AI infrastructure differs sharply from other traditional cloud workloads in that it is compute-intensive, highly resource-intensive, latency-sensitive, and tightly connected with data governance. ... The repatriation of AI workloads brings several challenges: lack of AI infrastructure talent, high upfront GPU procurement costs, operational overhead, security risks, and sustainability concerns. Leaders must manage hardware supply chain volatility, model reliability, and energy efficiency. Lacking disciplined governance, repatriation creates a high risk of cost overruns and fragmentation. The central challenge is to balance innovation with control, calling for transparency of plans and scenario modeling.


The Fragile Edge: Chaos Engineering For Reliable IoT

Chaos engineering is well established in cloud environments, where it works well, but it is far harder to apply to IoT and edge computing systems. IoT devices are physical, often located in remote places, and sometimes perform critical tasks, which makes managing them even more challenging. Restarting a cloud server from a script is usually simple; rebooting medical devices like pacemakers, industrial robots or warehouse sensors is far more complex and can be dangerous. Recovering edge devices also takes longer, because system failures often have immediate physical consequences. Chaos engineering in IoT therefore brings both benefits and challenges. Engineers need to design methods that test failures safely without harming devices, detecting equipment breakdowns while building systems that keep functioning under real operational conditions. The chaos engineering methods proven in cloud software can be adapted to meet the requirements of edge devices. ... The implementation of chaos engineering for IoT systems requires both strategic planning and innovative solutions. Engineers should perform system vulnerability tests that ensure operational safety and reliability for real-world deployment. The risk assessment process needs tested, accurate methods to protect both the devices and their users from harm. ... Organisations must also maintain ethical standards when they use chaos engineering to safeguard their IoT systems. Engineers who want to perform IoT chaos testing need to follow established safety protocols.
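One way to make "test failures safely without harming devices" concrete is a blast-radius guard: exclude critical devices up front, limit the experiment to a small sample, and check a safety gate before every injection. This is a hypothetical sketch under those assumptions, not an established IoT chaos tool:

```python
import random


def run_chaos_experiment(devices, inject_fault, is_safe, max_blast_radius=0.05):
    """Inject a fault into a small random sample of non-critical devices,
    stopping immediately if the safety gate (a 'kill switch') fails."""
    candidates = [d for d in devices if not d.get("critical")]
    sample_size = max(1, int(len(devices) * max_blast_radius))
    targets = random.sample(candidates, min(sample_size, len(candidates)))
    affected = []
    for device in targets:
        if not is_safe():  # pre-flight check before every single injection
            break
        inject_fault(device)
        affected.append(device["id"])
    return affected


# Illustrative fleet: pacemaker-class devices are excluded before sampling.
fleet = [{"id": f"sensor-{i}", "critical": False} for i in range(100)]
fleet.append({"id": "pacemaker-1", "critical": True})
hit = run_chaos_experiment(fleet, inject_fault=lambda d: None, is_safe=lambda: True)
```

The 5% blast radius and the no-op fault injector are placeholders; the structure (exclusion list, capped sample, per-step safety gate) is what carries over to real edge fleets.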


Can Agentic AI operate independently within secure parameters?

Context-aware security, enabled by Agentic AI, is essential for effective NHI management. This approach goes beyond traditional methods by understanding the context within which NHIs operate. It evaluates ownership, permissions, and usage patterns, offering invaluable insights into potential vulnerabilities. By employing context-aware security, organizations can surpass the limitations of point solutions, such as secret scanners, which provide only surface-level protection. ... With the proliferation of digital identities, organizations must adopt a comprehensive approach that incorporates both technological advancements and strategic oversight. Agentic AI, with its ability to operate independently, aligns perfectly with this need, offering a robust framework that supports the secure management of machine identities across various industries. Given the increasing complexity of the digital landscape, organizations must continuously evolve their cybersecurity strategies. ... For enterprises navigating complex regulatory environments, predictive insights from AI models can forecast potential compliance issues, allowing preemptive action. When regulations evolve, this foresight is invaluable in maintaining adherence without resource-intensive overhauls of existing processes. ... Investing in AI-driven strategies ensures that organizations can withstand disruptions, safeguarding both operational functions and reputation.

Daily Tech Digest - December 14, 2025


Quote for the day:

“It is never too late to be what you might have been.” -- George Eliot


Six questions to ask when crafting an AI enablement plan

As we near the end of 2025, there are two inconvenient truths about AI that every CISO needs to take into their heart. Truth #1: Every employee who can is using generative AI tools for their job. Even when your company doesn’t provide an account for them, even when your policy forbids it, even when the employee has to pay out of pocket. Truth #2: Every employee who uses generative AI will (or likely has already) provided this AI with internal and confidential company information. ... In the case of AI, this refers to the difference between the approved business apps that are trusted to access company data and the growing number of untrusted and unmanaged apps that have access to that data without the knowledge of IT or security teams. Essentially, employees are using unmonitored devices, which can hold any number of unknown AI apps, and each of those apps can introduce a whole lot of risk to sensitive corporate data. ... Simply put, organizations cannot afford to wait any longer to get a handle on AI governance. ... So now, the job is to craft an AI enablement plan that promotes productive use and throttles reckless behaviors. ... Think back to the mid‑2000s, when SaaS crept into the enterprise through expense reports and project trackers. IT tried to blacklist unvetted domains, finance balked at credit‑card sprawl, and legal wondered whether customer data belonged on “someone else’s computer.” Eventually, we accepted that the workplace had evolved, and SaaS became essential to modern business.


Why most enterprise AI coding pilots underperform (Hint: It's not the model)

When organizations introduce agentic tools without addressing workflow and environment, productivity can decline. A randomized control study this year showed that developers who used AI assistance in unchanged workflows completed tasks more slowly, largely due to verification, rework and confusion around intent. The lesson is straightforward: Autonomy without orchestration rarely yields efficiency. ... Security and governance, too, demand a shift in mindset. AI-generated code introduces new forms of risk: Unvetted dependencies, subtle license violations and undocumented modules that escape peer review. Mature teams are beginning to integrate agentic activity directly into their CI/CD pipelines, treating agents as autonomous contributors whose work must pass the same static analysis, audit logging and approval gates as any human developer. GitHub’s own documentation highlights this trajectory, positioning Copilot Agents not as replacements for engineers but as orchestrated participants in secure, reviewable workflows. ... Under the hood, agentic coding is less a tooling problem than a data problem. Every context snapshot, test iteration and code revision becomes a form of structured data that must be stored, indexed and reused. As these agents proliferate, enterprises will find themselves managing an entirely new data layer: One that captures not just what was built, but how it was reasoned about. 


Enabling small language models to solve complex reasoning tasks

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a collaborative approach where an LLM does the planning, then divvies up the legwork of that strategy among smaller ones. Their method helps small LMs provide more accurate responses than leading LLMs like OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as o1, while being more efficient than both. Their framework, called “Distributional Constraints by Inference Programming with Language Models” (or “DisCIPL”), has a large model steer smaller “follower” models toward precise responses when writing things like text blurbs, grocery lists with budgets, and travel itineraries. ... You may think that larger-scale LMs are “better” at complex prompts than smaller ones when it comes to accuracy and efficiency. DisCIPL suggests a surprising counterpoint for these tasks: If you can combine the strengths of smaller models instead, you may just see an efficiency bump with similar results. The researchers note that, in theory, you can plug in dozens of LMs to work together in the DisCIPL framework, regardless of size. In writing and reasoning experiments, they went with GPT-4o as their “planner LM,” which is one of the models that helps ChatGPT generate responses. 


Key trends accelerating Industrial Secure Remote Access (ISRA) Adoption

As essential maintenance and diagnostic activities continue to shift toward remote and digital execution, they become exposed to cyber risks that were not present when plants, fleets, and factories operated as isolated, closed systems. Compounding the challenge, many industrial organizations still lack the expertise and skill sets to select and operate the proper technologies that establish secure remote connections efficiently and securely. This, unfortunately, results in operational delays and slower response in critical or emergency situations. Industrial Cyber emphasizes that controlled, identity-bound, and fully auditable access to critical tasks is key to ensuring secure remote access functions as an operational and business enabler—without introducing new pathways for malicious actors. ... Compounding the risk, OT environments frequently rely on legacy hardware that lacks modern encryption capabilities, leaving these connections especially vulnerable. By centralizing access governance, securely managing vendor credentials, streamlining access-request workflows, and maintaining consistent audit trails, industrial organizations can regain control over third-party access. ... Industrial Cyber recognizes two solutions from SSH. 1) PrivX OT is purpose-built for industrial environments. The solution provides passwordless, keyless, and just-in-time industrial secure remote access using short-lived certificates and micro-segmentation to reduce risk. 2) NQX delivers quantum-safe, high-speed network encryption for site-to-site connectivity.


Navigating AI Liability: What Businesses That Utilize AI Need to Know

Cybercriminals can now use generative AI to create extremely convincing deepfakes. These deepfakes can then be used for corporate espionage, identity theft and phishing scams. AI software may end up automatically aggregating and analyzing huge amounts of data from multiple sources. This can increase privacy invasion risks when comprehensive profiles of people are compiled without their awareness or consent. AI systems which experience glitches or malfunctions, let others have unauthorized access to them, or lack robust security could lead to sensitive data being exposed. ... It is risky for your business to publish AI-generated content because AI models are trained on vast amounts of copyrighted material. The models thus end up not always creating original material, and sometimes create material which is identical to or extremely similar to copyrighted content. “It was the AI’s fault” will not be a valid argument in court if this happens to your business. Ignorance is not a defense in a copyright infringement claim. ... Content that is fully generated by AI has no copyright protection. AI-generated content that is significantly edited by humans may receive copyright protection, but the situation is murky. Original content that is created by humans and is then slightly edited or optimized by AI will usually receive full copyright protection. A lot of businesses now document the process of content creation to prove that humans created the content and preserve copyright protection.


When the Cloud Comes Home: What DBAs Need to Know About Cloud Repatriation

One of the main drivers for cloud repatriation is cost. Early cloud migrations were often justified by projected savings because there would be no more hardware to maintain. Furthermore, the cloud promised flexible scaling and pay-as-you-go pricing. Nevertheless, for many enterprises, those savings have proven elusive. Data-intensive workloads, in particular, can rack up significant cloud bills. Every I/O operation, network transfer, and storage request adds up. When workloads are steady and predictable, the cloud’s on-demand elasticity can actually become more expensive than on-prem capacity. DBAs, who often have a front-row seat to performance and utilization metrics, can play a crucial role in identifying when cloud costs are out of alignment with business value. ... In highly regulated industries, compliance concerns are another driver. Regulations such as HIPAA, PCI-DSS, GDPR and more, require your applications and the data they access to be secure and controlled. Organizations may find that managing sensitive data in the cloud introduces risk, especially when data residency, auditability, or encryption requirements evolve. Repatriating workloads can restore a sense of control and predictability—key traits valued by DBAs. ... Today’s computing needs demand an IT architecture that embraces the cloud, but also on premises workloads, including the mainframe. Remember, data gravity attracts applications to where the data resides. 


SaaS price hikes put CIOs’ budgets in a bind

Subscription prices from major SaaS vendors have risen sharply in recent months, putting many CIOs in a bind as they struggle to stay within their IT budgets. ... While inflation may have driven some cost increases in past months, rates have since stabilized, meaning there are other factors at play, Tucciarone says. Vendors are justifying subscription price hikes with frequent product repackaging schemes, consumption-based subscription models, regional pricing adjustments, and evolving generative AI offerings, he adds. “Vendors are rationalizing this as the cost of innovation and gen AI development,” he says. ... SaaS data platforms fall into a similar category as other mission-critical applications, Aymé adds, because the cost of moving an organization’s data can be prohibitively expensive, in addition to the price of a new SaaS tool. Kunal Agarwal, CEO and cofounder of data observability platform Unravel Data, also pointed to price increases for data-related SaaS tools. Data infrastructure costs, including cloud data warehouses, lakehouses, and analytics platforms, have risen 30% to 50% in the past year, he says. Several factors are driving cost increases, including the proliferation of computing-intensive gen AI workloads and a lack of visibility into organizational consumption, he adds. “Unlike traditional SaaS, where you’re paying for seats, these platforms bill based on consumption, making costs highly variable and difficult to predict,” Agarwal says.


How to simplify enterprise cybersecurity through effective identity management

“It is challenging for a lot of organizations to get a complete picture of what their assets are and what controls apply to those assets,” Persaud says. He explains that Deloitte’s identity solution assisted the customer in connecting users with the assets they utilized. As they discovered these assets, they were able to fine-tune the security controls applied to each in a more refined fashion. “If the system is going to [process] financial data and other private information, we need to put the right controls in place on the identity side,” he says. “We’ve been able to bring those two pieces together by correlating discovery of assets with discovery of identity and lining that up with controls from the IT asset management system.” ... “If you think from a broader risk management perspective, this has been fundamental to our security model,” he says. The ability to track the locations of employees and assign risk accordingly is a significant advancement in risk monitoring for a company growing its international presence. The company looks out for instances of impossible travel: if an employee signs in from one location and then from a distant location that they could not possibly have reached in the intervening period, an alert is raised. Security analysts also use the software to scan for risky sign-ins; if a user logs in from a blacklisted IP, an alert is raised. The team increasingly relies on conditional access policies driven by monitoring of user behavior.
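The impossible-travel check described above reduces to simple arithmetic: compute the great-circle distance between two sign-ins and flag the pair if the implied speed exceeds any plausible travel speed. A sketch, with an illustrative 900 km/h (roughly airliner-speed) threshold:

```python
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def is_impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Flag two logins if covering the distance between them would require
    travelling faster than max_speed_kmh."""
    hours = abs(login_b["ts"] - login_a["ts"]) / 3600
    distance = haversine_km(login_a["lat"], login_a["lon"],
                            login_b["lat"], login_b["lon"])
    if hours == 0:
        return distance > 0  # simultaneous logins from different places
    return distance / hours > max_speed_kmh


# A London sign-in followed 30 minutes later by one from New York
# (~5,570 km away) implies an apparent speed of over 11,000 km/h.
london = {"ts": 0, "lat": 51.5074, "lon": -0.1278}
new_york = {"ts": 1800, "lat": 40.7128, "lon": -74.0060}
flagged = is_impossible_travel(london, new_york)
```

Real systems layer noise handling on top of this (VPN egress points, coarse IP geolocation), but the core signal is this distance-over-time ratio.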



When an AI Agent Says ‘I Agree,’ Who’s Consenting?

The most autonomous agents can execute a chain of actions related to a transaction—such as comparing, booking, paying, forwarding the invoice. The broader the autonomy, the tighter the frame: precise contractual rules, allow-lists, budgets, a kill-switch, clear user notices, and, where required, electronic signatures. At this point the question stops being technical and becomes legal: under what framework does each agent-made click have effect, on whose authority, and with what safeguards? European law and national laws already offer solid anchors—agency and online contracting, signatures and secure payments, fair disclosure—now joined by the newer eIDAS 2 and the AI Act. ... Under European law, an AI agent has no will of its own. It is a means of expressing—or failing to express—someone’s will. Legally, someone always consents: the user (consumer) or a representative in the civil law sense. If an agent “accepts” an offer, we are back to agency: the act binds the principal only within the authority granted; beyond that, it is unenforceable. The agent is not a new subject of law. ... Who is on the hook if consent is tainted? First, the business that designs the onboarding. Europe’s Digital Services Act (DSA) bans deceptive interfaces (“dark patterns”) that materially impair a user’s ability to make a free, informed choice. A pushy interface can support a finding of civil fraud and a regulatory breach. Second, the principal is bound only within the mandate. 
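The contractual frame described above (allow-lists, budgets, a kill-switch, action only within the authority granted) can be sketched as explicit pre-checks before any agent-made "click" takes effect. The class, merchant names, and amounts below are invented for illustration; this is not a reference to any real agent framework:

```python
class MandateExceeded(Exception):
    """Raised when the agent would act beyond the authority granted."""


class AgentMandate:
    """Frame for an autonomous agent: allow-list, budget, kill switch.
    Acts inside the mandate bind the principal; acts outside it are refused."""

    def __init__(self, allowed_merchants, budget, killed=False):
        self.allowed_merchants = set(allowed_merchants)
        self.budget = budget
        self.killed = killed

    def authorize(self, merchant, amount):
        """Permit a purchase only if it sits entirely inside the mandate."""
        if self.killed:
            raise MandateExceeded("kill switch engaged")
        if merchant not in self.allowed_merchants:
            raise MandateExceeded(f"{merchant} is not on the allow-list")
        if amount > self.budget:
            raise MandateExceeded(f"{amount} exceeds remaining budget {self.budget}")
        self.budget -= amount  # each spend draws down the remaining authority
        return True


mandate = AgentMandate(allowed_merchants={"train-operator"}, budget=200.0)
ok = mandate.authorize("train-operator", 120.0)  # within mandate
```

The legal point maps directly onto the code: a purchase that raises `MandateExceeded` is exactly the act that, in agency terms, would be unenforceable against the principal.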


AI cybercrime agents will strike in 2026: Are defenders ready?

The prediction itself isn’t novel. What’s sobering is the math behind it—and the widening gap between how fast organisations can defend versus how quickly they’re being attacked. “The groups that convert intelligence into monetisation the fastest will set the tempo,” Rashish Pandey, VP of marketing & communications for APAC at Fortinet, told journalists at a media briefing earlier this week. “Throughput defines impact.” This isn’t about whether AI will be weaponised—that’s already happening. The urgent question is whether defenders can close what Fortinet calls the “tempo differential” before autonomous AI agents fundamentally alter the economics of cybercrime. ... The evolution extends beyond speed. Fortinet’s predictions highlight how attackers are weaponising generative AI for rapid data analysis—sifting through stolen information to identify the most valuable targets and optimal extortion strategies before defenders even detect the breach. This aligns with broader attack trends: ransomware operations increasingly blend system disruption with data theft and multi-stage extortion. Critical infrastructure sectors—healthcare, manufacturing, utilities—face heightened risk as operational technology systems become targets. ... “The ‘skills gap’ is less about scarcity and more about alignment—matching expertise to the reality of machine-speed, data-driven operations,” Pandey noted during the briefing.

Daily Tech Digest - December 07, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone



Balancing AI innovation and cost: The new FinOps mandate

Yet as AI moves from pilot to production, an uncomfortable truth is emerging: AI is expensive. Not because of reckless spending, but because the economics of AI are unlike anything technology leaders have managed before. Most CIOs and CTOs underestimate the financial complexity of scaling AI. Models that double in size can consume ten times the compute. Exponential should be your watchword. Inference workloads run continuously, consuming GPU cycles long after training ends, which creates a higher ongoing cost compared to traditional IT projects. ... The irony is that even as AI drives operational efficiency, its own operating costs are becoming one of the biggest drags on IT budgets. IDC’s research shows that, without tighter alignment between line of business, finance, and platform engineering, enterprises risk turning AI from an innovation catalyst into a financial liability. ... AI workloads cut across infrastructure, application development, data governance, and business operations. Many AI workloads will run in a hybrid environment, meaning cost impacts for on-premises as well as cloud and SaaS are expected. Managing this multicloud and hybrid landscape demands a unified operating model that connects technical telemetry with financial insight. The new FinOps leader will need fluency in both IT engineering and economics — a rare but rapidly growing skill set that will define next-generation IT leadership.
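A back-of-envelope model makes the point about continuous inference cost concrete: training is roughly a one-off, while inference accrues every day the model serves traffic. All figures below are invented for illustration and are not IDC data:

```python
def annual_ai_cost(training_cost, gpu_hour_price, inference_gpu_hours_per_day):
    """Total first-year cost: a one-off training run plus inference that
    keeps consuming GPU cycles every day the model is in production."""
    inference_cost = inference_gpu_hours_per_day * gpu_hour_price * 365
    return training_cost + inference_cost


# Illustrative only: a $250k training run, $2.50 per GPU-hour, and
# 1,000 GPU-hours of inference per day once the model is live.
total = annual_ai_cost(250_000, 2.50, 1_000)
inference_share = (total - 250_000) / total  # fraction of spend that recurs
```

Even with these modest placeholder numbers, recurring inference dominates the first-year bill, which is the dynamic that pulls FinOps into AI budgeting.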


Local clouds shape Europe’s AI future

The new “sovereign” offerings from US-based cloud providers like Microsoft, AWS, and Google represent a significant step forward. They are building cloud regions within the EU, promising that customer data will remain local, be overseen by European citizens, and comply with EU laws. They’ve hired local staff, established European governance, and crafted agreements to meet strict EU regulations. The goal is to reassure customers and satisfy regulators. For European organizations facing tough questions, these steps often feel inadequate. Regardless of how localized the infrastructure is, most global cloud giants still have their headquarters in the United States, subject to US law and potential political pressure. There is always a lingering, albeit theoretical, risk that the US government might assert legal or administrative rights over data stored in Europe. ... As more European organizations pursue digital transformation and AI-driven growth, the evidence is mounting: The new sovereign cloud solutions launched by the global tech giants aren’t winning over the market’s most sensitive or risk-averse customers. Those who require freedom from foreign jurisdiction and total assurance that their data is shielded from all external interference are voting with their budgets for the homegrown players. ... In the months and years ahead, I predict that Europe’s own clouds—backed by strong local partnerships and deep familiarity with regulatory nuance—will serve as the true engine for the region’s AI ambitions.


When Innovation and Risks Collide: Hexnode and Asia’s Cybersecurity Paradox

“If you look at the way most cyberattacks happen today—take ransomware, for example—they often begin with one compromised account. From there, attackers try to move laterally across the network, hunting for high-value data or systems. By segmenting the network and requiring re-authentication at each step, ZT essentially blocks that free movement. It’s a “verify first, then grant access” philosophy, and it dramatically reduces the attacker’s options,” Pavithran explained. Unfortunately, way too many organisations still view Zero Trust as a tool rather than a strategic framework. Others believe it requires ripping out existing infrastructure. In reality, however, Zero Trust can be implemented incrementally and is both adaptable and scalable. It integrates technologies such as multifactor authentication, microsegmentation, and identity and access management into a cohesive architecture. Crucially, Zero Trust is not a one-off project. It is a continuous process of monitoring, verification, and fine-tuning. As threats evolve, so too must policies and controls. “Zero Trust isn’t a box you check and move on from,” Pavithran emphasised. “It’s a continuous, evolving process. Threats evolve, technologies evolve, and so do business needs. That means policies and controls need to be constantly reviewed and fine-tuned. It’s about continuous monitoring and ongoing vigilance—making sure that every access request, every single time, is both appropriate and secure.”
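The "verify first, then grant access" idea Pavithran describes can be sketched as a per-segment policy check; the segment names, policy fields and users below are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical per-segment access policy: which identities may enter each
# network segment, and whether that segment demands fresh MFA.
POLICY = {
    "hr-data":     {"require_mfa": True,  "allowed": {"alice"}},
    "public-wiki": {"require_mfa": False, "allowed": {"alice", "bob"}},
}

@dataclass
class Request:
    user: str
    device_compliant: bool
    mfa_verified: bool
    segment: str

def grant(req: Request) -> bool:
    """Verify first, then grant: identity, device posture and MFA are
    re-checked for every segment instead of once at the perimeter."""
    rule = POLICY.get(req.segment)
    if rule is None or req.user not in rule["allowed"]:
        return False                 # unknown segment or identity: deny
    if not req.device_compliant:
        return False                 # device posture fails: deny
    if rule["require_mfa"] and not req.mfa_verified:
        return False                 # segment demands fresh MFA: deny
    return True

# A compromised account that passed the perimeter still cannot move laterally:
allowed = grant(Request("alice", device_compliant=True, mfa_verified=True, segment="hr-data"))
blocked = grant(Request("bob", device_compliant=True, mfa_verified=True, segment="hr-data"))
```

Because every request is evaluated against the target segment's own rule, the lateral movement described in the ransomware example fails at each hop rather than only at the network edge.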


CIOs take note: talent will walk without real training and leadership

“Attracting and retaining talent is a problem, so things are outsourced,” says the CIO of a small healthcare company with an IT team of three. “You offload the responsibility and free up internal resources at the risk of losing know-how in the company. But at the moment, we have no other choice. We can’t offer the salaries of a large private group, and IT talent changes jobs every two years, so keeping people motivated is difficult. We hire a candidate, go through the training, and see them grow only to see them leave. But our sector is highly specialized and the necessary skills are rare.” ... CIOs also recognize the importance of following people closely, empowering them, and giving them a precise and relevant role that enhances motivation. It’s also essential to collaborate with the HR function to develop tools for welfare and well-being. According to the Gi Group study, the factors that IT candidates in Italy consider a priority when choosing an employer are, in descending order, salary, a hybrid job offer, work-life balance, the possibility of covering roles that don’t involve high stress levels, and opportunities for career advancement and professional growth. But there’s another aspect that helps solve the age-old issue of talent management. CIOs need to recognize more of the role of their leadership. At the moment, Italian IT directors place it at the bottom of their key qualities. 


Rethinking the CIO-CISO Dynamic in the Age of AI

Today's CIOs are perpetual jugglers, balancing budgets and helping spur technology innovation at speed while making sure IT goals are aligned with business priorities, especially when it comes to navigating mandates from boards and senior leaders to streamline and drive efficiency through the latest AI solutions. ... "The most common concern with having the CISO report into legal is that legal is not technically inclined," she said. "This is actually a positive as cybersecurity has become more of a business-enabling function over a technological one. It also requires the CISO to translate tech-speak into language that is understandable by non-tech leaders in the organization and incorporate business and strategic drivers." As organizations undergo digital transformation and incorporate AI into their tech stacks, more are creating alternate C-suite roles such as "Chief Digital Officer" and "Chief AI Officer."  ... When it comes to AI systems, the CISO's organization may be better positioned to lead enterprise-wide transformation, Sacolick said. AI systems are nondeterministic - they can produce different outputs and follow different computational paths even when given the exact same input - and this type of technology may be better suited for CISOs. CIOs have operated in the world of deterministic IT systems, where code, infrastructure systems, testing frameworks and automation provide predictable and consistent outputs, while CISOs are immersed in a world of ever-changing, unpredictable threats.
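The nondeterminism described above can be illustrated with a toy sampler; the candidate outputs and weights below are invented for illustration and stand in for an LLM's next-token distribution.

```python
import random

# Toy illustration of nondeterministic AI output: real LLMs sample each token
# from a probability distribution, so identical inputs can follow different
# computational paths. Candidates and weights here are invented placeholders.

def sample_reply(prompt: str, rng: random.Random) -> str:
    candidates = ["approve", "escalate", "deny"]
    weights = [0.5, 0.3, 0.2]
    return rng.choices(candidates, weights=weights)[0]

# Two runs of the same prompt with independent randomness may disagree;
# only pinning the seed makes the output reproducible.
run_a = sample_reply("review this access request", random.Random())
run_b = sample_reply("review this access request", random.Random())
pinned = sample_reply("review this access request", random.Random(0))
```

A deterministic IT system would return the same answer for the same input every time; here only the seeded call is guaranteed to, which is the governance gap the excerpt argues CISOs are better equipped to manage.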


The AI reckoning: How boards can evolve

AI-savvy boards will be able to help their companies navigate these risks and opportunities. According to a 2025 MIT study, organizations with digitally and AI-savvy boards outperform their peers by 10.9 percentage points in return on equity, while those without are 3.8 percent below their industry average. What boards should do, however, is the bigger question—and the focus of this article. The intensity of the board’s role will depend on the extent to which AI is likely to affect the business and its competitive dynamics and the resulting risks and opportunities. Those competitive dynamics should shape the company’s AI posture and the board’s governance stance. ... What matters is that the board aligns on the business’s aspirational strategy using a clear view of the opportunities and risks so that it can tailor the governance approach. As the business gains greater experience with AI, the board can modify its posture. ... Directors should focus on determining whether management has the entrepreneurial experience, technological know-how, and transformational leadership experience to run an AI-driven business. The board’s role is particularly important in scrutinizing the sustainability of these ventures—including required skills, implications on the traditional business, and energy consumption—while having a clear view of the range of risks to address, such as data privacy, cybersecurity, the global regulatory environment, and intellectual property (IP).


Do Tariffs Solicit Cyber Attention? Escalating Risk in a Fractured Supply Chain

Offensive cyber operations are a fourth possibility largely serving to achieve the tactical and strategic objectives of decisionmakers, or in the case of tariff imposition, retaliation. Depending on its goals, a government may use the cyber domain to steal sensitive information such as amount and duration of a potential tariff or try to ascertain the short- and long-term intent of the tariff-imposing government. A second option may be a more aggressive response, executing disruptive operations to signal its dissatisfaction over tariff rates. ... It’s tempting to think of tariffs as purely a policy lever, and a way to increase revenue or ratchet up pressure on foreign governments. But in today’s interconnected world, trade policy and cybersecurity policy are deeply intertwined. When they aren’t aligned, companies risk becoming collateral damage in the larger geopolitical space, where hostile actors jockey to not only steal data for profit, but also look to steal secrets, compromise infrastructure, and undermine trust. This offers adversaries new ways to facilitate cyber intrusion to accomplish all of these objectives, requiring organizations to up their efforts in countering these threats via a variety of established practices. These include rigorous third-party vetting; continuous monitoring of third-party access through updates, remote connections, and network interfaces; implementing zero trust architecture; and designing incident response playbooks specifically around supply-chain breaches, counterfeit-hardware incidents, and firmware-level intrusions.


Resilience: How Leaders Build Organizations That Bend, Not Break

Resilient leaders don’t aim to restore what was; they reinvent what’s next. Leadership today is less about stability and more about elasticity—the ability to stretch, adapt, and rebound without breaking. ... Resilient cultures don’t eliminate risk—they absorb it. Leaders who privilege learning over blame and transparency over perfection create teams that can think clearly under pressure. In my companies, we’ve operationalized this with short, ritualized cadences—weekly priorities, daily huddles, and tight AARs that focus on behavior, not ego. The goal is never to defend a plan; it’s to upgrade it. ... “Resilience is mostly about adaptation rather than risk mitigation.” The distinction matters. Risk mitigation reduces downside. Adaptation converts disruption into forward motion. The organizations that redefine their categories after shocks aren’t the ones that avoid volatility; they’re the ones that metabolize it. ... In uncertainty, people don’t expect perfection—they expect presence. Transparent leadership doesn’t eliminate volatility, but it changes how teams experience it. Silence erodes trust faster than any market correction; people fill gaps with assumptions that are worse than reality. ... Treat resilience as design, not reaction. Build cultures that absorb shock, operating systems that learn fast, and communication habits that anchor trust. In an era where strategy half-life keeps shrinking, these are the leaders—and organizations—that won’t just survive volatility. 


AI-Powered Quality Engineering: How Generative Models Are Rewriting Test Strategies

Despite significant investments in automation, many organizations still struggle with the same bottlenecks. Test suites often collapse due to minor UI changes. Maintenance cycles grow longer each quarter. Even mature teams rarely achieve effective coverage that truly exceeds 70-80%. Regression cycles stretch for days or weeks, slowing down release velocity and diluting confidence across engineering teams. It isn’t just productivity that suffers; it’s trust. These problems reduce teams’ confidence in releasing immediately and diminish automation ROI in addition to slowing down delivery. Traditional test automation has reached its limits because it automates execution, not understanding. And this is exactly where Generative AI changes the conversation. ... Synthetic data that mirrors production variability can be produced without waiting for dependent systems. Scripts no longer break every time a button shifts. As AI self-heals selectors and locators without human assistance, tests start to regenerate themselves. While predictive signals identify defects early through examining past data and patterns, natural-language inputs streamline test descriptions. ... GenAI isn’t magic, though. When generative models are fed ambiguous input, they can produce brittle or incorrect test cases. Ingesting production logs or traces without adequate anonymization introduces privacy and compliance risks that must be managed.
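The self-healing selector idea can be sketched as a ranked-fallback lookup; the dictionary below stands in for a live DOM query (such as a Selenium `find_element` call), and the locator names are invented.

```python
# Ranked-fallback "self-healing" lookup: instead of one brittle locator, the
# test keeps a list of candidates and promotes whichever one still matches.

def find_with_healing(dom: dict, locators: list):
    """Try locators in order; when a fallback matches, promote it to the
    front so the suite 'heals' after a UI change instead of failing."""
    for i, loc in enumerate(locators):
        element = dom.get(loc)          # stand-in for driver.find_element(loc)
        if element is not None:
            if i > 0:                   # a fallback matched: record the heal
                locators.insert(0, locators.pop(i))
            return element
    raise LookupError("no locator matched; human review needed")

# The button's id changed from 'submit-btn' to 'checkout-submit':
dom_after_ui_change = {"checkout-submit": "<button>"}
locs = ["submit-btn", "checkout-submit"]
element = find_with_healing(dom_after_ui_change, locs)
```

After the call, `locs` starts with the locator that actually worked, so the next run tries it first; the script survived the UI change without a human edit.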


The Great Cloud Exodus: Why European Companies Are Massively Returning to Their Own Infrastructure

Many European managers and policymakers live under the assumption that when they choose "Region Western Europe" (often physically located in datacenters around Amsterdam or Eemshaven), their data is safely shielded from American interference. "The data is in our country, isn't it?" is the oft-heard defense. This is, legally speaking, a dangerous illusion. American legislation doesn't look at the ground on which the server stands, but at who holds the keys to the front door. ... The legal criterion is not the location of the server, but the control ("possession, custody, or control") that the American parent company has over the data. Since Microsoft Corporation in Redmond, Washington, has full control over subsidiary Microsoft Netherlands BV, data in the datacenter in the Wieringermeer legally falls under the direct scope of an American subpoena. ... Additionally, Microsoft applies "consistent global pricing," meaning European customers often see additional increases to align Euro prices with the strong US dollar. This makes budgeting a nightmare of foreign exchange risks. AWS shows a similar pattern. The complexity of the AWS bill is now notorious; an entire industry of "FinOps" consultants has emerged to help companies understand their invoice. ... For organizations seeking ultimate control and data sovereignty, purchasing their own hardware and placing it in a Dutch datacenter is the best option. This approach combines the advantages of on-premise with the infrastructure of a professional datacenter.

Daily Tech Digest - September 26, 2025


Quote for the day:

“You may be disappointed if you fail, but you are doomed if you don’t try.” -- Beverly Sills



Moving Beyond Compliance to True Resilience

Organisations that treat compliance as the finish line are missing the bigger picture. Compliance frameworks such as HIPAA, GDPR, and PCI-DSS provide critical guidelines, but they are not designed to cover the full spectrum of evolving cyber threats. Cybercriminals today use AI-driven reconnaissance, deepfake impersonations, and polymorphic phishing techniques to bypass traditional defences. Meanwhile, businesses face growing attack surfaces from hybrid work models and interconnected systems. A lack of leadership commitment, underfunded security programs, and inadequate employee training exacerbate the problem. ... Building resilience requires more than reactive policies; it calls for layered, proactive defence mechanisms such as threat intelligence, endpoint detection and response (EDR), and intrusion prevention systems (IPS). These are essential for identifying and stopping threats before they can cause damage and should form the front line of defence, ultimately reducing exposure and giving teams the visibility they need to act swiftly. ... True cyber resilience means moving beyond regulatory compliance to develop strategic capabilities that protect against, respond to, and recover from evolving threats. This includes implementing both offensive and defensive security layers, such as penetration testing and real-time intrusion prevention, to identify weaknesses before attackers do.


Architecture Debt vs Technical Debt: Why Companies Confuse Them and What It Costs Business

The contrast is clear: technical debt reflects inefficiencies at the system level — poorly structured code, outdated infrastructure, or quick fixes that pile up over time. Architecture debt emerges at the enterprise level — structural weaknesses across applications, data, and processes that manifest as duplication, fragmentation, and misalignment. One constrains IT efficiency; the other constrains business competitiveness. Recognizing this difference is the first step toward making the right strategic investments. ... The difference lies in visibility: technical debt is tangible for developers, showing up in unstable code, infrastructure issues, and delayed releases. Architecture debt, by contrast, hides in organizational complexity: duplicated platforms, fragmented data, and misaligned processes. When CIOs and business leaders hear the word “debt,” they often assume it refers to the same challenge. It does not. ... Recognizing this distinction is critical because it determines where investments should be made. Addressing technical debt improves efficiency within systems; addressing architecture debt strengthens the foundations of the enterprise. One enables smoother operations, while the other ensures long-term competitiveness and resilience. Leaders who fail to separate the two risk solving local problems while leaving the structural weaknesses that undermine the organization’s future unchallenged.


Data Fitness in the Age of Emerging Privacy Regulations

Enter the concept of Data Fitness: a multidimensional measure of how well data aligns with privacy principles, business objectives, and operational resilience. Much like physical fitness, data fitness is not a one-time achievement but a continuous discipline. Data fitness is not just about having high-quality data, but also about ensuring that data is managed in a way that is compliant, secure, and aligned with business objectives. ... The emerging privacy regulations have also introduced a new layer of complexity to data management. They shift the focus from simply collecting and monetizing data to a more responsible and transparent approach, which calls for a sweeping review and redesign of all applications and processes that handle data. ... The days of storing customer data forever are over. New regulations often specify that personal data can only be retained for as long as it's needed for the purpose for which it was collected. This requires companies to implement robust data lifecycle management and automated deletion policies. ... Data privacy isn't just an IT or legal issue; it's a shared responsibility. Organizations must educate and train all employees on the importance of data protection and the specific policies they need to follow. A strong privacy culture can be a competitive advantage, building customer trust and loyalty. ... It's no longer just about leveraging data for profit; it's about being a responsible steward of personal information. 
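Purpose-bound retention of the kind described can be sketched as a small policy check; the purposes and retention periods below are hypothetical, not drawn from any regulation.

```python
from datetime import date, timedelta

# Hypothetical retention schedule, keyed by the purpose the data was
# collected for. Real periods come from legal/compliance, not from code.
RETENTION_DAYS = {"billing": 2555, "marketing": 365, "support": 730}

def expired(collected_on: date, purpose: str, today: date) -> bool:
    """A record outlives its purpose once the purpose-specific window closes."""
    limit = RETENTION_DAYS.get(purpose)
    if limit is None:
        return True  # unknown purpose: fail closed and flag for deletion/review
    return today > collected_on + timedelta(days=limit)

records = [
    {"id": 1, "purpose": "marketing", "collected_on": date(2023, 1, 1)},
    {"id": 2, "purpose": "billing",   "collected_on": date(2023, 1, 1)},
]
to_delete = [r["id"] for r in records
             if expired(r["collected_on"], r["purpose"], today=date(2025, 1, 1))]
```

Run on a schedule, a check like this turns "delete when no longer needed" from a policy statement into an enforced lifecycle: the marketing record is flagged while the billing record, with its longer window, is kept.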


Independent Management of Cloud Secrets

An independent approach to NHI management can empower DevOps teams by automating the lifecycle of secrets and identities, thus ensuring that security doesn’t compromise speed or agility. By embedding secrets management into the development pipeline, teams can preemptively address potential overlaps and misconfigurations, as highlighted in the resource on common secrets security misconfigurations. Moreover, NHIs’ automation capabilities can assist DevOps enterprises in meeting regulatory audit requirements without derailing their agile processes. This harmonious blend of compliance and agility allows for a framework that effectively bridges the gap between speed and security. ... Automation of NHI lifecycle processes not only saves time but also fortifies systems by means of stringent access control. This is critical in large-scale cloud deployments, where automated renewal and revocation of secrets ensure uninterrupted and secure operations. More insightful strategies can be explored in Secrets Security Management During Development. ... While the integration of systems provides comprehensive security benefits, there is an inherent risk in over-relying on interconnected solutions. Enterprises need a balanced approach that allows for collaboration between systems without compromising individual segment vulnerabilities. A delicate balance is found by maintaining independent secrets management systems, which operate cohesively but remain distinct from operational systems. 
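Automated renewal and revocation can be sketched as a small lifecycle routine; the 24-hour window, the identity name and the in-memory store below are simplified stand-ins for a real secrets manager.

```python
import secrets
import time

MAX_AGE_SECONDS = 86_400  # hypothetical 24-hour rotation window

store = {}  # stand-in for a real secrets manager backend

def issue(identity: str) -> str:
    """Create and register a fresh secret for a non-human identity."""
    value = secrets.token_urlsafe(32)
    store[identity] = {"value": value, "issued_at": time.time(), "revoked": False}
    return value

def rotate_if_stale(identity: str, now=None) -> bool:
    """Revoke and reissue the secret once it exceeds its max age."""
    record = store[identity]
    now = time.time() if now is None else now
    if now - record["issued_at"] >= MAX_AGE_SECONDS:
        record["revoked"] = True   # old credential is dead from this point on
        issue(identity)            # replaced with a fresh one, no human involved
        return True
    return False

issue("ci-pipeline")
# Simulate a scheduled check 25 hours later: rotation fires automatically.
rotated = rotate_if_stale("ci-pipeline", now=store["ci-pipeline"]["issued_at"] + 90_000)
```

The point of the sketch is the shape of the loop: issue, age out, revoke, reissue — with no step depending on a person remembering to act.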


Why cloud repatriation is back on the CIO agenda

Cost pressure often stems from workload shape. Steady, always-on services do not benefit from pay-as-you-go pricing. Rightsizing, reservations and architecture optimization will often close the gap, yet some services still carry a higher unit cost when they remain in public cloud. A placement change then becomes a sensible option. Three observations support a measurement-first approach. Many organizations report that managing cloud spend is their top challenge; egress fees and associated patterns affect a growing share of firms, and the FinOps community places unit economics and allocation at the centre of cost accountability. ... Public cloud remains viable for many regulated workloads, assisted by sovereign configurations. Examples include the AWS European Sovereign Cloud (scheduled to be released at the end of 2025), the Microsoft EU Data Boundary and Google’s sovereign controls and partner offerings. These options have scope limits that should be assessed during design. ... Repatriation tends to underperform where workloads are inherently elastic or seasonal, where high-value managed services would need to be replicated at significant opportunity cost, where the organization lacks the run maturity for private platforms, or where the cost issues relate primarily to tagging, idle resources or discount coverage that a FinOps reset can address.
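A measurement-first placement decision can be sketched as a simple break-even comparison for a steady, always-on workload; every dollar figure below is a hypothetical input the FinOps team would replace with measured data.

```python
# Break-even sketch: public-cloud run rate vs. owned hardware in colocation,
# both expressed as a monthly figure so the two placements are comparable.

def cloud_monthly(compute: float, storage: float, egress_gb: float,
                  egress_rate: float) -> float:
    """All-in monthly public cloud cost, including egress fees."""
    return compute + storage + egress_gb * egress_rate

def repatriated_monthly(hardware_capex: float, amortize_months: int,
                        colo_and_power: float, ops_labour: float) -> float:
    """Owned hardware: capex amortized monthly, plus colo, power and run team."""
    return hardware_capex / amortize_months + colo_and_power + ops_labour

cloud = cloud_monthly(compute=22_000, storage=3_000,
                      egress_gb=50_000, egress_rate=0.08)
onprem = repatriated_monthly(hardware_capex=400_000, amortize_months=36,
                             colo_and_power=6_000, ops_labour=8_000)
print(f"cloud ${cloud:,.0f}/mo vs repatriated ${onprem:,.0f}/mo")
```

With these invented inputs the repatriated option is cheaper, but the sketch cuts both ways: change the workload shape to something elastic or the amortization assumptions and the comparison flips, which is exactly why measurement must precede the placement change.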


Colocation meets regulation

While there have been many instances of behind-the-meter agreements in the data center sector, the AWS-Talen agreement differed in both scale and choice of energy. Unlike previous instances, often utilizing onsite renewables, the AWS deal involved a regional key generation asset, which provides consistent and reliable power to the grid. As a result, to secure the go-ahead, PJM Interconnection, the regional transmission operator in charge of the utility services in the state, had to apply for an amendment to the plant's existing Interconnection Service Agreement (ISA), permitting the increased power supply. However, rather than the swift approval the companies hoped for, two major utilities that operate in the region, Exelon and American Electric Power (AEP), vehemently opposed the amended ISA, submitting a formal objection to its provisions. ... Since the rejection by FERC, Talen and AWS have reimagined the agreement, moving it from a behind-the-meter to an in-front-of-the-meter arrangement. The 17-year PPA will see Talen supply AWS with 1.92GW of power, ramped up over the next seven years, with the power provided through PJM. This reflects a broader move within the sector, with both Talen and nuclear energy generator Constellation indicating their intention to focus on grid-based arrangements going forward. Despite this, Phillips still believes that under the correct circumstances, colocation can be a powerful tool, especially for AI and hyperscale cloud deployments seeking to scale quickly.


Employees learn nothing from phishing security training, and this is why

Phishing training programs are a popular tactic aimed at reducing the risk of a successful phishing attack. They may be performed annually or over time, and typically, employees will be asked to watch and learn from instructional materials. They may also receive fake phishing emails sent by a training partner over time, and if they click on suspicious links within them, these failures to spot a phishing email are recorded. ... "Taken together, our results suggest that anti-phishing training programs, in their current and commonly deployed forms, are unlikely to offer significant practical value in reducing phishing risks," the researchers said. According to the researchers, a lack of engagement in modern cybersecurity training programs is to blame, with engagement rates often recorded as less than a minute or none at all. When there is no engagement with learning materials, it's unsurprising that there is no impact. ... To combat this problem, the team suggests that, for a better return on investment in phishing protection, a pivot to more technical help could work. For example, enforcing two-factor or multi-factor authentication (2FA/MFA) on endpoint devices, and restricting credential sharing and use to trusted domains only. That's not to say that phishing programs don't have a place in the corporate world. We should also go back to the basics of engaging learners.


SOC teams face 51-second breach reality—Manual response times are officially dead

When it takes just 51 seconds for attackers to breach and move laterally, SOC teams need more help. ... Most SOC teams first aim to extend ROI from existing operations investments. Gartner's 2025 Hype Cycle for Security Operations notes that organizations want more value from current tools while enhancing them with AI to handle an expansive threat landscape. William Blair & Company's Sept. 18 note on CrowdStrike predicts that "agentic AI potentially represents a 100x opportunity in terms of the number of assets to secure," with TAM projected to grow from $140 billion this year to $300 billion by 2030. ... Kurtz's observation reflects concerns among SOC leaders and CISOs across industries. VentureBeat sees enterprises experimenting with differentiated architectures to solve governance challenges. Shlomo Kramer, co-founder and CEO of Cato Networks, offered a complementary view in a VentureBeat interview: "Cato uses AI extensively… But AI alone can't solve the range of problems facing IT teams. The right architecture is important both for gathering the data needed to drive AI engines, but also to tackle challenges like agility, connecting enterprise edges, and user experience." Kramer added, "Good AI starts with good data. Cato logs petabytes weekly, capturing metadata from every transaction across the SASE Cloud Platform. We enrich that data lake with hundreds of threat feeds, enabling threat hunting, anomaly detection, and network degradation detection."


Timeless inclusive design techniques for a world of agentic AI

Progressive enhancement and inclusive design allow us to design for as many users as possible. They are core components of user-centered design. The word "user" often hides the complex magnificence of the human being using your product, in all their beautiful diversity. And it’s this rich diversity that makes inclusive design so important. We are all different, and use things differently. While you enjoy that sense of marvel at the richness and wonder of your users' lives, there is no need to feel it for AI agents. These agents are essentially just super-charged "stochastic parrots" (to borrow a phrase from esteemed AI ethicist and professor of Computational Linguistics Emily M. Bender) guessing the next token. ... Every breakthrough since we learnt to make fire has been built on what came before. Isaac Newton said he could only see so far because he was "standing on the shoulders of giants". The techniques and approaches needed to enable this new wave of agent-powered AI devices have been around for a long time. But they haven't always been used. In our desire to ship the shiniest features, we often forget to make our products work for people who rely on accessibility features. ... Patterns are things like adding a "skip to content link" and implementing form validation in a way that makes it easier to recover from errors. Alongside patterns, there is a wealth of freely available accessibility testing tools that can tell you if your product is meeting necessary standards.


Stronger Resilience Starts with Better Dependency Mapping

As recent disruptions made painfully clear, you cannot manage what you cannot see. When a single upstream failure ripples through eligibility checks, billing, scheduling, or clinical systems, executives need answers in minutes, not months. Who is impacted? What services are degraded? Which applications are truly critical? What are our fourth-party exposures? In too many organizations, those answers require a scavenger hunt. ... Modern operations rely on external platforms for authorizations, payments, data enrichment, analytics, and communications, yet many organizations stop their mapping at the data center boundary. That blind spot creates serious risk, since a single vendor outage can ripple across multiple critical services. Regulators are responding. In the U.S., the OCC, Federal Reserve, and FDIC’s 2023 Interagency Guidance on Third-Party Risk Management requires banks to identify and monitor critical vendor relationships, including subcontractors and concentration risks. ... Dependency data without impact data is trivia. Mapping is only valuable when assets and services are tied to business impact analysis (BIA) outputs like recovery time objectives and maximum tolerable downtime. Without this, leaders face a flat picture of connections but no way to prioritize what to restore first, or how long they can operate without a service before consequences cascade.
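Tying a dependency map to BIA outputs can be sketched as a sort over recovery time objectives; the service names, dependencies and RTO values below are hypothetical.

```python
# Minimal dependency map plus BIA data: when an upstream dependency fails,
# restoration order falls out of the impacted services' RTOs, not guesswork.

DEPENDS_ON = {
    "billing":    {"payments-vendor", "core-db"},
    "scheduling": {"core-db"},
    "analytics":  {"core-db", "enrichment-vendor"},
}
RTO_MINUTES = {"billing": 60, "scheduling": 240, "analytics": 1440}  # from the BIA

def impacted(failed_dependency: str) -> list:
    """Services hit by the failure, ordered most-urgent RTO first."""
    hit = [svc for svc, deps in DEPENDS_ON.items() if failed_dependency in deps]
    return sorted(hit, key=RTO_MINUTES.__getitem__)

print(impacted("core-db"))  # billing first: tightest RTO
```

The same structure answers the "who is impacted, in what order do we restore" question in seconds, and extends naturally to fourth-party exposure by nesting vendors' own dependencies into the map.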

Daily Tech Digest - September 21, 2025


Quote for the day:

"The world's most deadly disease is hardening of the attitudes." -- Zig Ziglar



AI sharpens threat detection — but could it dull human analyst skills?

While AI offers clear advantages, there are real risks when used without caution. Blind trust in AI-generated recommendations can lead to missed threats or incorrect actions, especially when professionals rely too heavily on prebuilt threat scores or automated responses. A lack of curiosity to validate findings weakens analysis and limits learning opportunities from edge cases or anomalies. This mirrors patterns seen in internet search behavior, where users often skim for quick answers rather than digging deeper, which bypasses the critical thinking that strengthens neural connections and sparks new ideas. In cybersecurity — where stakes are high and threats evolve fast — human validation and healthy skepticism remain essential. ... AI literacy is becoming a must-have skill for cybersecurity teams, especially as more organizations adopt automation to handle growing threat volumes. Incorporating AI education into security training and tabletop exercises helps professionals stay sharp and confident when working alongside intelligent tools. When teams can spot AI bias or recognize hallucinated outputs, they’re less likely to take automated insights at face value. This kind of awareness supports better judgment and more effective responses. It also pays off, as organizations that use security AI and automation extensively save an average of $2.22 million in prevention costs.


Repatriation games: the mid-market reevaluates its public cloud consumption

Many IT decision-makers were quick to blame public cloud service providers. But it’s more likely that the applications and workloads were never intended for public cloud environments. Or that cloud-enabled applications and workloads were incorrectly configured. Either way, poor application and workload performance meant that the expected efficiency gains and cost savings from public cloud adoption did not materialize. This led to budgeting and resourcing problems, as well as friction between IT management, senior leadership teams, and other stakeholders. ... Concerns over data sovereignty and compliance have also influenced decisions to repatriate public cloud workloads and adopt a hybrid cloud model, particularly due to worries about DORA, GDPR and the US Cloud Act compliance. DORA and GDPR both place greater emphasis on data sovereignty, so organizations need to have greater control over where their data resides. This makes a strong case for repatriation of specific workloads to maintain compliance with both sets of regulations – especially within highly regulated industries or for sensitive information such as HR or financial data. ... Nearly a third of respondents say cybersecurity specialists are the most difficult roles to hire or retain. Some mid-market organizations may lack the in-house skills to configure and manage cybersecurity in public cloud environments or even understand their default settings. 


A guide to de-risking enterprise-wide financial transformation

Distilling the lessons from these large-scale initiatives, a clear blueprint emerges for leaders embarking on their own transformation journeys:

- Define a data-driven vision: A successful transformation begins with a clear vision for how data will function as a strategic asset. The goal should be to create a single source of truth that is granular, accessible and enables a shift from reactive reporting to proactive analysis.
- Lead with process, not technology: Technology is an enabler, not the solution itself. Invest heavily in understanding and harmonizing end-to-end business processes before a single line of code is written. This effort is the foundation for a sustainable, low-customization system.
- De-risk with a phased, modular approach: Avoid the “big bang.” Break the program into logical phases, delivering tangible business value at each step. This builds momentum, facilitates organizational learning and significantly reduces the risk of catastrophic failure.
- Prioritize the user experience: Even the most powerful system will fail if it is not adopted. Engage end users throughout the design and implementation process. Build intuitive tools, like the FIRST microsite, and invest in robust training and change management to drive adoption and proficiency.

... Such forums are critical for breaking down silos and ensuring the end-to-end process is optimized.  ... Transforming the financial core of a global technology leader is not merely a technical undertaking; it is a strategic imperative for enabling scale, agility and insight.


5 things IT managers get wrong about upskilling tech teams

One of the most pervasive issues in IT upskilling is what Patrice Williams-Lindo, CEO at career coaching service Career Nomad, called the “training-and-forgetting” approach. “Many managers send teams to training without any plan for application,” she said. “Employees return to overloaded sprints” with no guidance on how to incorporate what they’ve learned. Without application in their work, “new skills atrophy fast.” This problem is rooted in basic learning science.  ... Another major pitfall is the overemphasis on certifications as proof of capability. Managers often assume that a certification is going to solve a problem without considering whether it fits the day-to-day job, said Tim Beerman, CTO at managed service provider Ensono. What’s more, certification alone doesn’t equal real-world capability and doesn’t necessarily indicate that a person is competent, according to CGS’ Stephen. While a certification shows that someone has the capability to obtain learned knowledge, he said, it doesn’t guarantee practical application skills. ... Many IT managers fall into the trap of pursuing trendy technologies without connecting them to actual business needs. Williams-Lindo warned that focusing on hype skills without business alignment backfires. While AI, cloud, and blockchain sound strategic, she said, if they aren’t tied to current or near-future business objectives, teams will spend time learning irrelevant tools while core needs are ignored.


Gen AI risks are getting clearer. How much would you pay for digital trust?

“As AI becomes more pervasive and kind of invades various dimensions of our lives and our work, how we interact with it and how safe and trustworthy it is, has become paramount,” said Dan Hays ... What do trust and safety issues look like when it comes to AI agents in customer interactions? Hays gave several examples: Should AI agents remember everything that a particular customer says to them, or should they “forget” interactions, particularly as years or decades pass? The memory capabilities of bots also relate to a broader question: what parameters should be placed on how AI agents are allowed to interact with customers? ... “As organizations across nearly all industries dive head-first into AI and digital transformations, they’re running into new risks that could undermine the trust they’ve built with consumers. Right now, many don’t have the guardrails or experience to handle these evolving threats — and the ripple effects are being felt across entire companies and industries,” the PwC report said. However, it seems that people who can afford it are willing to pay for digital environments and services that they can trust — much like subscribers to paywalled content sites can generally trust what they are getting, while those looking for free news might end up reading information that is garbled or deliberately twisted with the help of AI.


Object Storage: The Last Line of Defense Against Ransomware

Object storage provides intrinsic immutability advantages: unlike file systems, which are designed to allow direct in-place modification, it offers no “edit in place” functionality. Rather than the access patterns of traditional file or block storage, object storage interacts through “get and put” APIs, which means malware and ransomware actors have to attempt to write new objects (or overwrite modified objects) via the API to the object store. ... As ransomware continues to evolve, organizations must design storage strategies that protect at every level. Cyber resilience in the storage layer involves a layered defense that spans architecture, APIs, and operational practices. ... A successful data center attack not only disrupts service but also undermines the partner’s reputation for reliability. Technology partners must demonstrate their infrastructure can isolate tenants, withstand attacks, and deliver continuous availability even in adverse conditions. In both cases, cyber-resilient storage is no longer optional. ... Business continuity leaders should prioritize S3-compatible object storage with ransomware-proof capabilities such as object locking, versioning, and multi-layered access controls. Just as importantly, they should evaluate whether their current storage platforms deliver end-to-end cyber resilience that spans both technology and process.
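To make the “get and put” semantics concrete, here is a minimal in-memory sketch (not any vendor's actual API) of how versioning and object locking defeat in-place tampering: every put appends a new immutable version, and deletes are refused while a retention lock is active. The `ObjectStore` class and its method names are illustrative assumptions, not a real S3 client.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ObjectStore:
    """Toy sketch of versioned, lockable object storage (WORM-style)."""
    _versions: dict = field(default_factory=dict)  # key -> list of payloads
    _locks: dict = field(default_factory=dict)     # key -> retain-until epoch

    def put(self, key: str, data: bytes, retain_seconds: int = 0) -> int:
        # Writes never edit in place: each put appends a new immutable version.
        self._versions.setdefault(key, []).append(data)
        if retain_seconds:
            self._locks[key] = time.time() + retain_seconds
        return len(self._versions[key]) - 1  # version id of what we just wrote

    def get(self, key: str, version: int = -1) -> bytes:
        # Default returns the latest version; older versions stay retrievable.
        return self._versions[key][version]

    def delete(self, key: str) -> None:
        # Object lock: deletes are refused until the retention window expires.
        if time.time() < self._locks.get(key, 0):
            raise PermissionError(f"{key} is under retention lock")
        self._versions.pop(key, None)
```

Under this model, a ransomware “overwrite” merely adds a new encrypted version on top; the clean original remains recoverable at version 0, and the lock blocks deletion of the history. Production systems implement the same idea with S3-compatible object locking and bucket versioning.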


Time to Embrace Offensive Security for True Resilience

Offensive engagements utilize an attacker mindset to focus on truly exploitable weaknesses, weeding out the noise of unprioritized lists of vulnerabilities. Through remediation of high-impact findings, organizations prevent spreading resources over low-impact issues. Additionally, offloading sophisticated simulations to specialized teams or utilizing automated penetration testing speeds testing cycles and maximizes security investments. Essentially, each dollar invested in offensive testing can pre-empt multiples of breach response, legal penalties, lost productivity, and reputational loss. Successful security testing takes more than shallow scans; it needs fully immersed, real-world simulations that mimic the methods employed by actual threat actors to test your systems. Below is an overview of the most effective methods: ... Red teaming exercises go beyond standard testing by simulating skilled threat actors with secretive, multi-step attack scenarios. These exercises check not just technical weaknesses but also the organization’s ability to detect, respond to, and recover from real security breaches. Red teams often use methods like social engineering, lateral movement, and privilege escalation to test incident response teams. This uncovers flaws in technology and human procedures during realistic attack simulations.


7 Enterprise Architecture Best Practices for 2025

The foundational principle of effective enterprise architecture is its direct and unbreakable link to business strategy. This alignment ensures that every technological decision, architectural blueprint, and IT investment serves a clear business purpose. It transforms the EA function from a cost center focused on technical standards into a strategic partner that drives business value, innovation, and competitive advantage. ... Adopting a framework establishes a shared understanding among stakeholders, from IT teams to business leaders. It provides a standardized set of tools, templates, and terminologies, which reduces ambiguity and improves communication. This structured approach is fundamental to creating a holistic and integrated view of the enterprise, allowing architects to manage complexity, mitigate risks, and align technology initiatives with strategic goals in a systematic way. ... While a strong strategy provides the direction for enterprise architecture, robust governance provides the necessary guardrails and decision-making framework to keep it on track. EA governance establishes the processes, standards, and controls that ensure architectural decisions align with business objectives and are implemented consistently across the organization. It transforms architecture from a set of recommendations into an enforceable, value-driven discipline. 


Why Cloud Repatriation is Critical Post-VMware Exit

What began as a tactical necessity evolved into an expensive operational habit, with monthly bills that continue climbing without corresponding business value. The rush to cloud often bypassed careful workload assessment, resulting in applications running in expensive public cloud environments that would be more cost-effective on-premises. ... Equally important, the technology landscape has evolved since the initial cloud migration wave. We now have universal infrastructure-wide operating platforms that deliver cloud-like experiences on-premises, eliminating the operational gaps that initially drove workloads to public cloud. Combined with universal migration capabilities that can move workloads seamlessly from any source—whether VMware, other hypervisors, or major cloud providers—organizations finally have the tools needed to make cloud repatriation both technically feasible and economically compelling. ... The forced VMware migration creates the perfect opportunity to reassess the entire infrastructure portfolio holistically rather than making isolated platform decisions. ... This infrastructure reset enables IT teams to ask fundamental questions that operational inertia prevents: Which workloads benefit from cloud deployment? What applications could run more affordably on modern on-premises infrastructure? How can we optimize our total infrastructure spend across both on-premises and cloud environments?


4 Ways AI Revolutionizes Modern Cybersecurity Strategy

AI's true value doesn't lie in marketing promises, but in concrete results, such as reducing false positives, cutting detection time, and lowering operational costs. These are documented results from organizations that have implemented AI-human collaboration models balancing automation with expert judgment. This capability significantly exceeds the efficiency of human security teams, fundamentally transforming threat detection and response. Imagine a zero-day exploit detected and contained within minutes, not days, drastically reducing the window of vulnerability. ... Accelerating the transformation of legacy code represents one of the most impactful ways organizations are using AI to mitigate vulnerabilities. Legacy code accounts for a staggering 70% of identified vulnerabilities, but manually overhauling monolithic code bases is rarely feasible. Security teams know these vulnerabilities exist, but often lack the resources to address them. ... Manual SBOM creation cannot scale, not even for a 10-person startup. DevSecOps teams already stretched thin can't reasonably be expected to monitor the thousands of components in modern software stacks. Any sustainable approach to SBOM management for software-producing organizations must necessarily include automation. ... Compliance remains one of security's greatest frictions.
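To illustrate what automated SBOM management can look like in practice, here is a small sketch that ingests a CycloneDX-style JSON SBOM and flags components whose package URLs (purls) appear in a known-vulnerable set. The `summarize_sbom` function and the vulnerable-purl set are illustrative assumptions; real pipelines would generate the SBOM with a build-time tool and check it against live vulnerability feeds.

```python
import json

def summarize_sbom(sbom_json: str, vulnerable_purls: set) -> dict:
    """Count components in a CycloneDX-style SBOM and flag any whose
    package URL (purl) appears in a known-vulnerable set."""
    bom = json.loads(sbom_json)
    components = bom.get("components", [])
    flagged = [c["purl"] for c in components if c.get("purl") in vulnerable_purls]
    return {"total": len(components), "flagged": flagged}

# Example: a two-component SBOM with one known-vulnerable dependency.
sbom = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
        {"type": "library", "name": "requests", "version": "2.31.0",
         "purl": "pkg:pypi/requests@2.31.0"},
    ],
})
report = summarize_sbom(
    sbom, {"pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"}
)
```

Running a check like this on every build is cheap; doing the same inventory by hand across thousands of components is exactly the unscalable manual work the article describes.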