
Daily Tech Digest - February 27, 2026


Quote for the day:

"The best leaders build teams that don’t rely on them. That’s true excellence." -- Gordon Tredgold



Ransomware groups switch to stealthy attacks and long-term access

“Ransomware groups no longer treat vulnerabilities as isolated entry points,” says Aviral Verma, lead threat intelligence analyst at penetration testing and cybersecurity services firm Securin. “They assemble them into deliberate exploitation chains, selecting weaknesses not just for severity, but for how effectively they can collapse trust, persistence, and operational control across entire platforms.” AI is now widely accessible to threat actors, but it primarily functions as a force multiplier rather than a driving force in ransomware attacks. ... Vasileios Mourtzinos, a member of the threat team at managed detection and response firm Quorum Cyber, says that more groups are moving away from high-impact encryption towards extortion-led models that prioritize data theft and prolonged, low-noise access. “This approach, popularized by actors such as Cl0p through large-scale exploitation of third-party and supply chain vulnerabilities, is now being mirrored more widely, alongside increased abuse of valid accounts, legitimate administrative tools to blend into normal activity, and in some cases attempts to recruit or incentivize insiders to facilitate access,” Mourtzinos says. ... “For CISOs, the priority should be strengthening identity controls, closely monitoring trusted applications and third-party integrations, and ensuring detection strategies focus on persistence and data exfiltration activity,” Mourtzinos advises.


Expert Maps Identity Risk and Multi-Cloud Complexity to Evolving Cloud Threats

Cavalancia began by noting that cloud adoption has fundamentally altered traditional security boundaries. With 88 percent of organizations now operating in hybrid or multi-cloud environments, the hardened network edge is no longer the primary control point. Instead, identity and privilege determine access across distributed systems. ... Discussing identity risk specifically, he underscored how central privilege is to modern attacks, saying, "If you don't have identity, you don't have privilege, and if you don't have privilege, you don't have a threat." Excessive permissions and credential abuse create privilege escalation paths once access is obtained. ... Reducing exploitable attack paths requires prioritizing risk based on business impact. Rather than attempting to address every vulnerability equally, organizations should identify which exposures would cause the greatest operational or financial harm and focus there first. ... Looking ahead, Cavalancia argued that security must be built around continuous monitoring and identity-first principles. "Continuous monitoring, continuous validation, continuous improvement, maybe we should just have the word continuous here," he said. He also cautioned that AI-assisted attacks are already influencing the threat landscape, noting that "90% of the decisions being made by that attack were done solely by AI, no human intervention whatsoever."


Data Centers in Space: Pi in the Sky or AI Hallucination?

Space is a great place for data centers because it solves one of the biggest problems with locating data centers on Earth: power, argues Google’s Senior Director of Paradigms of Intelligence, Travis Beals. ... SpaceX is also on board with the idea of data centers in space. Last month, it filed a request with the Federal Communications Commission to launch a constellation of up to one million solar-powered satellites that it said will serve as data centers for artificial intelligence. ... “Data centers in space can access solar power 24/7 in certain ‘sun-synchronous’ orbits, giving them all the power they need to operate without putting immense strain on power grids here on Earth,” Scherer told TechNewsWorld. “This would alleviate concerns about consumers having to bear the costs of higher energy use.” “There is also less risk of running out of real estate in space, no complex permitting requirements, and no community pushback to new data centers being built in people’s backyards,” he added. ... “By some estimates, energy and land costs are only around 25% of the total cost for a data center,” Yoon told TechNewsWorld. “AI hardware is the real cost driver, and shifting to space only makes that hardware more expensive.” “Hardware cannot be repaired or upgraded at scale in space,” he explained. “Maintaining satellites is extremely hard, especially if you have hundreds of thousands of them. Maintaining a traditional data center is extremely easy.”


Centralized Security Can't Scale. It's Time to Embrace Federation

In a federated model, the organization recognizes that technology leaders across security, IT, and engineering have a deep understanding of the nuances of their assigned units. Their specialized knowledge helps them set strategies that match the goals, technologies, workflows, and risks they manage. That in turn leads to benefits that a centralized security authority can't touch. To start with, security decisions happen faster when the people making them are closer to the action. Service and application owners already have the context and expertise to make the right calls based on their scopes. Delegated authority allows companies to seize market opportunities faster, deploy new tools more easily, manage fewer escalations, and reduce friction and delays. ... In practice, that might look like a CISO setting data classification standards, while partner teams take responsibility for implementing these standards via low-friction policies and capabilities at the source of record for the data. Netflix's security team figured this out early. Their "Paved Roads" philosophy offers a collection of secure options that meet corporate guidelines while being the easiest for developers to use. In other words, less saying no, more offering a secure path forward. Outside of engineering, organization-wide standards also need to provide flexibility and avoid becoming overly specific or too narrow.


Linux explores new way of authenticating developers and their code - here's how it works

Today, kernel maintainers who want a kernel.org account must find someone already in the PGP web of trust, meet them face‑to‑face, show government ID, and get their key signed. ... the kernel maintainers are working to replace this fragile PGP key‑signing web of trust with a decentralized, privacy‑preserving identity layer that can vouch for both developers and the code they sign. ... Linux ID is meant to give the kernel community a more flexible way to prove who people are, and who they're not, without falling back on brittle key‑signing parties or ad‑hoc video calls. ... At the core of Linux ID is a set of cryptographic "proofs of personhood" built on modern digital identity standards rather than traditional PGP key signing. Instead of a single monolithic web of trust, the system issues and exchanges personhood credentials and verifiable credentials that assert things like "this person is a real individual," "this person is employed by company X," or "this Linux maintainer has met this person and recognized them as a kernel maintainer." ... Technically, Linux ID is built around decentralized identifiers (DIDs). This is a W3C‑style mechanism for creating globally unique IDs and attaching public keys and service endpoints to them. Developers create DIDs, potentially using existing Curve25519‑based keys from today's PGP world, and publish DID documents via secure channels such as HTTPS‑based "did:web" endpoints that expose their public key infrastructure and where to send encrypted messages.
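
To make the did:web mechanism more concrete, here is a minimal, hypothetical sketch of what a DID document served from a developer's domain could look like. The exact schema, key types, and service endpoints Linux ID will use are assumptions here, following general W3C DID conventions rather than any published Linux ID specification.

```python
# Illustrative only: a minimal did:web-style DID document assembled in Python.
# The exact schema Linux ID will adopt is not specified in the article; the
# field names below follow general W3C DID Core conventions.
import json

def build_did_document(domain: str, key_id: str, public_key_multibase: str) -> dict:
    """Build a minimal DID document for a did:web identifier."""
    did = f"did:web:{domain}"
    return {
        "@context": ["https://www.w3.org/ns/did/v1"],
        "id": did,
        # Public key the developer controls -- possibly derived from an
        # existing Curve25519 key pair, as the article suggests may be allowed.
        "verificationMethod": [{
            "id": f"{did}#{key_id}",
            "type": "Multikey",
            "controller": did,
            "publicKeyMultibase": public_key_multibase,
        }],
        # Hypothetical service endpoint describing where to send encrypted messages.
        "service": [{
            "id": f"{did}#messaging",
            "type": "EncryptedMessaging",
            "serviceEndpoint": f"https://{domain}/.well-known/messages",
        }],
    }

if __name__ == "__main__":
    doc = build_did_document("kernel-dev.example.org", "key-1", "z6Mk...placeholder")
    # A did:web document is conventionally served over HTTPS at
    # https://<domain>/.well-known/did.json
    print(json.dumps(doc, indent=2))
```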


IT hiring is under relentless pressure. Here's how leaders are responding

The CIO's relationship with the chief human resources officer (CHRO) matters greatly, though historically, they've viewed recruitment through different lenses. HR professionals tend not to be technologists, so their approach to hiring tends to be generic. Conversely, IT leaders aren't HR professionals. Many of them were promoted to management or executive roles for their expert technical skills, not their managerial or people skills. ... The multigenerational workforce can be frustrating for everyone at times, simply because employees' lives and work experiences can be so different. While no demographic group is homogeneous, at a 30,000-foot view, Gen Z wants to work on interesting and innovative projects -- things that matter on a greater scale, such as climate change. They also expect more rapid advancement than previous generations -- being promoted to a management role after a year or two rather than five or seven years, for example. ... Most organizational leaders will tell you their companies have great cultures, but not all their employees would likely agree. Cultural decisions made behind closed doors by a few for the many tend to fail because too many assumptions are made, and not enough hypotheses tested. "Seeing how your job helps the company move forward has been a point of opacity for a long time, and after a certain point, it's like, 'Why am I still here?'" Skillsoft's Daly said.


Generative AI has ushered in a new era of fraud, say reports from Plaid, SEON

“Generative AI has lowered the barrier to creating fake personas, falsifying documents, and impersonating real people at scale,” says a new report from Plaid, “Rethinking fraud in the AI era.” “As a result, fraud losses are projected to reach $40 billion globally within the next few years, driven in large part by AI-enabled attacks.” The warning is familiar. What’s different about Plaid’s approach to the problem is “network insights” – “each person’s unique behavioral footprint across the broader financial and app ecosystem,” understood as a system of relationships and long-standing patterns. In these combined signals, the company says, can be found “a resilient, high-signal lens into intent, risk and legitimacy.” ... “The industry is overdue for its next wave of fraud-fighting innovation,” the report says. “The question is not whether change is needed, but what unique combination of data, insights, and analytics can meet this moment.” The AI era needs its weapon of choice, and it needs to work continuously. “AI driven fraud is exposing the limits of identity controls that were designed for point in time verification rather than continuous assurance,” says Sam Abadir, research director for risk, financial (crime & compliance) at IDC, as quoted in the Plaid report. ... The overarching message is that “AI is real, embedded and widely trusted, but it has not materially reduced the scope of fraud and AML operations.” Fraud continues to scale, enabled by the same AI boom.


The hidden cost of AI adoption: Why most companies overestimate readiness

Walk into enough leadership meetings and you’ll hear the same story told with different accents: “We need AI.” It shows up in board decks, annual strategy documents and that one slide with a hockey-stick curve that magically turns pilot into profit. ... When I talk about the hidden cost of AI adoption, I’m not talking about model pricing or vendor fees. Those are visible and negotiable. The real cost lives in the messy middle: data foundations, integration work, operating model changes, governance, security, compliance and the ongoing effort required to keep AI useful after the demo fades. ... If I had to summarize AI readiness in one sentence, it would be this: AI readiness is your organization’s ability to repeatedly take a business problem, turn it into a well-defined decision or workflow, feed it trustworthy data and ship a solution you can monitor, audit and improve. ... Having data is not the same as having usable data. AI systems amplify quality problems at scale. Until proven otherwise, “we already have the data” usually means duplicated records, inconsistent definitions, missing fields, sensitive data in the wrong places and unclear ownership. ... If it adds friction or produces unreliable outputs, adoption collapses fast. Vendor risk doesn’t disappear either. Pricing changes. Usage spikes. Workflows become coupled to tools you don’t fully control. Without internal ownership, you’re not building capability, you’re renting it.


Overcoming Security Challenges in Remote Energy Operations

The security landscape for remote facilities has shifted "dramatically," and energy providers can no longer rely on isolation for protection, said Nir Ayalon, founder and CEO of Cydome, a maritime and critical infrastructure cybersecurity firm. "These sites are just as exposed as a corporate office - but with far more complex operational challenges," Ayalon said. ... A recent PES Wind report by Cyber Energia found that only 1% of 11,000 wind assets worldwide have adequate cyber protection, while U.K.-based renewable assets face up to 1,000 attempted cyberattacks daily. Trustwave SpiderLabs also reported an 80% rise in ransomware attacks on energy and utilities in 2025, with average costs exceeding $5 million. Ransomware is the most common form of attack. ... Protecting offshore facilities is also costly and a major challenge. Sending a technician for on-site installation can run up to $200,000, including vessel rental. Ayalon said most sites lack specialized IT staff. The person managing the hardware is usually an operator or engineer and not necessarily a certified cybersecurity professional. Limited space for racks and equipment, as well as poor bandwidth, poses major challenges, said Rick Kaun, global director of cybersecurity services at Rockwell Automation. ... Designing secure offshore energy systems and shipping vessels is no longer a choice but a necessity. Cybersecurity can't be an afterthought, said Guy Platten, secretary general of the International Chamber of Shipping.


How the CISO’s Role is Evolving From Technologist to Chief Educator

Regardless of structure, modern CISOs are embedded in executive decision-making, legal strategy and supply chain oversight. Their responsibilities have expanded from managing technical defenses to maintaining dynamic risk portfolios, where trade-offs must be weighed across business functions. Stakeholders now include regulators, customers and strategic partners, not just internal IT teams. ... Effective leaders accumulate knowledge and know when to go deep and when to delegate, ensuring subject-matter experts are empowered while key decisions remain aligned to business outcomes. This blend of technical insight and strategic judgment defines the CISO’s value in complex environments. ... As security becomes more embedded in daily operations, cultural leadership plays a defining role in long-term resilience. A positive cybersecurity culture is proactive and free from blame, creating an environment where employees feel safe to speak up and suggest improvements without fear of repercussions. This shift leads to earlier detection, better mitigation and stronger overall security posture. Teams asking for security input during the design phase and employees self-reporting suspicious activity signal a mature culture that understands protection is everyone’s job. ... The modern CISO operates at the intersection of technology, risk, leadership and influence. Leaders must navigate shifting business priorities and complex stakeholder relationships while building a strong security culture across the enterprise.

Daily Tech Digest - February 07, 2026


Quote for the day:

"Success in almost any field depends more on energy and drive than it does on intelligence. This explains why we have so many stupid leaders." -- Sloan Wilson



Tiny AI: The new oxymoron in town? Not really!

Could SLMs and miniaturised models be the drink that would make today's AI small enough to walk through these future doors without AI bumping into carbon-footprint issues? Would model compression tools like pruning, quantisation, and knowledge distillation help to lift some weight off the shoulders of heavy AI backyards? Lightweight models, edge devices that save compute resources, smaller algorithms that do not put huge stress on AI infrastructures, and AI that is thin on computational complexity: Tiny AI, as an AI creation and adoption approach, sounds unusual and promising at the outset. ... hardware innovations and new approaches to modelling that enable Tiny AI can significantly ease the compute and environmental burdens of large-scale AI infrastructures, avers Biswajeet Mahapatra, principal analyst at Forrester. "Specialised hardware like AI accelerators, neuromorphic chips, and edge-optimised processors reduces energy consumption by performing inference locally rather than relying on massive cloud-based models. At the same time, techniques such as model pruning, quantisation, knowledge distillation, and efficient architectures like transformers-lite allow smaller models to deliver high accuracy with far fewer parameters." ... Tiny AI models run directly on edge devices, enabling fast, local decision-making by operating on narrowly optimised datasets and sending only relevant, aggregated insights upstream, Acharya spells out.
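
As a concrete illustration of one of the compression techniques named above, the sketch below applies post-training dynamic quantization with PyTorch; the toy model, layer choice, and data type are assumptions for demonstration, not anything from the article.

```python
# A minimal sketch of post-training dynamic quantization in PyTorch.
# The tiny stand-in model is an assumption; in practice this would be a
# trained network whose Linear layers get converted to int8 weights.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Convert Linear layers to dynamically quantized int8 versions: a smaller
# model that typically runs faster on CPU, at a small accuracy cost.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 10])
```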


Kali Linux vs. Parrot OS: Which security-forward distro is right for you?

The first thing you should know is that Kali Linux is based on Debian, which means it has access to the standard Debian repositories, which include a wealth of installable applications. ... There are also the 600+ preinstalled applications, most of which are geared toward information gathering, vulnerability analysis, wireless attacks, web application testing, and more. Many of those applications include industry-specific modifications, such as those for computer forensics, reverse engineering, and vulnerability detection. And then there are the two modes: Forensics Mode for investigation and "Kali Undercover," which blends the OS with Windows. ... Parrot OS (aka Parrot Security or just Parrot) is another popular pentesting Linux distribution that operates in a similar fashion. Parrot OS is also based on Debian and is designed for security experts, developers, and users who prioritize privacy. It's that last bit you should pay attention to. Yes, Parrot OS includes a similar collection of tools as does Kali Linux, but it also offers apps to protect your online privacy. To that end, Parrot is available in two editions: Security and Home. ... What I like about Parrot OS is that you have options. If you want to run tests on your network and/or systems, you can do that. If you want to learn more about cybersecurity, you can do that. If you want to use a general-purpose operating system that has added privacy features, you can do that.


Bridging the AI Readiness Gap: Practical Steps to Move from Exploration to Production

To bridge the gap between AI readiness and implementation, organizations can adopt the following practical framework, which draws from both enterprise experience and my ongoing doctoral research. The framework centers on four critical pillars: leadership alignment, data maturity, innovation culture, and change management. When addressed together, these pillars provide a strong foundation for sustainable and scalable AI adoption. ... This begins with a comprehensive, cross-functional assessment across the four pillars of readiness: leadership alignment, data maturity, innovation culture, and change management. The goal of this assessment is to identify internal gaps that may hinder scale and long-term impact. From there, companies should prioritize a small set of use cases that align with clearly defined business objectives and deliver measurable value. These early efforts should serve as structured pilots to test viability, refine processes, and build stakeholder confidence before scaling. Once priorities are established, organizations must develop an implementation road map that achieves the right balance of people, processes, and technology. This road map should define ownership, timelines, and integration strategies that embed AI into business workflows rather than treating it as a separate initiative. Technology alone will not deliver results; success depends on aligning AI with decision-making processes and ensuring that employees understand its value. 


Proxmox's best feature isn't virtualization; it's the backup system

Because backups are integrated into Proxmox instead of being bolted on as some third-party add-on, setting up and using backups is entirely seamless. Agents don't need to be configured per instance. No extra management is required, and no scripts need to be created to handle the running of snapshots and recovery. The best part about this approach is that it ensures everything will continue working with each OS update. Backups can also be viewed per instance, so it's easy to check how far you can go back and how many copies are available. The entire backup strategy within Proxmox is snapshot-based, leveraging localised storage when available. This allows Proxmox to create snapshots of not only running Linux containers, but also complex virtual machines. They're reliable, fast, and don't cause unnecessary downtime. But while they're powerful additions to a hypervisor-based configuration, the backups aren't difficult to use. This is key, since backups would be far less useful if they proved troublesome to use when it mattered most. These backups don't have to use local storage either. NFS, CIFS, and iSCSI can all be targeted as backup locations. ... It can also be a mixture of local storage and cloud services, something we recommend and push for with a 3-2-1 backup strategy. But it's one thing to use Proxmox's snapshots and built-in tools, and a whole different ball game with Proxmox Backup Server. With PBS, we've got deduplication, incremental backups, compression, encryption, and verification.


The Fintech Infrastructure Enabling AI-Powered Financial Services

AI is reshaping financial services faster than most realize. Machine learning models power credit decisions. Natural language processing handles customer service. Computer vision processes documents. But there’s a critical infrastructure layer that determines whether AI-powered financial platforms actually work for end users: payment infrastructure. The disconnect is striking. Fintech companies invest millions in AI capabilities, recommendation engines, fraud detection, personalization algorithms. ... From a technical standpoint, the integration happens via API. The platform exposes user balances and transaction authorization through standard REST endpoints. The card provider handles everything downstream: card issuance logistics, real-time currency conversion, payment network settlement, fraud detection at the transaction level, dispute resolution workflows. This architectural pattern enables fintech platforms to add payment functionality in 8-12 weeks rather than the 18-24 months required to build from scratch. ... The compliance layer operates transparently to end users while protecting platforms from liability. KYC verification happens at multiple checkpoints. AML monitoring runs continuously across transaction patterns. Reporting systems generate required documentation automatically. The platform gets payment functionality without becoming responsible for navigating payment regulations across dozens of jurisdictions.
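
A hypothetical sketch of the platform side of that API pattern is below: the fintech platform exposes a balance lookup and a transaction-authorization endpoint that the card provider can call at payment time. The framework, endpoint paths, and field names are all illustrative assumptions.

```python
# Hypothetical sketch of the platform-side REST endpoints described above.
# Endpoint paths, field names, and the in-memory "ledger" are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

BALANCES = {"user-123": 25_000}  # available balances in cents; a real ledger in practice

@app.get("/users/<user_id>/balance")
def get_balance(user_id):
    """Card provider (or another consumer) reads the user's available balance."""
    return jsonify({"user_id": user_id, "available": BALANCES.get(user_id, 0)})

@app.post("/authorizations")
def authorize_transaction():
    """Card provider asks the platform to approve or decline a transaction."""
    payload = request.get_json()
    user_id, amount = payload["user_id"], payload["amount"]
    approved = BALANCES.get(user_id, 0) >= amount
    if approved:
        BALANCES[user_id] -= amount  # place a hold on approval
    return jsonify({"approved": approved}), (200 if approved else 402)

if __name__ == "__main__":
    app.run(port=8080)
```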


Context Engineering for Coding Agents

Context engineering is relevant for all types of agents and LLM usage of course. My colleague Bharani Subramaniam's simple definition is: "Context engineering is curating what the model sees so that you get a better result." For coding agents, there is an emerging set of context engineering approaches and terms. The foundation is the configuration features offered by the tools, and the nitty-gritty part is how we conceptually use those features. ... One of the goals of context engineering is to balance the amount of context given - not too little, not too much. Even though context windows have technically gotten really big, that doesn't mean that it's a good idea to indiscriminately dump information in there. An agent's effectiveness goes down when it gets too much context, and too much context is a cost factor as well, of course. Some of this size management is up to the developer: How much context configuration we create, and how much text we put in there. My recommendation would be to build up context such as rules files gradually, and not pump too much stuff in there right from the start. ... As I said in the beginning, these features are just the foundation; humans still do the actual work of filling them with reasonable context. It takes quite a bit of time to build up a good setup, because you have to use a configuration for a while to be able to say if it's working well or not - there are no unit tests for context engineering. Therefore, people are keen to share good setups with each other.
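
One way to picture that balancing act is a small helper that packs the most important context snippets into a fixed token budget and skips the rest; the snippet structure and the rough token estimate below are assumptions, not part of any particular tool.

```python
# A rough sketch of balancing context size: pack the highest-priority snippets
# into a token budget and skip the rest. The 4-characters-per-token estimate
# and the snippet structure are assumptions, not tied to any specific tool.
from dataclasses import dataclass

@dataclass
class Snippet:
    name: str      # e.g. "rules.md", "architecture notes", "recent diff"
    text: str
    priority: int  # lower number = more important

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def assemble_context(snippets: list[Snippet], budget_tokens: int) -> str:
    """Greedily include the most important snippets without exceeding the budget."""
    chosen, used = [], 0
    for s in sorted(snippets, key=lambda s: s.priority):
        cost = estimate_tokens(s.text)
        if used + cost > budget_tokens:
            continue  # skip rather than indiscriminately dumping everything in
        chosen.append(f"## {s.name}\n{s.text}")
        used += cost
    return "\n\n".join(chosen)
```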


Reimagining The Way Organizations Hire Cyber Talent

The way we hire cybersecurity professionals is fundamentally flawed. Employers post unicorn job descriptions that combine three roles’ worth of responsibilities into one. Qualified candidates are filtered out by automated scans or rejected because their resumes don’t match unrealistic expectations. Interviews are rushed, mismatched, or even faked—literally, in some cases. On the other side, skilled professionals—many of whom are eager to work—find themselves lost in a sea of noise, unable to connect with the opportunities that align with their capabilities and career goals. Add in economic uncertainty, AI disruption and changing work preferences, and it’s clear the traditional hiring playbook simply isn’t working anymore. ... Part of fixing this broken system means rethinking what we expect from roles in the first place. Jones believes that instead of packing every security function into a single job description and hoping for a miracle, organizations should modularize their needs. Need a penetration tester for one month? A compliance SME for two weeks? A security architect to review your Zero Trust strategy? You shouldn’t have to hire full-time just to get those tasks done. ... Solving the cybersecurity workforce challenge won’t come from doubling down on job boards or resume filters. But organizations may be able to shift things in the right direction by reimagining the way they connect people to the work that matters—with clarity, flexibility and mutual trust.


News sites are locking out the Internet Archive to stop AI crawling. Is the ‘open web’ closing?

Publishers claim technology companies have accessed a lot of this content for free and without the consent of copyright owners. Some began taking tech companies to court, claiming they had stolen their intellectual property. High-profile examples include The New York Times’ case against ChatGPT’s parent company OpenAI and News Corp’s lawsuit against Perplexity AI. ... Publishers are also using technology to stop unwanted AI bots accessing their content, including the crawlers used by the Internet Archive to record internet history. News publishers have referred to the Internet Archive as a “back door” to their catalogues, allowing unscrupulous tech companies to continue scraping their content. ... The opposite approach – placing all commercial news behind paywalls – has its own problems. As news publishers move to subscription-only models, people have to juggle multiple expensive subscriptions or limit their news appetite. Otherwise, they’re left with whatever news remains online for free or is served up by social media algorithms. The result is a more closed, commercial internet. This isn’t the first time that the Internet Archive has been in the crosshairs of publishers, as the organisation was previously sued and found to be in breach of copyright through its Open Library project. ... Today’s websites become tomorrow’s historical records. Without the preservation efforts of not-for-profit organisations like The Internet Archive, we risk losing vital records.


Who will be the first CIO fired for AI agent havoc?

As CIOs deploy teams of agents that work together across the enterprise, there's a risk that one agent's error compounds itself as other agents act on the bad result, he says. "You have an endless loop they can't get out of," he adds. Many organizations have rushed to deploy AI agents because of the fear of missing out, or FOMO, Nadkarni says. But good governance of agents takes a thoughtful approach, he adds, and CIOs must consider all the risks as they assign agents to automate tasks previously done by human employees. ... Lawsuits and fines seem likely, and plaintiffs will not need new AI laws to file claims, says Robert Feldman, chief legal officer at database services provider EnterpriseDB. "If an AI agent causes financial loss or consumer harm, existing legal theories already apply," he says. "Regulators are also in a similar position. They can act as soon as AI drives decisions past the line of any form of compliance and safety threshold." ... CIOs will play a big role in figuring out the guardrails, he adds. "Once the legal action reaches the public domain, boards want answers to what happened and why," Feldman says. ... CIOs should be proactive about agent governance, Osler recommends. They should require proof for sensitive actions and make every action traceable. They can also put humans in the loop for sensitive agent tasks, design agents to hand off action when the situation is ambiguous or risky, and they can add friction to high-stakes agent actions and make it more difficult to trigger irreversible steps, he says.
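
A minimal sketch of those guardrail ideas, assuming a simple Python agent runtime: every requested action is logged for traceability, and actions on a high-stakes list cannot run without explicit human approval. The action names and approval hook are placeholders.

```python
# Sketch of basic agent guardrails: log every requested action for
# traceability and require human approval for high-stakes actions.
# The action names and the approval hook are placeholders.
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
HIGH_STAKES = {"wire_transfer", "delete_records", "change_prod_config"}

def require_human_approval(action: str, details: dict) -> bool:
    # Placeholder: in practice this would page an approver or open a ticket.
    answer = input(f"Approve {action} with {details}? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent_action(action: str, details: dict, execute: Callable[[dict], None]) -> None:
    logging.info("agent requested action=%s details=%s", action, details)  # audit trail
    if action in HIGH_STAKES and not require_human_approval(action, details):
        logging.warning("action %s blocked pending human approval", action)
        return
    execute(details)
    logging.info("action %s completed", action)
```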


Measuring What Matters: Balancing Data, Trust and Alignment for Developer Productivity

Organizations need to take steps over and above these frameworks. It's important to integrate those insights with qualitative feedback. With the right balance of quantitative and qualitative data insights, companies can improve DevEx, increase employee engagement, and drive overall growth. Productivity metrics can only be a game-changer if used carefully and in conjunction with a consultative human-based approach to improvement. They should be used to inform management decisions, not replace them. Metrics can paint a clear picture of efficiency, but only become truly useful once you combine them with a nuanced view of the subjective developer experience. ... People who feel safe at work are more productive and creative, so taking DevEx into account when optimizing processes and designing productivity frameworks includes establishing an environment where developers can flag unrealistic deadlines and identify and solve problems together, faster. Tools, including integrated development environments (IDEs), source code repositories and collaboration platforms, all help to identify the systemic bottlenecks that are disrupting teams' workflows and enable proactive action to reduce friction. Ultimately, this will help you build a better picture of how your team is performing against your KPIs, without resorting to micromanagement. Additionally, when company priorities are misaligned, confusion and complexity follow, which is exhausting for developers, who are forced to waste their energy on bridging the gaps, rather than delivering value.

Daily Tech Digest - February 04, 2026


Quote for the day:

"The struggle you're in today is developing the strength you need for tomorrow." -- Elizabeth McCormick



A deep technical dive into going fully passwordless in hybrid enterprise environments

Before we can talk about passwordless authentication, we need to address what I call the “prerequisite triangle”: cloud Kerberos trust, device registration and Conditional Access policies. Skip any one of these, and your migration will stall before it gains momentum. ... Once your prerequisites are in place, you face critical architectural decisions that will shape your deployment for years to come. The primary decision point is whether to use Windows Hello for Business, FIDO2 security keys or phone sign-in as your primary authentication mechanism. ... The architectural decision also includes determining how you handle legacy applications that still require passwords. Your options are limited: implement a passwordless-compatible application gateway, deprecate the application entirely or use Entra ID’s smart lockout and password protection features to reduce risk while you transition. ... Start with a pilot group — I recommend between 50 and 200 users who are willing to accept some friction in exchange for security improvements. This group should include IT staff and security-conscious users who can provide meaningful feedback without becoming frustrated with early-stage issues. ... Recovery mechanisms deserve special attention. What happens when a user’s device is stolen? What if the TPM fails? What if they forget their PIN and can’t reach your self-service portal? Document these scenarios and test them with your help desk before full rollout. 


When Cloud Outages Ripple Across the Internet

For consumers, these outages are often experienced as an inconvenience, such as being unable to order food, stream content, or access online services. For businesses, however, the impact is far more severe. When an airline’s booking system goes offline, lost availability translates directly into lost revenue, reputational damage, and operational disruption. These incidents highlight that cloud outages affect far more than compute or networking. One of the most critical and impactful areas is identity. When authentication and authorization are disrupted, the result is not just downtime; it is a core operational and security incident. ... Cloud providers are not identity systems. But modern identity architectures are deeply dependent on cloud-hosted infrastructure and shared services. Even when an authentication service itself remains functional, failures elsewhere in the dependency chain can render identity flows unusable. ... High availability is widely implemented and absolutely necessary, but it is often insufficient for identity systems. Most high-availability designs focus on regional failover: a primary deployment in one region with a secondary in another. If one region fails, traffic shifts to the backup. This approach breaks down when failures affect shared or global services. If identity systems in multiple regions depend on the same cloud control plane, DNS provider, or managed database service, regional failover provides little protection. In these scenarios, the backup system fails for the same reasons as the primary.


The Art of Lean Governance: Elevating Reconciliation to Primary Control for Data Risk

In today's environment of continuous data ecosystems, governance based on periodic inspection is misaligned with how data risk emerges. The central question for boards, regulators, auditors, and risk committees has shifted: Can the institution demonstrate at the moment data is used that it is accurate, complete, and controlled? Lean governance answers this question by elevating data reconciliation from a back-office cleanup activity to the primary control mechanism for data risk reduction. ... Data profiling can tell you that a value looks unusual within one system. It cannot tell you whether that value aligns with upstream sources, downstream consumers, or parallel representations elsewhere in the enterprise. ... Lean governance reframes governance as a continual process-control discipline rather than a documentation exercise. It borrows from established control theory: Quality is achieved by controlling the process, not by inspecting outputs after failures. Three principles define this approach: Data risk emerges continuously, not periodically; Controls must operate at the same cadence as data movement; and Reconciliation is the control that proves process integrity. ... Data profiling is inherently inward-looking. It evaluates distributions, ranges, patterns, and anomalies within a single dataset. This is useful for hygiene, but insufficient for assessing risk. Reconciliation is inherently relational. It validates consistency between systems, across transformations, and through the lifecycle of data.
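
As a toy illustration of reconciliation as a relational control, the sketch below compares the same keyed records across two systems and reports gaps and value mismatches; the record shapes and system names are assumptions.

```python
# Toy reconciliation control: compare keyed values across two systems and
# report gaps and mismatches. Record shapes and system names are assumptions.
def reconcile(source: dict[str, float], target: dict[str, float],
              tolerance: float = 0.0) -> dict:
    """Return keys missing from either side and keys whose values diverge."""
    return {
        "missing_in_target": sorted(set(source) - set(target)),
        "missing_in_source": sorted(set(target) - set(source)),
        "mismatched": [
            (key, source[key], target[key])
            for key in source.keys() & target.keys()
            if abs(source[key] - target[key]) > tolerance
        ],
    }

# Example: an upstream ledger reconciled against a downstream reporting store.
ledger = {"acct-1": 100.00, "acct-2": 250.50, "acct-3": 75.25}
reporting = {"acct-1": 100.00, "acct-2": 250.49}
print(reconcile(ledger, reporting, tolerance=0.005))
# -> acct-3 is missing downstream, and the acct-2 value divergence is flagged
```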


Working with Code Assistants: The Skeleton Architecture

Critical non-functional requirements - such as security, scalability, performance, and authentication - are system-wide invariants that cannot be fragmented. If every vertical slice is tasked with implementing its own authorization stack or caching strategy, the result is "Governance Drift": inconsistent security postures and massive code redundancy. This necessitates a new unifying concept: The Skeleton and The Tissue. ... The Stable Skeleton represents the rigid, immutable structures (Abstract Base Classes, Interfaces, Security Contexts) defined by the human although possibly built by the AI. The Vertical Tissue consists of the isolated, implementation-heavy features (Concrete Classes, Business Logic) generated by the AI. This architecture draws on two classical approaches: actor models and object-oriented inversion of control. It is no surprise that some of the world’s most reliable software is written in Erlang, which utilizes actor models to maintain system stability. Similarly, in inversion of control structures, the interaction between slices is managed by abstract base classes, ensuring that concrete implementation classes depend on stable abstractions rather than the other way around. ... Prompts are soft; architecture is hard. Consequently, the developer must monitor the agent with extreme vigilance. ... To make the "Director" role scalable, we must establish "Hard Guardrails" - constraints baked into the system that are physically difficult for the AI to bypass. These act as the immutable laws of the application.
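
A compact sketch of the Skeleton-and-Tissue split might look like the following, assuming Python: the human-owned skeleton is an abstract base class plus an immutable security context that enforces authorization for every slice, and the AI-generated tissue is a concrete slice that only supplies business logic. All class and method names are illustrative.

```python
# Sketch: the human-owned "skeleton" (immutable security context + abstract
# base class) enforces authorization for every slice; the AI-generated
# "tissue" only fills in business logic. Names here are illustrative.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityContext:              # skeleton: a system-wide invariant
    user_id: str
    roles: frozenset[str]

class FeatureSlice(ABC):            # skeleton: every vertical slice inherits this
    def run(self, ctx: SecurityContext, payload: dict) -> dict:
        if not self.required_roles() <= ctx.roles:
            raise PermissionError("authorization is enforced by the skeleton")
        return self.handle(ctx, payload)

    @abstractmethod
    def required_roles(self) -> frozenset[str]: ...

    @abstractmethod
    def handle(self, ctx: SecurityContext, payload: dict) -> dict: ...

class ExportReportSlice(FeatureSlice):   # tissue: implementation-heavy feature
    def required_roles(self) -> frozenset[str]:
        return frozenset({"analyst"})

    def handle(self, ctx: SecurityContext, payload: dict) -> dict:
        return {"report": f"export for {ctx.user_id}", "rows": payload.get("rows", 0)}

ctx = SecurityContext(user_id="u-42", roles=frozenset({"analyst"}))
print(ExportReportSlice().run(ctx, {"rows": 3}))
```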


8-Minute Access: AI Accelerates Breach of AWS Environment

A threat actor gained initial access to the environment via credentials discovered in public Simple Storage Service (S3) buckets and then quickly escalated privileges during the attack, which moved laterally across 19 unique AWS principals, the Sysdig Threat Research Team (TRT) revealed in a report published Tuesday. ... While the speed and apparent use of AI were among the most notable aspects of the attack, the researchers also called out the way that the attacker accessed exposed credentials as a cautionary tale for organizations with cloud environments. Indeed, stolen credentials are often an attacker's initial access point to attack a cloud environment. "Leaving access keys in public buckets is a huge mistake," the researchers wrote. "Organizations should prefer IAM roles instead, which use temporary credentials. If they really want to leverage IAM users with long-term credentials, they should secure them and implement a periodic rotation." Moreover, the affected S3 buckets were named using common AI tool naming conventions, they noted. The attackers actively searched for these conventions during reconnaissance, enabling them to find the credentials quite easily, they said. ... During this privilege-escalation part of the attack — which took a mere eight minutes — the actor wrote code in Serbian, suggesting their origin. Moreover, the use of comments, comprehensive exception handling, and the speed at which the script was written "strongly suggests LLM generation," the researchers wrote.
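
The researchers' recommendation to prefer IAM roles with temporary credentials can be sketched roughly as follows with boto3: the application assumes a role via STS and works with short-lived credentials instead of long-term access keys that could end up in a public bucket. The role ARN, session name, and bucket are placeholders.

```python
# Sketch of the recommended pattern: assume an IAM role and use short-lived
# STS credentials instead of long-term access keys. The role ARN, session
# name, and bucket are placeholders.
import boto3

def temporary_s3_client(role_arn: str):
    """Return an S3 client backed by temporary credentials from an assumed role."""
    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="short-lived-session",
        DurationSeconds=3600,  # credentials expire automatically, unlike IAM user keys
    )
    creds = resp["Credentials"]
    return boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

# Usage (placeholder ARN and bucket):
# s3 = temporary_s3_client("arn:aws:iam::123456789012:role/app-read-only")
# s3.list_objects_v2(Bucket="my-private-bucket")
```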


Ask the Experts: The cloud cost reckoning

According to the 2025 Azul CIO Cloud Trends Survey & Report, 83% of the 300 CIOs surveyed are spending an average of 30% more than what they had anticipated for cloud infrastructure and applications; 43% said their CEOs or boards of directors had concerns about cloud spend. Moreover, 13% of surveyed CIOs said their infrastructure and application costs increased with their cloud deployments, and 7% said they saw no savings at all. Other surveys show CIOs are rethinking their cloud strategies, with "repatriation" -- moving workloads from the cloud back to on-premises -- emerging as a viable option due to mounting costs. ... "At Laserfiche we still have a hybrid environment. So we still have a colocation facility, where we house a lot of our compute equipment. And of course, because of that, we need a DR site because you never want to put all your eggs in that one colo. We also have a lot of SaaS services. We're in a hyperscaler environment for Laserfiche cloud. "But the reason why we do both is because it actually costs us less money to run our own compute in a data center colo environment than it does to be all in on cloud." ... "The primary reason why the [cloud] costs have been increasing is because our use of cloud services has become much more sophisticated and much more integrated. "But another reason cloud consumption has increased is we're not as diligent in managing our cloud resources in provisioning and maintaining."


NIST develops playbook for online use cases of digital credentials in financial services

The objective is to develop what a panel description calls a “playbook of standards and best practices that all parties can use to set a high bar for privacy and security.” “We really wanted to be able to understand, what does it actually take for an organization to implement this stuff? How does it fit into workflows? And then start to think as well about what are the benefits to these organizations and to individuals.” “The question became, what was the best online use case?” Galuzzo says. “At which point our colleagues in Treasury kind of said, hey, our online banking customer identification program, how do we make that both more usable and more secure at the same time? And it seemed like a really nice fit. So that brought us to both the kind of scope of what we’re focused on, those online components, and the specific use case of financial services as well.” ... The model, he says, “should allow you to engage remotely, to not have to worry about showing up in person to your closest branch, should allow for a reduction in human error from our side and should allow for reduction in fraud and concern over forged documents.” It should also serve to fulfil the bank’s KYC and related compliance requirements. Beyond the bank, the major objective with mDLs remains getting people to use them. The AAMVA’s Maru points to his agency’s digital trust service, and to its efforts in outreach and education – which are as important in driving adoption as anything on the technical side. 


Designing for the unknown: How flexibility is reshaping data center design

Rapid advances in compute architectures – particularly GPUs and AI-oriented systems – are compressing technology cycles faster than many design and delivery processes can respond. In response, flexibility has shifted from a desirable feature to the core principle of successful data center design. This evolution is reshaping how we think about structure, power distribution, equipment procurement, spatial layout, and long-term operability. ... From a design perspective, this means planning for change across several layers: Structural systems that can accommodate higher equipment loads without reinforcement; Spatial layouts that allow reconfiguration of white space and service zones; and Distribution pathways that support future modifications without disrupting live operations. The objective is not to overbuild for every possible scenario, but to provide a framework that can absorb change efficiently and economically. ... Another emerging challenge is equipment lead time. While delivery periods vary by system, generators can now carry lead times approaching 12 months, particularly for higher capacities, while other major infrastructure components – including transformers, UPS modules, and switchgear – typically fall within the 30- to 40-week range. Delays in securing these items can introduce significant risk when procurement decisions are deferred until late in the design cycle.


Onboarding new AI hires calls for context engineering - here's your 3-step action plan

In the AI world, the institutional knowledge is called context. AI agents are the new rockstar employees. You can onboard them in minutes, not months. And the more context that you can provide them with, the better they can perform. Now, when you hear reports that AI agents perform better when they have accurate data, think more broadly than customer data. The data that AI needs to do the job effectively also includes the data that describes the institutional knowledge: context. ... Your employees are good at interpreting it and filling in the gaps using their judgment and applying institutional knowledge. AI agents can now parse unstructured data, but are not as good at applying judgment when there are conflicts, nuances, ambiguity, or omissions. This is why we get hallucinations. ... The process maps provide visibility into manual activities between applications or within applications. The accuracy and completeness of the documented process diagrams vary wildly. Front-office processes are generally very poor. Back-office processes in regulated industries are typically very good. And to exploit the power of AI agents, organizations need to streamline them and optimize their business processes. This has sparked a process reengineering revolution that mirrors the one in the 1990s. This time around, the level of detail required by AI agents is higher than for humans.


Q&A: How Can Trust be Built in Open Source Security?

The security industry has already seen examples in 2025 of bad actors deploying AI in cyberattacks – I’m concerned that 2026 could bring a Heartbleed- or Log4Shell-style incident involving AI. The pace at which these tools operate may outstrip the ability of defenders to keep up in real time. Another focus for the year ahead: how the Cyber Resilience Act (CRA) will begin to reshape global compliance expectations. Starting in September 2026, manufacturers and open source maintainers must report exploited vulnerabilities and breaches to the EU. This is another step closer to CRA enforcement, and other countries like Japan, India and Korea are exploring similar legislation. ... The human side of security should really be addressed just as urgently as the technical side. The way forward involves education, tooling and cultural change. Resilient human defences start with education. Courses from the Linux Foundation like Developing Secure Software and Secure AI/ML‑Driven Software Development equip users with the mindset and skills to make better decisions in an AI‑enhanced world. Beyond formal training, reinforcing awareness and creating a vigilant community is critical. The goal is to embed security into culture and processes so that it’s not easily overlooked when new technology or tools roll around. ... Maintainers and the community projects they lead are struggling without support from those that use their software.

Daily Tech Digest - December 29, 2025


Quote for the day:

"What great leaders have in common is that each truly knows his or her strengths - and can call on the right strength at the right time." -- Tom Rath


Beyond automation: Physical AI ushers in a new era of smart machines

“Physical AI has reached a critical inflection point where technical readiness aligns with market demand,” said James Davidson, chief artificial intelligence officer at Teradyne Robotics, a leader in advanced robotics solutions. “The market dynamics have shifted from skepticism to proof. Early adopters are reporting tangible efficiency and revenue gains, and we’ve entered what I’d characterize as the early-majority phase of adoption, where investment scales dramatically.” ... To train and prepare these models, a new specialized class of AI model emerged: World Foundation Models. WFMs serve two primary functions for robotics AI: They enable engineers to develop vast synthetic datasets rapidly to train robots on unseen actions, and they test these robots in virtual environments before real-world deployment. WFMs allow developers to create virtual training grounds that mimic reality through “digital twins” of environments. Within these simulated scenes, robots learn to navigate real-world challenges safely and at a pace far exceeding what physical presence would permit. ... Despite grabbing a lot of headlines, humanoid robots only represent a small fraction of AI robotics deployments. For now, it’s collaborative robots, robotic arms and autonomous mobile robots that are transforming warehouse and factory settings. The forefront example is Amazon.com Inc., which uses intelligent robots across its warehouses. 


When Digital Excellence Turns Into Strategic Technical Debt

Asian Paints' digital architecture was built for a world that valued scale, predictability and discipline. Its systems continuously optimize for efficiency, minimize variability and ensure consistency across thousands of dealers and SKUs. For nearly 20 years, these capabilities have directly contributed to better margins, improved service levels and increased shareholder confidence. But today's market is different. New entrants, backed by capital and "largely free from legacy" process constraints, are willing to accept inefficiencies to gain market share quickly. ... The result is a market that is more volatile, more tactical, and less patient. Additionally, new technology plays a vital role in creating a competitive edge. This is where the strategic technical debt surfaces. Unlike traditional technical debt, this isn't about outdated systems or underinvestment. ... The difference lies in architecture and intent. Newer players are born cloud-native, with a more modular approach, better governance and greater tolerance for experimentation. They use analytics and AI proactively to adjust incentives quickly, test local pricing strategies and pivot dealer engagement models in response to demand. Speed and flexibility matter more than optimization. ... Strategic technical debt accumulates because CIOs are rewarded for stability, uptime and optimization. Optionality, speed and the ability to unlearn don't appear on scorecards. Over time, this imbalance becomes part of the architecture and results in digital stress.


The Evolution of North Korea – And What To Expect In 2026

What has changed most notably through 2024 and 2025 is the shift away from “purely external intrusion” towards “abuse of legitimate access,” says Pontiroli. “Rather than breaking in, North Korean operators increasingly aim to be hired as remote IT workers inside real companies, gaining steady income, trusted network access, and the option to pivot into espionage, data theft, or follow on attacks.” ... The workers claim to be US based with IT experience, “but in reality, they are North Korean or proxied by North Korean networks,” he explains. Over time, the threat actors have developed deep expertise in software engineering, mobile applications, blockchain infrastructure, and cryptocurrency ecosystems says Tom Hegel, distinguished threat researcher, SentinelLABS. ... In parallel, cybersecurity researchers have observed related campaigns with distinct names and tradecraft. A malicious campaign dubbed Contagious Interview involves threat actors masquerading as recruiters or employers to lure job seekers, particularly in tech and cryptocurrency sectors, into fake interviews that deliver malware such as BeaverTail, InvisibleFerret, and variants such as OtterCookie, says Pontiroli. ... Today, fake worker schemes remain an “active and growing threat,” says Jack. KnowBe4 offers training to customers to combat this and strengthen their security culture, he says. Security leaders must assume that the hiring pipeline itself is part of the attack surface, says Hegel. 


Five Attack-Surface Management Trends to Watch in 2026

In 2026, regulators will anchor security and risk leaders’ approaches to exposure strategy. This will mean not only demonstrating due diligence during annual audits, but also demonstrating proof of resilience every day. Exposure management platforms that can map external assets against regulatory expectations; provide real-time compliance dashboards and metrics; and quantify benefits and exposures to boardrooms will become table stakes. ... Attackers see the enterprise as a single, unified attack surface, with each constituent part informing the next priority: cloud workloads, SaaS, subsidiaries, shadow IT, and third-party dependencies. In 2026, savvy security leaders will be adopting that same perspective. Point-in-time, penetration-test-style engagements and bug-bounty programs will give way to organizations that expect full-scope, attacker-centric discovery of digital asset footprints, as well as automated prioritization to cut through the noise.  ... In 2026, successful vendor choices will be those that strike a balance between consolidation and integration. Enterprises will demand more flexible integration into existing workflows, including third-party APIs and visibility into SIEM, SOAR, and GRC tools, as well as the ability to support hybrid and multi-cloud environments without friction. Transparency and visibility into roadmap, enterprise-readiness proofs, and customer success will become significant differentiators in a category that has been defined by mergers and acquisitions.


Daon outlines five digital identity shifts for 2026

Daon said non-human identities, including agentic AI systems, are expanding quickly across enterprise networks. It cited independent 2025 studies reporting roughly 44% year-on-year growth in non-human identities and a rise in machine-to-human ratios from around 80:1 to 144:1 in some environments. The prediction for 2026 is that enterprises will treat autonomous and agentic systems as full participants in the identity lifecycle. These systems would be registered, authenticated, authorised and monitored under formal policies, with containment processes defined in case of compromise or misbehaviour. ... Daon said progress in techniques such as zero-knowledge proofs, federated learning and sensor attestation now enables biometric checks on personal devices while reducing movement of raw biometric data. On-device processing can bind verification to a specific capture environment and lower the risk of replay or injection. Local storage of biometric templates supports data-minimisation approaches. The company expects these on-device checks to align with proof-of-possession flows and hardware-backed sensor attestations. It said federated learning and zero-knowledge techniques allow systems to validate claims without sharing underlying biometric templates with servers. ... Daon expects continued pressure on pre-hire verification because of deepfake applicants and impersonation. It said the more significant change in 2026 will come after hiring as employers adopt continuous workforce assurance.


Quantum computing made measurable progress toward real-world use in 2025

Fully functional quantum computers remain out of reach, but optimism across the field is rising. At the Q2B Silicon Valley conference in December, researchers and executives pointed to a year marked by tangible progress – particularly in hardware performance and scaling – and a growing belief that quantum advantage for real-world problems may be achievable sooner than expected. "More people are getting access to quantum computers than ever before, and I have a suspicion that they'll do things with them that we could never even think of," said Jamie Garcia at IBM. ... Aaronson, long known for his critical analysis of claims in quantum computing, described the progress in qubit fidelity and control systems as "spectacular." However, he cautioned that new algorithms remain essential for converting that hardware performance into practical value. While technical strides have been impressive, translating those advances into applications remains difficult. Ryan Babbush of Google Quantum AI said hardware continues to outpace software in usefulness. ... Dutch startup QuantWare introduced an architecture aimed at solving one of the industry's most significant hardware limitations: scaling up without losing reliability. The company's superconducting quantum processor design targets 10,000 qubits, roughly 100 times more than today's leading devices. QuantWare's Matt Rijlaarsdam said the first systems of this size could be operational within 2.5 years.


Ship Reliable AI: 7 Painfully Practical DevOps Moves

In AI land, “what changed” is anything that teaches or nudges the model: training data slices, prompt templates, system instructions, retrieval schemas, embeddings pipelines, tokenizer versions, and the model binary itself. We treat each as code. Prompts live next to code with unit tests. We commit small evaluation sets in-repo for quick signals, and keep larger benchmarks in object storage with content hashes and a manifest. ... Shiny demos hide flaky edges. We force those edges to show up in CI, where they’re cheap. Our pipeline runs fast unit tests, a tiny evaluation suite, and a couple of safety checks against handcrafted adversarial prompts. The goal isn’t to solve safety in CI; it’s to block footguns. We test the glue code around the model, we lint prompts for hard-to-diff formatting changes, and we run a 50-example eval that catches obvious regressions in latency, grounding, and accuracy. ... For AI pods, that starts with resource quotas and limits. GPU nodes are expensive; “just one more experiment” can melt the budget by lunch. We set namespace-level quotas for GPU and memory, and we stop requests that try to sneak past. For egress, we deny everything and allow only the API endpoints our apps need. When someone tries to point a staging pod at a random external endpoint “just to test it,” the policy does the talking.
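
The "tiny evaluation suite" idea could look something like the sketch below: a handful of in-repo examples run in CI and fail the build on obvious accuracy or latency regressions. The thresholds, example-file layout, and model hook are assumptions to be adapted to the actual system under test.

```python
# Sketch of a tiny CI evaluation suite: run in-repo examples against the model
# under test and fail on obvious accuracy or latency regressions. Thresholds,
# file layout, and the model_call hook are assumptions.
import json
import time
from pathlib import Path

MAX_P95_LATENCY_S = 2.0
MIN_ACCURACY = 0.8

def model_call(prompt: str) -> str:
    # Replace with the real model or API call under test.
    raise NotImplementedError("wire up the model under test here")

def run_eval(examples_path: str = "evals/smoke.jsonl") -> None:
    lines = Path(examples_path).read_text().splitlines()
    examples = [json.loads(line) for line in lines if line.strip()]
    correct, latencies = 0, []
    for ex in examples:                       # e.g. {"prompt": ..., "expected": ...}
        start = time.monotonic()
        answer = model_call(ex["prompt"])
        latencies.append(time.monotonic() - start)
        correct += int(ex["expected"].lower() in answer.lower())
    accuracy = correct / len(examples)
    p95 = sorted(latencies)[int(0.95 * (len(latencies) - 1))]
    assert accuracy >= MIN_ACCURACY, f"accuracy regressed: {accuracy:.2f}"
    assert p95 <= MAX_P95_LATENCY_S, f"latency regressed: p95={p95:.2f}s"

def test_smoke_eval():   # picked up by pytest in the CI pipeline
    run_eval()
```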


What support is available for implementing Agentic AI systems

The adoption of Agentic AI systems is reshaping the way organizations implement security measures, particularly for non-human identities (NHIs). Agentic AI – capable of self-directed learning and decision-making – proves advantageous in deploying security protocols that adapt in real time to evolving threats. By utilizing such technology, organizations can leverage data-driven insights to enhance their NHI management strategies. ... Given the critical role of NHIs in maintaining robust cloud security, organizations need to adopt advanced methodologies that integrate seamlessly with their existing security frameworks. ... Effective NHI management relies heavily on leveraging insights that stem from analyzing large data sets. Organizations that prioritize the use of data analytics in their cybersecurity strategies can efficiently discover, classify, and monitor machine identities and their associated secrets. Advanced analytical tools can help security teams identify patterns and anomalies in system activities, providing early indicators of potential security threats. These insights make it possible to implement more effective security protocols and prevent unauthorized access before it happens. ... The security of an organization is not solely the responsibility of the IT department; it is a shared responsibility across all stakeholders. Building a culture of security awareness is crucial in ensuring that every member of an organization understands the role that NHIs play in cybersecurity.


Godspeed curtain twitchers: DPDP and its peers just got ruthless

“Organisations will have to work on privacy very seriously – in everyday business operations and in every area,” Bhambry cautions. “They will have to make sure it pervades product development, processes (from the outset), internal audit, regular training and the very culture of that company and its employees. Enterprises will have to focus on individual rights, consent protocols and data governance.” There is no doubt that data privacy is going to get stronger, transparent, and comprehensive, affirms Advocate Dr. Bhavna Sharma, Delhi High Court, cybercrime expert and legal consultant, Delhi Police, and a techno-legal policy professional. But it is also going to get complex in 2026 as it shifts from abstract legal principles to a tangible operational mandate with the notification of the DPDPA Rules, 2025, adds Dr. Sharma. ... “India’s DPDPA and MeitY’s localisation mandates echo a growing consensus that data sovereignty equals digital sovereignty. Governments are recognising that control over citizen data is foundational to national security and economic resilience,” Cheema explains. In an era marked by competition among nations with their own data systems, state leaders are taking control, Yadav observes. “They are not willing to allow strategic assets to slip through their fingers. And as a result, the government calls for ‘localisation’ to trap extra-territorial storage simply because it has yet to be regulated by authorities in those countries.”


Tech innovations fuelling Indian GCCs as BFSI powerhouses

Responsible AI governance, model explainability, and auditability remain difficult across regulated domains worldwide. Institutions everywhere also face constraints around scalable compute, high-quality data flows, and real-time analytics. As AI systems process more sensitive financial data, cybersecurity risks are rising across the industry, prompting greater investment in zero-trust architectures, model-security testing, and stronger third-party controls. ... GCCs in India have been instrumental in orchestrating cloud migrations for complex banking systems, allowing banks and insurers to transition from monolithic legacy systems toward microservices and API-led platforms. This modular architecture has enabled financial institutions to launch products rapidly and build disaster resilience. Additionally, regulatory complexity and rising compliance costs have created a fertile ground for RegTech innovation. Indian GCCs are helping global enterprises build AI-powered KYC and Anti-Money Laundering (AML) solutions, compliance dashboards, and automated regulatory reporting pipelines that reduce manual work and false positives and make audits more efficient. ... Security, observability, and governance have also become board-level priorities. According to industry insights, as GCCs ingest more sensitive financial data and run mission-critical AI models, investments in cyber-resilience, third-party access monitoring, and federated data controls have surged.

Daily Tech Digest - December 22, 2025


Quote for the day:

"Life isn’t about getting and having, it’s about giving and being." -- Kevin Kruse



Browser agents don’t always respect your privacy choices

A key issue is the location of the language model. Seven out of eight agents use off-device models. This means detailed information about the user’s browser state and each visited webpage is sent to servers controlled by the service provider. When the model runs on remote servers, users lose control over how search queries and sensitive webpage content are processed and stored. While some providers describe limits on data use, users must rely on service provider policies. Browser version age is another factor. Browsers release frequent updates to patch security flaws. One agent was found running a browser that was 16 major versions out of date at the time of testing. ... Agents also showed weaknesses in TLS certificate handling. Two agents did not show warnings for revoked certificates. One agent also failed to warn users about expired and self-signed certificates. Trusting connections with invalid certificates leaves agents open to machine-in-the-middle attacks that allow attackers to read or alter submitted information. ... Agent decision logic sometimes favored task completion over protecting user information, leading to personal data disclosure. This resulted in six vulnerabilities. Researchers supplied agents with a fictitious identity and observed whether that information was shared with websites under different conditions. Three agents disclosed personal information during passive tests, where the requested data was not required to complete the task.
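To illustrate the certificate-handling failure, here is a minimal sketch of an agent fetch layer that fails closed on TLS errors rather than continuing, assuming a Python agent built on the requests library; the function and exception names are hypothetical, and revocation checking (OCSP/CRL) would still need handling on top of this.

```python
# fetch_strict.py - sketch of an agent fetch layer that fails closed on TLS errors.
# Assumes the agent uses the 'requests' library; names are illustrative.
import requests

class UnsafeConnectionError(Exception):
    """Raised instead of silently retrying with verification disabled."""

def agent_fetch(url: str, timeout: float = 10.0) -> str:
    try:
        # verify=True (the default) checks the certificate chain, hostname,
        # and expiry; expired or self-signed certificates raise SSLError.
        resp = requests.get(url, timeout=timeout, verify=True)
        resp.raise_for_status()
        return resp.text
    except requests.exceptions.SSLError as exc:
        # Surface the failure to the user rather than proceeding; revocation
        # checks (OCSP/CRL) generally need extra handling beyond this default.
        raise UnsafeConnectionError(f"TLS validation failed for {url}: {exc}") from exc
```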


What CISOs should know about the SolarWinds lawsuit dismissal

For many CISOs, the dismissal landed not as an abstract legal development, but as something deeply personal. ... Even though the SolarWinds case sparked a deeper recognition that cybersecurity should be a shared responsibility across enterprises, shifting policy priorities and future administrations could once again put CISOs in the SEC’s crosshairs, they warn. ... The judge’s reasoning reassured many security leaders, but it also exposed a more profound discomfort about how accountability is assigned inside modern organizations. “The area that a lot of us were really uncomfortable about was the idea that an operational head of security could be personally responsible for what the company says about its cybersecurity investments,” Sullivan says. He adds, “Tim didn’t have the CISO title before the incident. And so there was just a lot there that made security people very concerned. Why is this operational person on the hook for representations?” But even if he had had the CISO role before the incident, the argument still holds, according to Sullivan. “Historically, the person who had that title wasn’t a quote-unquote ‘chief’ in the sense that they’re not in the little room of people who run the company,” Sullivan says. ... If the SolarWinds case clarified anything, it’s that relief is temporary and preparation is essential. CISOs have a window of opportunity to shore up their organizational and personal defenses in the event the political pendulum swings and makes CISOs litigation targets again.


Global uncertainty is reshaping cloud strategies in Europe

Europe has been debating digital sovereignty for years, but the issue has gained new urgency amid rising geopolitical tensions. “The political environment is changing very fast,” said Ollrom. A combination of trade disputes, sanctions that affect access to technology, and the possibility of tariffs on digital services has prompted many European organizations to reconsider their reliance on US hyperscaler clouds. ... What was once largely a public-sector concern now attracts growing interest across a wide range of private organizations as well. Accenture is currently working with around 50 large European organizations on digital-sovereignty-related projects, said Capo. This includes banks, telcos, and logistics companies alongside clients in government and defense. ... Another worry is the possibility that cloud services will be swept up in future trade disputes. If the EU imposes retaliatory tariffs on digital services, the cost of using hyperscaler cloud platforms could spike overnight, and organizations heavily dependent on them may find it hard to switch to a cheaper option. There’s also the prospect that organizations could lose access to cloud services if sanctions or export restrictions are imposed, leaving them temporarily or permanently locked out of systems they rely on. It’s a remote risk, said Dario Maisto, a senior analyst at Forrester, but a material one. “We are talking of a worst-case scenario where IT gets leveraged as a weapon,” he said.


What the AWS outage taught CIOs about preparedness

For many organizations, the event felt like a cyber incident even though it wasn’t. It raised a difficult question for CIOs: how do you prepare for a disruption that lives outside your infrastructure, yet carries the same operational and reputational consequences as a security breach? ... Beyond strong cloud architecture, “Preparedness is the real differentiator,” he says. “Even the best technology teams can’t compensate for gaps in scenario planning, coordination, and governance.” ... Within Deluxe, disaster recovery tests historically focused on applications the company controlled, while cyber tabletops focused on simulated intrusions. The AWS outage exposed the gap between those exercises and real-world conditions. Shifting its applications from AWS East to AWS West was swift, and the technology team considered the recovery a success. Yet it was far from business as usual, as developers still couldn’t access critical tools like GitHub or Jira. “We thought we’d recovered, but the day-to-day work couldn’t continue because the tools we depend on were down,” he says. ... In a well-architected hybrid cloud setup, he says resilience is more often a coordination problem than a spending problem, and distributing workloads across two cloud providers doesn’t guarantee better outcomes if the clouds rely on the same power grid, or experience the same regional failure event. ... Jayaprakasam is candid about the cultural challenge that comes with resilience work.


Winning the density war: The shift from RPPs to scalable busway infrastructure in next-gen facilities

“Four or five years ago, we were seeing sub-ten-kilowatt racks, and today we're being asked for between 100 and 150 kilowatts, which makes a whole magnitude of difference,” says Osian. “And this trend is going to continue to rise, meaning we have to mobilize for tomorrow’s power challenges, today.” Rising power demands also require higher available fault currents to safely handle larger, more dynamic surges in the circuit. Supporting equipment must be more resilient and reliable to maintain safe and efficient distribution. With change happening so quickly, adopting a long-term strategy is essential. This requires building critical infrastructure with adaptability and flexibility at its core. ... A modular approach offers another tactical advantage: speed. With a traditional RPP setup, getting power physically hooked up from A to B on a per-rack basis is time- and resource-consuming, especially at first installation. By reducing complexity with a plug-and-play modular design slotted in directly over the racks, the busway delivers the swift reinforcements modern facilities need to stay ahead. ... “One of the advancements we've made in the last year is creating a way for users to add a circuit from outside the arc flash boundary. While the Starline busway is already rated for live insertion – meaning it’s safe out of the box – we’ve taken safety to the next level with a device called the Remote Plugin Actuator. It allows a user to add a circuit to the busway without engaging any of the electrical contacts directly.”


Building a data-driven, secure and future-ready manufacturing enterprise: Technology as a strategic backbone

A central pillar of Prince Pipes and Fittings’ digital strategy is data democratisation. The organisation has moved decisively away from static reports towards dynamic, self-service analytics. A centralised data platform for sales and supply chain allows business users to create their own dashboards without dependence on IT teams. Desai further states, “Sales teams, for instance, can access granular data on their smartphones while interacting with customers, instantly showcasing performance metrics and trends. This empowerment has not only improved responsiveness but has also enhanced user confidence and satisfaction. Across functions, data is now guiding actions rather than merely describing outcomes.” ... Technology transformation at Prince Pipes and Fittings has been accompanied by a conscious effort to drive cultural change. Leadership recognised early that democratising data would require a mindset shift across the organisation. Initial resistance was addressed through structured training programs conducted zone-wise and state-wise, helping users build familiarity and confidence with new platforms. ... Cyber security is treated as a business-critical priority at Prince Pipes and Fittings. The organisation has implemented a phase-wise, multi-layered cyber security framework spanning both IT and OT environments. A simple yet effective risk-classification approach (green, yellow, and red) was used to identify gaps and prioritise actions. ... Equally important has been the focus on human awareness.


The Next Fraud Problem Isn’t in Finance. It’s in Hiring: The New Attack Surface

The uncomfortable truth is that the interview has become a transaction. And the “asset” being transferred is not a paycheck. It’s access: to systems, data, colleagues, customers, and internal credibility. ... Payment fraud works because the system is trying to be fast. The same is true in hiring. Speed is rewarded. Friction is avoided. And that creates a predictable failure mode: an attacker’s job is to make the process feel normal long enough to get to “approved.” In payments, fraudsters use stolen cards and compromised accounts. In hiring, they can use stolen faces, voices, credentials, and employment histories. The mechanics differ, but the objective is identical: get the system to say yes. That’s why the right question for leaders is not, “Can we spot a deepfake?” It’s, “What controls do we have before we grant access?” ... Many companies verify identity late, during onboarding, after decisions are emotionally and operationally “locked.” That’s the equivalent of shipping a product and hoping the card wasn’t stolen. Instead, introduce light identity proofing before final rounds or before any access-related steps. ... In payments, the critical moment is authorization. In hiring, it’s when you provision accounts, ship hardware, grant repository permissions, or provide access to customer or financial systems. That moment deserves a deliberate gate: confirm identity through a known-good channel, verify references without relying on contact info provided by the candidate, and run a final live verification step before credentials are issued. 
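As a rough illustration of such a gate, a provisioning workflow could refuse to create accounts until each verification step has been recorded. The step names and the provision_accounts() call below are hypothetical placeholders, not a reference to any particular HR or IAM system.

```python
# provisioning_gate.py - sketch of a deliberate gate before access is granted.
# The verification steps and provision_accounts() are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class HireVerification:
    identity_confirmed_via_known_channel: bool = False  # e.g. verified through a known-good channel
    references_independently_verified: bool = False     # contact info not supplied by the candidate
    live_verification_passed: bool = False               # final live check before credentials

REQUIRED_CHECKS = (
    "identity_confirmed_via_known_channel",
    "references_independently_verified",
    "live_verification_passed",
)

def provision_accounts(candidate_id: str) -> None:
    print(f"provisioning accounts for {candidate_id}")  # placeholder for real IAM calls

def gate_provisioning(candidate_id: str, checks: HireVerification) -> None:
    missing = [name for name in REQUIRED_CHECKS if not getattr(checks, name)]
    if missing:
        # Fail closed: no credentials, hardware, or repository access yet.
        raise PermissionError(f"provisioning blocked for {candidate_id}; missing: {missing}")
    provision_accounts(candidate_id)

if __name__ == "__main__":
    try:
        gate_provisioning("cand-001", HireVerification(identity_confirmed_via_known_channel=True))
    except PermissionError as err:
        print(err)
```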


Agent autonomy without guardrails is an SRE nightmare

Four in 10 tech leaders regret not establishing a stronger governance foundation from the start, which suggests they adopted AI rapidly but left room to improve the policies, rules and best practices designed to ensure the responsible, ethical and legal development and use of AI. ... When considering tasks for AI agents, organizations should understand that, while traditional automation is good at handling repetitive, rule-based processes with structured data inputs, AI agents can handle much more complex tasks and adapt to new information in a more autonomous way. This makes them an appealing solution for all sorts of tasks. But as AI agents are deployed, organizations should control what actions the agents can take, particularly in the early stages of a project. Thus, teams working with AI agents should have approval paths in place for high-impact actions to ensure agent scope does not extend beyond expected use cases, minimizing risk to the wider system. ... Further, AI agents should not be allowed free rein across an organization’s systems. At a minimum, the permissions and security scope of an AI agent must be aligned with the scope of the owner, and any tools added to the agent should not allow for extended permissions. Limiting AI agent access to a system based on its role will also ensure deployment runs smoothly. Keeping complete logs of every action taken by an AI agent can also help engineers understand what happened in the event of an incident and trace back the problem.
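A minimal sketch of those three controls (role-scoped permissions, an approval path for high-impact actions, and a complete action log) might look like the following; the agent roles, action names, and the require_approval() hook are illustrative assumptions rather than any specific product's API.

```python
# agent_guardrails.py - sketch: role-scoped permissions, an approval path for
# high-impact actions, and a full log of every action an agent takes.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Illustrative roles and action names; align these with the agent owner's scope.
ROLE_PERMISSIONS = {
    "reporting-agent": {"read_metrics", "create_ticket"},
    "remediation-agent": {"read_metrics", "restart_service", "scale_deployment"},
}
HIGH_IMPACT = {"restart_service", "scale_deployment"}

def require_approval(agent: str, action: str, params: dict) -> bool:
    # Deny by default until wired to a real approval workflow (ticket, chat prompt, etc.).
    return False

def execute(agent: str, action: str, params: dict, handler) -> object:
    allowed = ROLE_PERMISSIONS.get(agent, set())
    entry = {"ts": datetime.now(timezone.utc).isoformat(), "agent": agent,
             "action": action, "params": params}
    if action not in allowed:
        entry["result"] = "denied: outside agent scope"
        log.info(json.dumps(entry))
        raise PermissionError(entry["result"])
    if action in HIGH_IMPACT and not require_approval(agent, action, params):
        entry["result"] = "denied: approval not granted"
        log.info(json.dumps(entry))
        raise PermissionError(entry["result"])
    result = handler(**params)        # the actual tool call
    entry["result"] = "ok"
    log.info(json.dumps(entry))       # complete record for incident tracing
    return result
```

The log entries give engineers the trail described above: if an incident occurs, every denied and executed action can be traced back to a specific agent, action, and timestamp.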


Where Architects Sit in the Era of AI

In the emerging AI-augmented ecosystem, we can think of three modes of architect involvement: Architect in the loop, Architect on the loop, and Architect out of the loop. Each reflects a different level of engagement, oversight, and trust between an architect and intelligent systems. ... What does it mean to be in the loop? In the Architect in the Loop (AITL) model, the architect and the AI system work side by side. AI provides options, generates designs, or analyzes trade-offs, but humans remain the decision-makers. Every output is reviewed, contextualized, and approved by an architect who understands both the technical and organizational context. This is where the architect sits in the middle of AI interactions ... What does it mean to be on the loop? As AI matures, parts of architectural decision-making can be safely delegated. In the Architect on the Loop (AOTL) model, the AI operates autonomously within predefined boundaries, while the architect supervises, reviews, and intervenes when necessary. This is where the architect is firmly embedded into the development workflow using AI to augment and enhance their own natural abilities. ... What does it mean to be out of the loop? In the Architect out of the Loop (AOOTL) model, we see a world where the architect is no longer required in the traditional fashion. The architectural work of domain understanding, context providing, and design thinking is simply all done by AI, with the outputs of AI being used by managers, developers, and others to build the right systems at the right time.


Cloud Migration of Microservices: Strategy, Risks, and Best Practices

The migration of microservices to the cloud is a crucial step in the digital transformation process, requiring a strategic approach. Success depends on carefully selecting the appropriate strategy based on the current architecture's maturity, technical debt, business objectives, and cloud infrastructure capabilities. ... The simplest strategy for migrating to the cloud is Rehost, which involves moving applications as-is to virtual machines in the cloud. According to research, around 40% of organizations begin their migration with Rehost, as it allows for a quick transition to the cloud with minimal costs. However, this approach often does not provide significant performance or cost benefits, as it does not fully utilize cloud capabilities. Replatform is the next level of complexity, where applications are partially adapted: for example, databases may be migrated to cloud services like Amazon RDS or Azure SQL, file storage may be replaced, and containerization may be introduced. Replatform is used in around 22% of cases, where there is a need to strike a balance between speed and the depth of changes. A more time-consuming but strategically beneficial approach is Refactoring (or Rearchitecting), in which the application undergoes a significant redesign: microservices are introduced, and Kubernetes, Kafka, cloud functions (such as Lambda and Azure Functions), and a service bus are adopted.