Daily Tech Digest - September 30, 2025


Quote for the day:

"There is only one success – to be able to spend your life in your own way." -- Christopher Morley


Smoothing out AI’s rough edges

“When data agents fail, they often fail silently—giving confident-sounding answers that are wrong, and it can be hard to figure out what caused the failure.” He emphasizes systematic evaluation and observability for each step an agent takes, not just end-to-end accuracy. We may like the term “vibe coding,” but smart developers are forcing the rigor of unit tests, traces, and health checks for agent plans, tools, and memory. ... The teams that win treat knowledge as a product. They build structured corpora, sometimes using agents to lift entities and relations into a lightweight graph. They grade their RAG systems like a search engine: on freshness, coverage, and hit rate against a golden set of questions. Chunking isn’t just a library default; it’s an interface that needs to be designed with named hierarchies, titles, and stable IDs. ... It’s not without complications, though, and there’s a risk of too much delegation. As Valdarrama quips, “letting AI write all of my code is like paying a sommelier to drink all of my wine.” In other words, use the machine to accelerate code you’d be willing to own; don’t outsource judgment. In practice, this means developers must tighten the loop between AI-suggested diffs and their CI and enforce tests on any AI-generated changes, blocking merges on red builds ... We’re not just talking about traditional vulnerabilities.
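
To make the “golden set” idea concrete, here is a minimal Python sketch of grading a retriever on hit rate. The `retrieve()` function is a hypothetical stand-in for a real RAG pipeline, and the questions and document names are invented; freshness and coverage would need their own checks.

```python
# A minimal sketch of grading RAG retrieval against a golden set of
# questions. retrieve() is a hypothetical stand-in for the real pipeline;
# the metric here is simple hit rate: did any retrieved chunk come from
# the document known to contain the answer?

golden_set = [
    {"question": "What is our refund window?", "answer_doc": "policy.md"},
    {"question": "Which regions do we ship to?", "answer_doc": "shipping.md"},
]

def retrieve(question: str) -> list[str]:
    """Hypothetical retriever returning source-document IDs for top-k chunks."""
    return ["policy.md", "faq.md"]  # stubbed for illustration

hits = sum(
    1 for item in golden_set if item["answer_doc"] in retrieve(item["question"])
)
print(f"hit rate: {hits / len(golden_set):.0%}")  # 50% on this toy set
```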


The EU AI Act: From the experts themselves

For businesses deploying AI systems, the cost of non-compliance is steep: penalties of up to €35 million or 7% of global turnover are on the table. But some experts believe the real challenge lies in how this framework interacts with competing global approaches. As Darren Thomson, field CTO EMEAI at Commvault, points out, “The EU AI Act is a comprehensive, legally binding framework that clearly prioritises regulation of AI, transparency, and prevention of harm.” ... “The EU AI Act has a clear common purpose to reduce the risk to end users. By prohibiting a range of high-risk applications of AI techniques, the risk of unethical surveillance and other means of misuse is certainly mitigated.” The requirement for impact assessments on high-risk systems isn’t a tick-box exercise: under the Act, organisations deploying high-impact AI systems must carry out rigorous risk assessments before those systems can reach end users. ... Businesses building or deploying AI systems in the EU can’t afford to ignore the AI Act. Understanding risk level and assessing whether your use of AI falls into a high-risk category is a crucial first step to compliance. Companies must also prepare for scrutiny; this is best done by documenting AI systems, auditing them regularly, and staying prepared to conduct impact assessments.


Stop drifting through AI transformation: The design principles that actually work

If drift is our danger, then design must be our answer. But design cannot begin with blueprints alone. Tempting as it is, we cannot jump to solutions. ... The principles need to address fundamental issues to ensure that intelligent systems are designed and implemented to protect human value and values. In doing so, we must address several questions. How do we preserve human worth? How do we maintain diverse perspectives? How do we ensure accountability? How do we keep humans in control? ... The first and possibly most critical of these is that human dignity must be celebrated and not sacrificed for efficiency. There is a strong temptation to use AI to view people as overhead, processes as bottlenecks, and care as inefficiency. ... The second compass is pluralism over uniformity. Intelligent systems already threaten to divide us into private realities, each fed by personalized algorithms, while at the same time nudging us toward uniformity by narrowing what counts as knowledge. Either path is perilous. ... Thirdly, we must insist on transparency in AI as a condition of trust. Hidden systems corrode confidence. Even now, algorithms make choices that affect credit, hiring, parole and healthcare with little visibility into how those judgments are reached. As machine cognition seeps into every facet of life, opacity will only deepen the gulf between those who wield the systems and those who live with their consequences.


With AI Agents, 'Memory' Raises Policy and Privacy Questions

If deciding what an agent remembers is one problem, deciding how much authority you have over those memories is another. Here, three issues stand out: portability, accuracy, and retention. Each raises a different set of policy challenges, all tied to a deeper question: do you own your digital memories, or merely rent them? Imagine trying to leave your job but discovering your executive assistant is contractually barred from joining you. That’s the risk if memories cannot move with you from one AI platform to another. A simple copy-and-paste transfer would seem like common sense, but companies may resist: they can argue that the insights their systems have drawn from your data are proprietary, or that moving memories introduces security concerns. The reality is that restricting portability creates enormous “switching costs.” If moving to a rival agent means starting from scratch, users will be effectively locked in — a dynamic antitrust lawyers would recognize as a modern twist on classic market power. The fight over portability is therefore not only about convenience, but also about competition. A second issue is whether you can edit what your agent knows about you. Some changes may feel trivial: swapping your listed hometown, adjusting a phone number, maybe even knocking a few years off your official age. But once agents become conduits to doctors, insurers, banks, or government services, accuracy takes on legal weight. 


The Under-Appreciated Sustainability Benefits of Colocation Data Centers

On balance, colocation data centers don’t always come out ahead on the sustainability front. Not all colocation facilities offer more efficient cooling and water management solutions. Some are just as inefficient in these areas as the typical private data center. Nor is there a guarantee that all colocation facilities will provide access to renewable energy, certainly not on a continuous basis. For this reason, businesses that can afford to invest in sustainability-friendly solutions inside their own data centers may find this a better approach than using colocation to improve sustainability. It’s important as well to consider the total cost. ... It’s worth noting as well that public cloud data centers are also usually more sustainable than private data centers. This is due mainly to the massive economies of scale of these facilities (they’re called “hyperscale data centers” for a reason), combined with the high utilization rates they achieve by renting IT infrastructure to tens of thousands of different customers using an IaaS model. That said, for companies focused on sustainability, there are reasons to prefer colocation facilities over the public cloud. Beyond the perhaps obvious fact that colocation offers much more control (because it allows businesses to deploy and manage their own servers, rather than renting servers from a public cloud), it’s also easier to track efficiency metrics inside a colocation facility.


Cyber risk quantification helps CISOs secure executive support

We often see the same mistake with customers: investing in security tools just for compliance, but not configuring or using them properly. This can give a false sense of security. Testing means making sure tools are not only in place but also set up and maintained to protect your business. Our best advice is to focus on investing in experienced professionals rather than relying solely on tools. While technology, including AI, continues to evolve, it cannot yet replace the expertise and judgment of seasoned cybersecurity engineers. Skilled people remain the foundation of a strong and resilient cybersecurity strategy. ... By translating cyber risks into financial terms, CISOs can help the board understand the impact. Instead of using broad categories like low, medium, or high, it’s more persuasive to show the potential financial exposures. When risks are presented in financial terms, CISOs can demonstrate how specific projects or investments make a difference. For instance, showing that a $100,000 investment in cybersecurity could lower ransomware risk exposure from $5 million to $1 million creates a compelling return on investment. This approach makes budget approval much more likely. With Cyber Risk Quantification, we can also benchmark a company against its market peers, which is another argument to bring to the board and other executives.
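
To make the budget argument concrete, here is a minimal sketch of the ROI arithmetic using the article's illustrative figures. In a real CRQ model the exposure numbers would be derived from loss-event frequency and magnitude (e.g. annualized loss expectancy), not hard-coded constants.

```python
# A minimal sketch of cyber risk quantification ROI, using the article's
# illustrative figures. Exposure values would normally come from a CRQ
# model, not constants.

investment = 100_000            # proposed cybersecurity spend ($)
exposure_before = 5_000_000     # modeled ransomware risk exposure today ($)
exposure_after = 1_000_000      # modeled exposure with the control in place ($)

risk_reduction = exposure_before - exposure_after
roi = (risk_reduction - investment) / investment

print(f"risk reduced by ${risk_reduction:,}")   # $4,000,000
print(f"ROI: {roi:.0%}")                        # 3900%
```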


Making your code base better will make your code coverage worse

You do zero financial assessment of what happens if one particular feature fails. You are treating every file as if it has equal value and each one must meet 80 percent code coverage. The file that encrypts user data? 80% code coverage required. The file that allows a user to upload their profile image? 80% code coverage required. ... There’s a lot of sub-optimal code out there. Your automated tests can only be as good as the code they are used to validate. Your whole test strategy is only worth as much as the features it validates. Adding a code coverage tool to a sub-optimal code base with the hope that it will magically improve the quality of your application will not work. It’s also likely to make it a lot harder for your development team to make it better. ... As a code base evolves, opportunities present themselves to make improvements. One of the most common practices is to consolidate repeated code. Your code base might have one or more blocks of code that get copied and pasted elsewhere. Having identical code in multiple places is generally regarded as bad practice, so it makes sense to move that repeatedly used block into a single location. That shared code might still be in the same file or moved into a separate one. This is the principle of Don’t Repeat Yourself (DRY), as opposed to Write Every Time (WET) code. Making your code DRYer is generally accepted as a good thing. Yet this comes at the cost of declining code coverage. Here are some hypothetical numbers.
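
The article's own numbers are elided above, so the following is a hedged reconstruction with invented figures. It shows how consolidating well-tested duplicated code can lower the coverage percentage even though the code base genuinely improves: the covered duplicates disappear while the uncovered lines stay.

```python
# A hypothetical illustration of why DRY refactoring can lower a coverage
# percentage even as the code base improves. All numbers are invented.

def coverage(covered: int, total: int) -> float:
    """Return line coverage as a percentage."""
    return 100.0 * covered / total

# Before: a 10-line validation block is copy-pasted into 3 well-tested files.
# Covered: 70 other covered lines + 3 copies x 10 lines = 100 covered lines.
# Uncovered: 20 lines of legacy code with no tests.
before_covered = 70 + 30
before_total = before_covered + 20
print(f"before DRY: {coverage(before_covered, before_total):.1f}%")  # 83.3%

# After: the 3 copies collapse into one shared 10-line helper.
# Covered lines shrink by 20, but the 20 uncovered legacy lines remain.
after_covered = 70 + 10
after_total = after_covered + 20
print(f"after DRY:  {coverage(after_covered, after_total):.1f}%")    # 80.0%
```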


Build Smart, Test Often: A Developer's Guide to AI Readiness

AI innovation is uncertain. New models, protocols and approaches to AI continue to emerge, making it difficult for organizations to adapt and keep pace. While jumping in blindly risks failure, waiting too long risks losing a competitive edge. ... Another hazard: Governance gaps create hidden risks. Without clear access controls and monitoring, AI systems can expose sensitive data or violate compliance requirements. That's why 79% of CIOs say they need strict governance in place to succeed with AI. However, as AI architectures grow more complex, they become harder to govern, limiting control and increasing enterprise risk. ... Many organizations silo experimentation and data control as if they are unrelated, but in reality, they reinforce each other. When data is fragmented, experiments can produce misleading results. Without a safe environment for experimentation, teams hesitate to deploy AI projects, even when they have the technical ability. ... The path to AI success is all about building the right foundation that enables both confident experimentation and rapid scaling. This starts with simplifying data architectures. Indeed, all surveyed enterprises are consolidating their AI tech stacks because fragmented systems create barriers to effective AI deployment. Teams with these data practices move faster because they can iterate without fear of catastrophic failures.


AI Quantisation: Reducing the cost of AI computation across the board

According to McKinsey’s research on AI empowerment, organisations can achieve their complete AI value only when functional teams access powerful yet practical tools and models. Quantisation functions as a key method to connect the available solutions to practical applications. ... Quantisation has established itself as a powerful solution to tackle this problem. The process of quantisation simplifies AI model calculations by decreasing their numerical precision. Most models rely on 32-bit floating point numbers for accuracy, yet they rarely need such high precision for effective performance. Quantisation achieves memory reduction and computational load reduction by converting numbers into 8-bit or 4-bit formats. ... Quantisation-Aware Training involves adding precision limitations to the model’s training process. The training method produces models that maintain stable performance while being highly optimised, which makes it suitable for healthcare and automotive industries that require strict accuracy standards. GPTQ, a post-training quantisation method developed for generative pre-trained transformers, has gained special importance when working with large language models. Through GPTQ methods, organisations can reduce their models to extremely low precision levels without compromising their advanced text understanding and generation capabilities.
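
As a rough illustration of the fp32-to-int8 conversion described above, here is a minimal symmetric per-tensor quantisation sketch in Python. Real toolchains such as GPTQ are far more sophisticated (per-group scales, error compensation), so treat this as the core idea only.

```python
import numpy as np

# A minimal sketch of symmetric post-training quantisation with a single
# per-tensor scale. This only illustrates the fp32 -> int8 precision
# reduction; production methods are considerably more careful.

def quantize_int8(weights: np.ndarray):
    """Map fp32 weights to int8 plus a scale factor."""
    scale = np.abs(weights).max() / 127.0   # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original fp32 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max reconstruction error:", np.abs(w - w_hat).max())
print("memory: fp32", w.nbytes, "bytes -> int8", q.nbytes, "bytes")  # 4x smaller
```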


How AI-driven automation is the key to unlocking your operational resilience

From IT outages to global crises, modern organizations increasingly require rapid, reliable response and recovery capabilities to keep their digital operations running and their people safe. To a great degree, ever-higher expectations for operational resilience must be met by significant advances in automation -- which leading solution providers are making possible with AI and machine learning. ... The real key? Automation. Sean stressed that true resilience comes from automating IT incident responses alongside business continuity plans. Without integration and automation, manual processes are the enemy of efficiency and cost-effectiveness. This raised the natural question: If automation is the end (the "what"), what are the means (the "how")? ... Sean explained how advanced technologies like AI and machine learning are essential engines driving this automation. xMatters offers native process automation in a single platform, handling everything from issue detection to resolution. ... With AI agents on the horizon, Sean painted a future where incident commanders oversee rather than micromanage, as bots handle troubleshooting, communication, and mitigation. This is the next evolution in making resilience a superpower for complex IT environments, complete with no-code flow designers and AIOps for proactive threat hunting.

Daily Tech Digest - September 29, 2025


Quote for the day:

"Remember that stress doesn't come from what is going on in your life. It comes from your thoughts on what is going on in your life." -- Andrew Bernstein



Agentic AI in IT security: Where expectations meet reality

The first decision regarding AI agents is whether to layer them onto existing platforms or to implement standalone frameworks. The add-on model treats agents as extensions to security information and event management (SIEM), security orchestration, automation and response (SOAR), or other security tools, providing quick wins with minimal disruption. Standalone frameworks, by contrast, act as independent orchestration layers, offering more flexibility but also requiring heavier governance, integration, and change management. ... Agentic AI adoption rarely happens overnight. As Checkpoint’s Weigman puts it, “Most security teams aren’t swapping out their whole SOC for some shiny new AI system, and one can understand that: It’s expensive, and it demands time and human effort, which at the end of the day could appear to be too disruptive and costly.” Instead, leaders look for ways to incrementally layer new capabilities without jeopardizing ongoing operations, which makes pilots a common first step. ... “An agent designed to carry out a sequence of actions in response to a threat could inadvertently create new risks if misused or deployed inappropriately,” says Goje. “For instance, there’s potential for unregulated scripts or newly discovered vulnerabilities.” ... “Pricing remains a friction point,” says Fifthelement.ai’s Garini. “Vendors are playing with usage-based models, but organizations are finding value when they tie spend to analyst hours saved rather than raw compute or API calls.”


Anthropic, surveillance and the next frontier of AI privacy

Democratic legal systems are built on due process: Law enforcement must have grounds to investigate. Surveillance is meant to be targeted, not generalized. Allowing AI to conduct mass, speculative profiling would invert that principle, treating everyone as a potential suspect and granting AI the power to decide who deserves scrutiny. By saying “no” to this use case, Anthropic has drawn a red line. It is asserting that there are domains where the risk of harm to civil liberties outweighs the potential utility. ... How much should technology companies be able to control how their products are used, particularly once they are sold into government? Better yet, do they have a responsibility to ensure their products are used as intended? There is no easy answer. Enforcement of “terms of service” in highly sensitive contexts is notoriously difficult. A government agency may purchase access to an AI model and then apply it in ways that the provider cannot see or audit. ... The real challenge ahead is to establish publicly accountable frameworks that balance security needs with fundamental rights. Surveillance powered by AI will be more powerful, more scalable and more invisible than anything that came before. It has enormous potential when it comes to national security use cases. Yet without clear limits, it threatens to normalize perpetual, automated suspicion.


How attackers poison AI tools and defenses

AI systems that act with a high degree of autonomy carry another risk: impersonating users or trusting impostors. One tactic is known as a “Confused Deputy” attack. Here, an AI agent with high privileges performs a task on behalf of a low-privileged attacker. Another involves spoofed API access, where attackers trick integrations with services like Microsoft 365 or Gmail into leaking information or sending fraudulent emails. ... One crucial step is to make filters aware of how LLMs generate content, so they can flag anomalies in tone, behavior or intent that might slip past older systems. Another is to validate what AI systems remember over time. Without that check, poisoned data can linger in memory and influence future decisions. Isolation also matters. AI assistants should run in contained environments where unverified actions are blocked before they can cause damage. Identity management needs to follow the principle of least privilege, giving AI integrations only the access they require. Finally, treat every instruction with skepticism. Even routine requests must be verified before execution if zero-trust principles are to hold. ... The next wave of threats will involve agentic AI-powered systems that reason, plan and act on their own. While these tools can deliver tremendous productivity gains to users, their autonomy makes them attractive targets. If attackers succeed in steering an agent, the system could make decisions, launch actions or move data undetected.
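
As a sketch of the least-privilege point, the following Python example gives an AI integration an explicit tool allowlist with deny-by-default semantics. Agent and tool names are hypothetical, and a real deployment would enforce this at the identity provider or gateway layer rather than in application code alone.

```python
from dataclasses import dataclass

# A minimal sketch of least-privilege tool access for an AI agent. Names
# are hypothetical. Deny-by-default scoping is what blocks a "confused
# deputy": the agent cannot be steered into actions its identity was
# never granted, no matter what a prompt asks for.

@dataclass(frozen=True)
class AgentIdentity:
    name: str
    allowed_tools: frozenset

def invoke_tool(agent: AgentIdentity, tool: str) -> str:
    if tool not in agent.allowed_tools:          # deny by default
        raise PermissionError(f"{agent.name} is not authorized for {tool!r}")
    return f"{tool} executed for {agent.name}"

mail_agent = AgentIdentity("mail-summarizer", frozenset({"read_inbox"}))
print(invoke_tool(mail_agent, "read_inbox"))     # allowed: explicitly granted
# invoke_tool(mail_agent, "send_email")          # raises PermissionError
```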


‘AI and ML the main focus in tech right now’

AI and machine learning are undoubtedly the main focuses in technology right now, with mentions everywhere. A great way to upskill in this area is by attending talks and seminars, which are frequently held and provide valuable insights into how these technologies are being applied in the industry. These events also help you stay up to date on the latest developments. If you have a strong interest in the field, taking an online course, even a free one, can be a great way to grasp the fundamentals, learn the terminology, and understand how to effectively apply these technologies in your current role. Cloud technology is another area that’s here to stay. It’s widely adopted and incredibly versatile. Cloud certifications are highly accessible, with plenty of resources available to help you prepare for the exams and follow the learning paths they offer. ... Being a people person is incredibly beneficial in this field. A significant part of the job involves communication – whether it’s sharing ideas or networking with coworkers in your area. Building these connections can greatly enhance your ability to perform and succeed in your role. Problem-solving is another key aspect of software engineering, and it’s something I’ve always enjoyed. While it can be particularly challenging at times, the sense of accomplishment and reward when your efforts pay off is unmatched.


Better Data Beats Better Models: The Case for Data Quality in ML

Data quality is a broad and abstract concept, but it becomes more measurable when we break it down into different dimensions. Accuracy is the most important and obvious one: If the input data is wrong (e.g., mislabeled transactions in fraud detection models), the model will simply learn incorrect patterns. Completeness is equally important. Without a high degree of coverage for important features, the model will lack context and produce weaker predictions. For example, a recommender system missing key user attributes will fail to provide personalized recommendations. Freshness plays a subtle but powerful role in data quality. Outdated data appears correct, but does not reflect real-world conditions. ... Detecting data quality issues is not just about a single check but rather about continuous monitoring. Statistical distribution checks are the first line of defense, helping detect anomalies or sudden shifts that can indicate broken data pipelines. ... Ignoring data quality can often turn out to be very expensive. Teams spend large amounts of compute to retrain models on flawed data, to observe little to no business impact. Launch timelines get pushed back since teams spend weeks debugging data issues, a time that could have been spent otherwise on feature development. In industries that are regulated, like finance and healthcare, poor data quality can cause compliance violations and increased legal expenses.
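
As one example of a "statistical distribution check," here is a minimal population stability index (PSI) computation in Python on synthetic data. The 0.25 threshold is a common rule of thumb rather than a standard, and production monitors run many such tests per feature.

```python
import numpy as np

# A minimal sketch of a drift check between training-time data and fresh
# production data. Thresholds and synthetic data are illustrative.

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a fresh sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Note: production values outside the baseline range are ignored here;
    # a real monitor would add overflow bins.
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.normal(0, 1, 10_000)   # training-time distribution
fresh = np.random.normal(0.5, 1, 10_000)    # shifted production data
print(f"PSI = {psi(baseline, fresh):.3f}")  # > 0.25 is often treated as drift
```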


DORA 2025: Faster, But Are We Any Better?

The newest DORA report — the “State of AI-Assisted Software Development” — lands at a time when AI is eating everything from code generation to documentation to operations. And just like those early DORA reports reframed speed versus stability, this one is reframing what AI is actually doing to our software delivery pipelines. Spoiler alert: It’s not as simple as “AI makes everything better.” ... Now here’s the counterintuitive part. For the first time, DORA shows AI adoption is linked to higher throughput. That’s right — teams using AI are moving work through the system faster than those who aren’t. But before you pop the champagne, look at the other half of the finding: Instability is still higher in AI-heavy teams. Faster, yes. Safer? Not so much. If you’ve been around the block, this won’t shock you. We saw the same thing in the early days of automation — speed without discipline just meant you hit the wall quicker. ... Another gem buried in the report is the role of value stream management. AI tends to deliver “local optimizations” — an engineer codes faster, a test suite runs quicker — but without VSM, those wins don’t always roll up into business outcomes. With VSM in place, AI-driven productivity gains translate into measurable improvements at the team and product level. That, to me, is vintage DORA. Remember when they proved that culture — psychological safety, autonomy, collaboration — wasn’t just a warm fuzzy HR concept but directly correlated with elite performance? Same here. VSM turns AI from a toy into a force multiplier.


The 5 Technology Trends For 2026 Everyone Must Prepare For Now

In recent years, we've seen industry, governments, education and everyday folk scrambling to adapt to the disruptive impact of AI. But by 2026, we're starting to get answers to some of the big questions around its effect on jobs, business and day-to-day life. Now, the focus shifts from simply reacting to reinventing and reshaping in order to find our place in this brave, different and sometimes frightening new world. ... In tech, agents were undoubtedly the hot buzzword of 2025, representing a meaningful evolution over previous AI applications like chatbots and generative AI. Rather than simply answering questions and generating content, agents take action on our behalf, and in 2026, this will become an increasingly frequent and normal occurrence in everyday life. From automating business decision-making to managing and coordinating hectic family schedules, AI agents will handle the “busy work” involved in planning and problem-solving, freeing us up to focus on the big picture or simply slowing down and enjoying life. ... Quantum computing harnesses the strange and seemingly counterintuitive behavior of particles at the sub-atomic level to accomplish many complex computing tasks millions of times faster than "classic" computers. For the last decade, there's been excitement and hype over its performance in labs and research environments, but in 2026, we are likely to see further adoption in the real world.


GreenOps and FinOps: Strategic Convergence in the Cloud Transformation Journey

FinOps, short for “Financial Operations,” is a cultural practice designed to bring financial accountability to the cloud. It blends engineering, finance, and business teams to manage cloud costs collaboratively and transparently. The goal is clear: maximize business value from the cloud by making spending decisions grounded in data and aligned with business objectives. ... GreenOps, on the other hand, is all about sustainability in cloud operations. It’s a discipline that encourages organizations to monitor, manage, and minimize the environmental footprint of their cloud usage. GreenOps revolves around using renewable energy-powered cloud resources, recycling or reusing digital assets, optimizing workloads, and selecting eco-friendly services, all with the aim of reducing carbon emissions and supporting broader sustainability goals. ... In practical terms, GreenOps activities such as deleting unused storage volumes, rightsizing virtual machines, and consolidating workloads not only shrink the carbon footprint but also slash monthly cloud bills. Thus, sustainability efforts act as “passive” cost optimizers—delivering FinOps benefits without explicit financial tracking. ... FinOps and GreenOps aren’t one-off projects but ongoing practices. Regular reviews, “cost and sustainability audits,” and optimization sprints keep teams focused. 
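
As a concrete instance of "deleting unused storage volumes," the sketch below lists unattached AWS EBS volumes with boto3; the same idea applies to any provider. It assumes AWS credentials and a region are already configured, and deletion is deliberately left commented out, since a real pipeline would check tags, snapshots, and ownership first.

```python
import boto3

# A minimal sketch of a GreenOps/FinOps "passive" optimization: finding
# unattached EBS volumes that cost money and carbon while doing no work.

ec2 = boto3.client("ec2")
resp = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]  # i.e. unattached
)

for vol in resp["Volumes"]:
    print(f"unattached volume {vol['VolumeId']}: {vol['Size']} GiB")
    # ec2.delete_volume(VolumeId=vol["VolumeId"])  # only after review/tag checks
```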


Rethinking AI’s Role in Mental Health with GPT-5

GPT-5 has surfaced critical questions in the AI mental health community: What happens when people treat a general purpose chatbot as a source of care? How should companies be held accountable for the emotional effects of design decisions? What responsibilities do we bear, as a health care ecosystem, in ensuring these tools are developed with clinical guardrails in place? ... OpenAI has since taken steps to restore user confidence by making its personality “warmer and friendlier,” and encouraging breaks during extended sessions. However, it doesn’t change the fact that ChatGPT was built for engagement, not clinical safety. The interface may feel approachable, especially appealing to those looking to process feelings around high-stigma topics – from intrusive thoughts to identity struggles – but without thoughtful design, that comfort can quickly become a trap. ... Designing for engagement alone won’t get us there, and we must design for outcomes rooted in long-term wellbeing. At the same time, we should broaden our scope to include AI systems that shape the care experience, such as reducing the administrative burden on clinicians by streamlining billing, reimbursement, and other time-intensive tasks that contribute to burnout. Achieving this requires a more collaborative infrastructure to help shape what that looks like, and co-create technology with shared expertise from all corners of the industry including AI ethicists, clinicians, engineers, researchers, policymakers and users themselves.


Cybersecurity skills shortage: can upskilling close the talent gap?

According to reports, the global cybersecurity workforce gap exceeded 4 million professionals in 2023, with India alone requiring more than 500,000 skilled experts to meet current demand. This shortage is not merely a hiring challenge; it is a business risk. ... The traditional answer to talent shortages has been to hire more people. But in cybersecurity, where demand far outstrips supply, hiring alone cannot solve the problem. Upskilling (training existing employees to meet evolving requirements) offers a sustainable solution. Upskilling is not about starting from scratch. It leverages existing talent pools, such as IT administrators, network engineers, or even software developers, and equips them with cybersecurity expertise. ... While technology plays a central role in cybersecurity, the human factor remains the ultimate line of defense. Many high-profile breaches stem not from technical weaknesses but from human errors such as phishing clicks or misconfigured systems. Upskilling programs must therefore go beyond technical mastery to also emphasise behavioral awareness, ethical responsibility, and decision-making under pressure. ... The cybersecurity talent gap is unlikely to vanish overnight. However, the organisations that will thrive are those that view the challenge not as a bottleneck but as an opportunity to reimagine workforce development. Upskilling is the most pragmatic path forward, enabling companies to build resilience, retain talent, and remain competitive in an era of escalating cyber risks.

Daily Tech Digest - September 28, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


What happens when AI becomes the customer?

If the first point of contact is no longer a person but an AI agent, then traditional tactics like branding, visual merchandising or website design will have reduced impact. Instead, the focus will move to how easily machines can find and understand product information. Retailers will need to ensure that data, from specifications and availability to pricing and reviews, is accurate, structured and optimised for AI discovery. Products will no longer be browsed by humans but scanned and filtered by autonomous systems making selections on someone else’s behalf. ... This trend is particularly strong among younger and higher-income consumers. People under 35 are far more likely to use AI throughout the buying process, particularly for everyday items like groceries, toiletries and clothes. For this group, convenience matters. Many are comfortable letting technology take over simple tasks, and when it comes to low cost, low risk products, they’re happy for AI to handle the entire purchase. ... These developments point to the rise of the agentic internet – a world in which AI agents become the main way consumers interact with brands. As these tools search, compare, buy and manage products on users’ behalf, they will reshape how visibility, loyalty and influence work. Retailers have less than five years to respond. That means investing in clean, structured product data, adapting automation where it’s welcomed, and keeping the human touch where trust matters. 
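
One established way to make product data structured for machine consumption is schema.org's Product vocabulary expressed as JSON-LD. The sketch below emits such a record in Python; all field values are invented for illustration.

```python
import json

# A minimal sketch of machine-readable product data using schema.org's
# Product vocabulary (JSON-LD). The point is that an AI agent can parse
# specifications, price, availability, and reviews without scraping
# human-oriented page layout. Values are invented.

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Running Shoe",
    "sku": "EX-123",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "212",
    },
}
print(json.dumps(product, indent=2))
```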


The overlooked cyber risk in data centre cooling systems

Data centre operations are critically dependent on a complex ecosystem of OT equipment, including HVAC and building management systems. As operators adopt closed-loop and waterless cooling to improve efficiency, these systems are increasingly tied into BMS and DCIM platforms. This expands the attack surface of networks that were once more segmented. A compromise of these systems could directly affect temperature, humidity or airflow, with clear implications for the availability of services that critical infrastructure asset owners rely on. ... Resilience also depends on secure remote access, including multi-factor authentication and controlled jump-host environments for vendors and third parties. Finally, risk-based vulnerability management ensures that critical assets are either patched, mitigated, or closely monitored for exploitation, even where systems cannot easily be taken offline. Taken together, these controls provide a framework for protecting data centre cooling and building systems without slowing the drive for efficiency and innovation. ... As the UK expands its data centre capacity to fuel AI ambitions and digital transformation, cybersecurity must be designed into the physical systems that keep those facilities stable. Cooling is not just an operational detail. It is a potential target — and protecting it is essential to ensuring the sector’s growth is sustainable, resilient, and secure.


Rethinking Regression Testing with Change-to-Test Mapping

Regression testing is essential to software quality, but in enterprise projects it often becomes a bottleneck. Full regression suites may run for hours, delaying feedback and slowing delivery. The problem is sharper in agile and DevOps, where teams must release updates daily. ... The need for smarter regression strategies is more urgent than ever. Modern software systems are no longer monoliths; they are built from microservices, APIs, and distributed components, each evolving quickly. Every code change can ripple across modules, making full regressions increasingly impractical. At the same time, CI/CD costs are rising sharply. Cloud pipelines scale easily but generate massive bills when regression packs run repeatedly. ... The core idea is simple: “If only part of the code changes, why not run only the tests covering that part?” Change-to-test mapping links modified code to the relevant tests. Instead of running the entire suite on every commit, the approach executes a targeted subset – while retaining safeguards such as safety tests and fallback runs. What makes this approach pragmatic is that it does not rely on building a “perfect” model of the system. Instead, it uses lightweight signals – such as file changes, annotations, or coverage data – to approximate the most relevant set of tests. Combined with guardrails, this creates a balance: fast enough to keep up with modern delivery, yet safe enough to trust in production-grade environments.
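
Below is a minimal sketch of the selection logic, assuming a hand-maintained file-to-test map with hypothetical paths. Real systems derive the map from coverage data or build-graph analysis; the mandatory safety suite and the full-suite fallback are the guardrails the article describes.

```python
# A minimal sketch of change-to-test mapping. The map and paths are
# hypothetical; production systems derive them from coverage signals.

CHANGE_TO_TESTS = {
    "src/payments.py": {"tests/test_payments.py"},
    "src/profile.py": {"tests/test_profile.py"},
}
SAFETY_SUITE = {"tests/test_smoke.py"}   # always runs, regardless of the diff

def select_tests(changed_files: list[str]) -> set[str]:
    selected = set(SAFETY_SUITE)
    for path in changed_files:
        if path not in CHANGE_TO_TESTS:
            return {"ALL"}   # unmapped change: fall back to the full regression
        selected |= CHANGE_TO_TESTS[path]
    return selected

print(select_tests(["src/payments.py"]))     # targeted subset + safety tests
print(select_tests(["src/unknown_new.py"]))  # {'ALL'} fallback
```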


Is A Human Touch Needed When Compliance Has Automation?

Even with technical issues, automation may highlight missing patches, but humans are the ones who must prioritize fixes, coordinate remediation, and validate that vulnerabilities are closed. Audits highlight this divide even more clearly. Regulators rarely accept a data dump without explanation. Compliance officers must be able to explain how controls work, why exceptions exist, and what is being done to address them. Without human review, automated alerts risk creating false positives, blind spots, or alert fatigue. Perhaps most critically, over-dependence on automation can erode institutional knowledge, leaving teams unprepared to interpret risk independently. ... By eliminating repetitive evidence collection, teams gain the capacity to analyze training effectiveness, scenario-plan future threats, and interpret regulatory changes. Automation becomes not a replacement for people, but a multiplier of their impact. ... Over-reliance on automation carries its own risks. A clean dashboard may mask legacy systems still in production or system blind spots if a monitoring tool goes down. Without active oversight, teams may not discover gaps until the next audit. There’s also the danger of compliance becoming a “black box,” where staff interact with dashboards but never learn how to evaluate risk themselves. CIOs need to actively design against these vulnerabilities.


14 Challenges (And Solutions) Of Filling Fractional Leadership Roles

Filling a fractional leadership role is tough when companies underestimate the expertise required to thrive in such a role. Fractional leaders need both autonomy and seamless integration with key stakeholders. ... One challenge of fractional leadership is grasping the company culture and processes with limited time on site. Without that context, even the most skilled leader can struggle to drive meaningful change or build credibility. ... Finding the right culture fit for a fractional leadership role can be challenging. High-performing leadership teams are tight-knit ecosystems, and a fractional leader’s challenges with breaking into them and fitting into their culture can be daunting. ... One challenge is unrealistic expectations—wanting full-time availability at part-time cost. The key is to define scope, decision rights and deliverables upfront. Treat fractional leaders as strategic partners, not stopgaps. Clear onboarding and aligned incentives are essential to driving value and trust. ... A common hurdle with fractional roles is misaligned expectations—impact is needed fast, but boundaries and authority aren’t always defined. The fix? Be upfront: outline goals, decision-making limits and integration plans early so leaders can add value quickly without friction.


Will the EU Designate AI Under the Digital Markets Act?

There are two main ways in which the DMA will be relevant for generative AI services. First, a generative AI player may offer a core platform service and meet the gatekeeper requirements of the DMA. Second, generative AI-powered functionalities may be integrated or embedded in existing designated core platform services and therefore be covered by the DMA obligations. Those obligations apply in principle to the entire core platform service as designated, including features that rely on generative AI. ... Cloud computing is already listed as a core platform service under the DMA, and thus, designating cloud services would be a much faster process than creating a new core platform service category. Michelle Nie, a tech policy researcher formerly with the Open Markets Institute, says the EU should designate cloud providers to tackle the infrastructural advantages held by gatekeepers. Indeed, she has previously written for Tech Policy Press that doing so “would help address several competitive concerns like self-preferencing, using data from businesses that rely on the cloud to compete against them, or disproportionate conditions for termination of services.” ... Introducing contestability and fairness, the stated goals of the DMA, into digital ecosystems increasingly relied on by private and public institutions could not be more critical. 


The Looming Authorization Crisis: Why Traditional IAM Fails Agentic AI

From copilots booking travel to intelligent agents updating systems and coordinating with other bots, we’re stepping into a world where software can reason, plan, and operate with increasing autonomy. This shift brings immense promise and significant risk. The identity and access management (IAM) infrastructures that we rely upon today were built for people and fixed service accounts. They weren’t designed to manage self-directing, dynamic digital agents. And yet that’s what Agentic AI demands. ... The road to a comprehensive and internationally accessible Agentic AI IAM framework is daunting. The rapid pace of AI development demands accelerated IAM security guidance, especially for heavily regulated sectors. Continued research, continued development of standards, and rigorous interoperability are required to prevent fragmentation into incompatible identity silos. We must also address the ethical issues, such as bias detection and mitigation in credentials, and offer transparency and explainability of IAM decisions. ... The stakes are high. Without a comprehensive plan for managing these agents—one that tracks who they are, what they can perceive, and when their permissions expire—we risk disaster by way of complexity and compromise. Identity remains the foundation of enterprise security, and its scope must expand rapidly to shield the autonomous revolution.
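
To make "what they can perceive, and when their permissions expire" concrete, here is a minimal Python sketch of short-lived, scoped agent credentials. The scope names and the 15-minute TTL are hypothetical choices for illustration, not a prescribed standard.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# A minimal sketch of short-lived, scoped credentials for an AI agent.
# Unlike a static service-account key, permissions carry an explicit
# expiry and are checked on every use.

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str
    scopes: frozenset
    expires_at: datetime

def authorize(cred: AgentCredential, scope: str) -> bool:
    """Allow an action only if the scope is granted and not expired."""
    now = datetime.now(timezone.utc)
    return scope in cred.scopes and now < cred.expires_at

cred = AgentCredential(
    agent_id="travel-agent-7",
    scopes=frozenset({"calendar:read", "bookings:create"}),
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
)
print(authorize(cred, "bookings:create"))  # True until the 15-minute TTL lapses
print(authorize(cred, "payments:send"))    # False: never granted
```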


How immutability tamed the Wild West

One of the first lessons that a new programmer should learn is that global variables are a crime against all that is good and just. If a variable is passed around like a football, and its state can change anywhere along the way, then its state will change along the way. Naturally, this leads to hair pulling and frustration. Global variables create coupling, and deep and broad coupling is the true crime against the profession. At first, immutability seems kind of crazy—why eliminate variables? Of course things need to change! How the heck am I going to keep track of the number of items sold or the running total of an order if I can’t change anything? ... The key to immutability is understanding the notion of a pure function. A pure function is one that always returns the same output for a given input. Pure functions are said to be deterministic, in that the output is 100% predictable based on the input. In simpler terms, a pure function is a function with no side effects. It will never change something behind your back. ... Immutability doesn’t mean nothing changes; it means values never change once created. You still “change” by rebinding a name to a new value. The notion of a “before” and “after” state is critical if you want features like undo, audit tracing, and other things that require a complete history of state. Back in the day, GOSUB was a mind-expanding concept. It seems so quaint today. 
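
Here is a minimal Python sketch of the pure-function idea: "changing" an order produces a new value while the old state survives, which is exactly what makes undo and audit trails cheap. The `Order` shape is invented for illustration.

```python
from dataclasses import dataclass, replace

# A minimal sketch of immutability via pure functions. Nothing mutates:
# adding an item produces a new Order, and the "before" state survives.

@dataclass(frozen=True)
class Order:
    items: tuple
    total: float

def add_item(order: Order, item: str, price: float) -> Order:
    """Pure function: the same inputs always yield the same new Order."""
    return replace(order, items=order.items + (item,), total=order.total + price)

before = Order(items=(), total=0.0)
after = add_item(before, "coffee", 3.50)

history = [before, after]   # a complete history of state, for free
print(before)               # Order(items=(), total=0.0) -- unchanged
print(after)                # Order(items=('coffee',), total=3.5)
```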


What Lessons Can We Learn from the Internet for AI/ML Evolution?

One of the defining principles of the Internet was to keep the core simple and push the intelligence to the edge. The network and its host computers simply delivered packets reliably without dictating or controlling applications. That principle enabled the explosion of the Web, streaming, and countless other services. In AI, similar principles should be considered. Instead of centralizing everything in “one foundational model”, we should empower distributed agents and edge intelligence. Core infrastructure should stay simple and robust, enabling diverse use cases on top. ... One of the most important lessons of all from the Internet is that no single company or government owns or controls the TCP/IP stack. It is neutral governance that created global trust and adoption. Institutions such as ICANN and the regional Internet registries (RIRs) played a key role by managing domain names and IP address assignments in an open and transparent way, ensuring that resources were allocated fairly across geographies. This kind of neutral stewardship allowed the Internet to remain interoperable and borderless. On the other hand, today’s AI landscape is controlled by a handful of big-tech companies. To scale AI responsibly, we will need similar global governance structures—an “IETF for AI,” complemented by neutral registries that can manage shared resources such as model identifiers, agent IDs, coordinating protocols, among others.


Digital Transformation: Investments Soar, But Cyber Risks (Often) Outpace Controls

With the accelerating digital transformation, periodic security and compliance reviews are obsolete. Nelson emphasizes the need for “continuous assessment—continuous monitoring of privacy, regulatory, and security controls,” with automation used wherever feasible. Third-party and supply-chain risk must be continuously monitored, not just during vendor onboarding. Similarly, asset management can no longer be neglected, as even overlooked legacy devices—like unpatched Windows XP machines in manufacturing—can serve as vectors for persistent threats. Effective governance is crucial to enhancing security during periods of rapid digital transformation, Nelson emphasized. By establishing robust frameworks and clear policies for acceptable use, organizations can ensure that new technologies, such as AI, are adopted responsibly and securely. ... Maintaining cybersecurity within Governance, Risk, and Compliance (GRC) programs helps keep security from being a reactive cost center, as security measures are woven into the digital strategy from the outset, rather than being retrofitted. And GRC frameworks provide real-time visibility into organizational risks, facilitate data-driven decision-making, and create a culture where risk awareness coexists with innovation. This harmony between governance and digital initiatives helps businesses navigate the digital landscape while ensuring their operations remain secure, compliant, and prepared to adapt to change.

Daily Tech Digest - September 27, 2025


Quote for the day:

"The starting point of all achievement is desire." -- Napolean Hill


Senate Bill Seeks Privacy Protection for Brain Wave Data

The senators contend that a growing number of consumer wearables and devices "are quietly harvesting sensitive brain-related data with virtually no oversight and no limits on how it can be used." Neural data, such as brain waves or signals from neural implants, can potentially reveal thoughts, emotions or decision-making patterns that could be collected and used by third parties, such as data brokers, to manipulate consumers and even potentially threaten national security, the senators said. ... Colorado defines neural data "as information that is generated by the measurement of the activity of an individual's central or peripheral nervous systems and that can be processed by or with the assistance of a device,'" Rose said. Neural data is a subcategory of "biological data," which Colorado defines as "data generated by the technological processing, measurement, or analysis of an individual's biological, genetic, biochemical, physiological, or neural properties, compositions, or activities or of an individual's body or bodily functions, which data is used or intended to be used, singly or in combination with other personal data, for identification purposes," she said. ... Neuralink is currently in clinical trials for an implantable, wireless brain device designed to interpret a person's neural activity. The device is designed to help patients operate a computer or smartphone "by simply intending to move - no wires or physical movement are required."


The hidden cyber risks of deploying generative AI

Unfortunately, organizations aren’t thinking enough about security. The World Economic Forum (WEF) reports that 66% of organizations believe AI will significantly affect cybersecurity in the next 12 months, but only 37% have processes in place to assess AI security before deployment. Smaller businesses are even more exposed—69% lack safeguards for secure AI deployment, such as monitoring training data or inventorying AI assets. Accenture finds similar gaps: 77% of organizations lack foundational data and AI security practices, and only 20% express confidence in their ability to secure generative AI models. ... Both WEF and Accenture emphasize that the organizations best prepared for the AI era are those with integrated strategies and strong cybersecurity capabilities. Accenture’s research shows that only 10% of companies have reached what it calls the “Reinvention-Ready Zone,” which combines mature cyber strategies with integrated monitoring, detection and response capabilities. Firms in that category are 69% less likely to experience AI-powered cyberattacks than less prepared organizations. ... For enterprises, the path forward is about balancing ambition with caution. AI can boost efficiency, creativity and competitiveness, but only if deployed responsibly. Organizations should make AI security a board-level priority, establish clear governance frameworks, and ensure their cybersecurity teams are trained to address emerging AI-driven threats.


7 hard-earned lessons of bad IT manager hires

Hiring IT managers is difficult. You are looking for a unicorn-like set of skills: the technical acuity to understand projects and guide engineers, the people skills to do so without ruffling feathers, and a leadership mindset that can build a team and take it in the right direction. Hiring for any tech role can be fraught with peril — with IT managers it’s even more so. One recent study found that 87% of technology leaders are struggling to find talent that has the skills they need. And when they do find that rare breed, it’s often not as perfect as it first seemed. Deloitte’s 2025 Global Human Capital Trends survey found that, for two-thirds of managers and executives, recent hires did not have what was needed. Given this landscape, you’re bound to make mistakes. But you don’t have to make all of them yourself. You can learn from what others have experienced and go into this effort with hard-won experience — even if it isn’t your own. ... Managing that many people is crushing. “It’s hard to keep track of what they’re all working on or how to set them up for success,” Mishra says. “I saw signs of dysfunction. People felt directionless and were getting blocked. Some brilliant engineers were taking on manager tasks because I was in back-to-back meetings and firefighting all the time. Productivity lowered because my top performers were doing things not natural to them.”


When Your CEO’s Leadership Creates Chaos

By speaking her CEO’s language, she shifted from being perceived as obstructive to being seen as a trusted advisor. Leaders are far more receptive when ideas connect directly to their stated priorities. Test every message against your CEO’s core priorities, growth, clients, investors, or whatever drives them. Reinforce your case with external validation such as market data, board expectations, or customer benchmarks. ... Fast-moving CEOs often create organizational whiplash by revisiting decisions or overruling execution midstream. Ambiguity fuels frustration. The antidote is building explicit agreements, which reduces micromanagement while preserving momentum. ... To avoid overlap and blind spots, the group divided responsibilities into distinct categories: customer acquisition, customer retention, and operational efficiency. Together, they then presented a unified, comprehensive strategy to the CEO. This not only made their recommendations harder to dismiss but also replaced a sense of isolation with coordinated leadership. Informal dinners, side meetings, and peer check-ins strengthened the coalition and amplified their collective voice. ... At the offsite, Alex connected her weekly progress updates to a broader organizational direction-setting check-in: revisiting the vision, identifying big moves, reallocating resources, and choosing one operating principle to shift. This kept her updates both visible and tied to strategy. 


From outdated IT to smart modern workplaces: how to do that?

Many organizations still run critical systems on-premises, while at the same time wanting to use cloud applications. As a result, traditional management with domains and Group Policy Objects (GPOs) is slowly disappearing. Microsoft Intune offers an alternative, but in practice, it is less streamlined. “What you used to manage centrally with GPOs now has to be set up in different places in Intune,” explains Van Wingerden. ... A hybrid model inevitably involves more complex budgeting. Costs for virtual machines, storage, or licenses only become apparent over time, which means financial surprises are lurking. Technical factors also play a role. Some applications perform better locally due to latency or regulations, while others benefit from cloud scalability. The result? ... The traditional closed workplace no longer suffices in this new landscape. Zero Trust is becoming the starting point, with dynamic verification per user and context. “We can say: based on the user’s context, we make things possible or impossible within that Windows workplace,” says Van Wingerden. Think of applications that run locally at the office but are available as remote apps when working from home. This creates a balance between ease of use and security. This context-sensitive approach is sorely needed. Cybercriminals are increasingly targeting endpoints and user accounts, where traditional perimeters fall short. 


Cisco Firewall Zero-Days Exploited in China-Linked ArcaneDoor Attacks

“Attackers were observed to have exploited multiple zero-day vulnerabilities and employed advanced evasion techniques such as disabling logging, intercepting CLI commands, and intentionally crashing devices to prevent diagnostic analysis,” Cisco explains. While it has yet to be confirmed by the wider cybersecurity community, there is some evidence suggesting that the hackers behind the ArcaneDoor campaign are based in China. ... Users are advised to update their devices as soon as possible, as the fixed release will automatically check the ROM and remove the attackers’ persistence mechanism. Users are also advised to rotate all passwords, certificates, and keys following the update. “In cases of suspected or confirmed compromise on any Cisco firewall device, all configuration elements of the device should be considered untrusted,” Cisco notes. The company also released a detection guide to help organizations hunt for potential compromise associated with the ArcaneDoor campaign. ... “An attacker could exploit this vulnerability by sending crafted HTTP requests to a targeted web service on an affected device after obtaining additional information about the system, overcoming exploit mitigations, or both. A successful exploit could allow the attacker to execute arbitrary code as root, which may lead to the complete compromise of the affected device,” the company notes.


5 ways you can maximize AI's big impact in software development

Tony Phillips, engineering lead for DevOps services at Lloyds Banking Group, said his firm is running a program called Platform 3.0, which aims to modernize infrastructure and lay the groundwork for adopting AI. He said the next step is to move beyond using AI to assist with coding and to boost all areas of the development process. "We are creating productivity boosts in our developer community, but we are now looking at how we take that forward across the rest of the pipeline for what we ship." ... He said the bank's initial explorations into AI suggest that learning from experiences is an important best practice. "There's always a balance, because you've got to let people get hold of the technology, put it in their context of what they're doing, and then understand what good looks like," he said. "Then you've got to build the capacity for what gets fed back so that you can respond quickly." ... Like others, Terry said governance is crucial. Give developers feedback when they take non-compliant actions -- and AI might help with this process. "We have a lot of different platforms and maybe haven't created a dotted line between all the platforms," he said. "AI might be the opportunity to do that and give developers the chance to do the right thing from the beginning." ... Terry also referred to the rise of vibe coding and suggested it shouldn't be used by people who have just begun coding in an enterprise setting.


Ethical cybersecurity practice reshapes enterprise security in 2025

The tension between innovation and risk management represents an important challenge for modern organisations. Push too hard for innovation without adequate safeguards and companies risk data breaches and compliance violations. Focus too heavily on risk mitigation, and organisations may find themselves unable to compete in evolving markets. ... The ethical AI component emphasises explainability. Rather than generating “black box” alerts, ManageEngine’s systems explain their reasoning. An alert might read: “The endpoint cannot log in at this time and is trying to connect to too many network devices.” ... The balance between necessary security monitoring and privacy invasion represents one of the most delicate aspects of ethical cybersecurity practices. Raymond acknowledges that while proactive monitoring is essential for detecting threats early, over-monitoring risks creating a surveillance environment that treats employees as suspects rather than trusted partners. ... For organisations seeking to integrate ethical considerations into their cybersecurity strategies, Raymond recommends three concrete steps: adopting a cybersecurity ethics charter at the board level, embedding privacy and ethics in technology decisions when selecting vendors, and operationalising ethics through comprehensive training and controls that explain not just what to do, but why it matters.


What is infrastructure as code? Automating your infrastructure builds

Infrastructure as code (IaC) is the practice of writing plain-text declarative configuration files that automated tools use to manage and provision servers and other computing resources. In the pre-cloud days, sysadmins would often customize the configuration of individual on-premises server systems, but as more and more organizations moved to the cloud, those skills became less relevant and useful. ... and Puppet founder Luke Kanies started to use the terminology. In a world of distributed applications, hand-tuning servers was never going to scale, and scripting had its own limitations, so being able to automate infrastructure provisioning became a core need for many first movers back in the early days of cloud. Today, that underlying infrastructure is more commonly provisioned as code, thanks to popular early tools in this space such as Chef, Puppet, SaltStack, and Ansible. ... But the neat boundaries between tools and platforms have blurred, and many enterprises no longer rely on a single IaC solution, but instead juggle multiple tools across different teams or cloud providers. For example, Terraform or OpenTofu may provision baseline resources, while Ansible handles configuration management, and Kubernetes-native frameworks like Crossplane provide a higher layer of abstraction. This “multi-IaC” reality introduces new challenges in governance, dependency management, and avoiding configuration drift.
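
To ground the declarative idea, here is a minimal sketch using Pulumi's Python SDK, one of several modern IaC options; Terraform's HCL expresses the same intent in its own syntax. It assumes the pulumi and pulumi_aws packages are installed and AWS credentials are configured; the resource name and tags are placeholders.

import pulumi
import pulumi_aws as aws

# Declare the desired state: a tagged S3 bucket. The IaC engine diffs this
# declaration against real infrastructure and computes the create, update,
# or delete steps needed to reconcile the two; there is no imperative
# provisioning script to maintain.
assets_bucket = aws.s3.Bucket(
    "app-assets",
    tags={"team": "platform", "managed-by": "iac"},
)

# Export the generated bucket name so other stacks or tools can consume it.
pulumi.export("bucket_name", assets_bucket.id)

Because the file declares an end state rather than a sequence of commands, the same definition can be reviewed, versioned, and reapplied, which is what makes drift between declared and actual infrastructure detectable in the first place.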


Software Upgrade Interruptions: The New Challenge for Resilience

The growing cost of upgrade outages derives from three interwoven sources. First, increasing digitization means that applications entirely reliant on computational capacity now handle more of our daily activities. Second, as centrally managed cloud-based data storage and application hosting replace local storage and processing on phones, local servers, and computers, functions once vulnerable only to failures in a handful of locally managed steps are now exposed to a diverse chain of links spanning both data movement and operational processing. ... Third, the complexity of the software processing the data is also increasing, as ever more intricate systems interact to manage and control the relevant operations. ... From a supply chain risk management perspective, these three forces mean that risks to the resilience of operational delivery of all kinds—not just telecommunications services—have slowly and inexorably increased with the evolution of cloud computing. And arguably, these chains are at their most vulnerable when updates are made to software at any point along the chain. Since no test system mirrors the full scope of operations for these complex services, there is no way to guarantee that nothing will go wrong, and service outages from this source will inevitably both increase and impose their full costs in real time in the real world.

Daily Tech Digest - September 26, 2025


Quote for the day:

“You may be disappointed if you fail, but you are doomed if you don’t try.” -- Beverly Sills



Moving Beyond Compliance to True Resilience

Organisations that treat compliance as the finish line are missing the bigger picture. Compliance frameworks such as HIPAA, GDPR, and PCI-DSS provide critical guidelines, but they are not designed to cover the full spectrum of evolving cyber threats. Cybercriminals today use AI-driven reconnaissance, deepfake impersonations, and polymorphic phishing techniques to bypass traditional defences. Meanwhile, businesses face growing attack surfaces from hybrid work models and interconnected systems. A lack of leadership commitment, underfunded security programs, and inadequate employee training exacerbate the problem. ... Building resilience requires more than reactive policies; it calls for layered, proactive defence mechanisms such as threat intelligence, endpoint detection and response (EDR), and intrusion prevention systems (IPS). These tools, which should form the front line of defence, are essential for identifying and stopping threats before they can cause damage, ultimately reducing exposure and giving teams the visibility they need to act swiftly. ... True cyber resilience means moving beyond regulatory compliance to develop strategic capabilities that protect against, respond to, and recover from evolving threats. This includes implementing both offensive and defensive security layers, such as penetration testing and real-time intrusion prevention, to identify weaknesses before attackers do.


Architecture Debt vs Technical Debt: Why Companies Confuse Them and What It Costs Business

The contrast is clear: technical debt reflects inefficiencies at the system level — poorly structured code, outdated infrastructure, or quick fixes that pile up over time. Architecture debt emerges at the enterprise level — structural weaknesses across applications, data, and processes that manifest as duplication, fragmentation, and misalignment. One constrains IT efficiency; the other constrains business competitiveness. Recognizing this difference is the first step toward making the right strategic investments. ... The difference lies in visibility: technical debt is tangible for developers, showing up in unstable code, infrastructure issues, and delayed releases. Architecture debt, by contrast, hides in organizational complexity: duplicated platforms, fragmented data, and misaligned processes. When CIOs and business leaders hear the word “debt,” they often assume it refers to the same challenge. It does not. ... Recognizing this distinction is critical because it determines where investments should be made. Addressing technical debt improves efficiency within systems; addressing architecture debt strengthens the foundations of the enterprise. One enables smoother operations, while the other ensures long-term competitiveness and resilience. Leaders who fail to separate the two risk solving local problems while leaving the structural weaknesses that undermine the organization’s future unchallenged.


Data Fitness in the Age of Emerging Privacy Regulations

Enter the concept of Data Fitness: a multidimensional measure of how well data aligns with privacy principles, business objectives, and operational resilience. Much like physical fitness, data fitness is not a one-time achievement but a continuous discipline. Data fitness is not just about having high-quality data, but also about ensuring that data is managed in a way that is compliant, secure, and aligned with business objectives. ... Emerging privacy regulations have also introduced a new layer of complexity to data management. They shift the focus from simply collecting and monetizing data to a more responsible and transparent approach, which calls for a sweeping review and redesign of all applications and processes that handle data. ... The days of storing customer data forever are over. New regulations often specify that personal data can only be retained for as long as it's needed for the purpose for which it was collected. This requires companies to implement robust data lifecycle management and automated deletion policies. ... Data privacy isn't just an IT or legal issue; it's a shared responsibility. Organizations must educate and train all employees on the importance of data protection and the specific policies they need to follow. A strong privacy culture can be a competitive advantage, building customer trust and loyalty. ... It's no longer just about leveraging data for profit; it's about being a responsible steward of personal information.
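
As one concrete illustration of the automated deletion point above, here is a minimal Python sketch of a retention sweep; the table, column, and one-year period are hypothetical, and a real pipeline would also cover backups and downstream copies.

import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # hypothetical policy: keep personal data one year

def purge_expired(conn: sqlite3.Connection) -> int:
    # Delete personal records older than the retention period. Assumes a
    # 'customers' table with a 'collected_at' ISO-8601 UTC timestamp column.
    cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
    cur = conn.execute("DELETE FROM customers WHERE collected_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount

conn = sqlite3.connect("crm.db")  # hypothetical database
print(f"purged {purge_expired(conn)} expired records")

Run on a schedule, a sweep like this turns a retention policy from a document into an enforced control, which is the shift the regulations are asking for.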


Independent Management of Cloud Secrets

An independent approach to NHI management can empower DevOps teams by automating the lifecycle of secrets and identities, thus ensuring that security doesn’t compromise speed or agility. By embedding secrets management into the development pipeline, teams can preemptively address potential overlaps and misconfigurations, as highlighted in the resource on common secrets security misconfigurations. Moreover, NHIs’ automation capabilities can assist DevOps enterprises in meeting regulatory audit requirements without derailing their agile processes. This harmonious blend of compliance and agility allows for a framework that effectively bridges the gap between speed and security. ... Automation of NHI lifecycle processes not only saves time but also fortifies systems through stringent access control. This is critical in large-scale cloud deployments, where automated renewal and revocation of secrets ensure uninterrupted and secure operations. More insightful strategies can be explored in Secrets Security Management During Development. ... While the integration of systems provides comprehensive security benefits, there is an inherent risk in over-relying on interconnected solutions. Enterprises need a balanced approach that allows systems to collaborate without letting a vulnerability in one segment compromise the rest. A delicate balance is found by maintaining independent secrets management systems, which operate cohesively but remain distinct from operational systems.
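
To make the lifecycle automation described above tangible, here is a toy Python sketch of expiry-driven rotation; the in-memory store, secret name, and 30-day interval are stand-ins for a real secrets manager and policy.

import secrets
from datetime import datetime, timedelta, timezone

ROTATION_INTERVAL = timedelta(days=30)  # hypothetical rotation policy

class SecretStore:
    # Toy in-memory store standing in for a real secrets manager.
    def __init__(self):
        self._entries: dict[str, tuple[str, datetime]] = {}

    def set(self, name: str, value: str) -> None:
        self._entries[name] = (value, datetime.now(timezone.utc))

    def due_for_rotation(self) -> list[str]:
        now = datetime.now(timezone.utc)
        return [name for name, (_, created) in self._entries.items()
                if now - created >= ROTATION_INTERVAL]

def rotate(store: SecretStore, name: str) -> None:
    # Issue a fresh credential and overwrite the old one; a real rotation
    # would also update every consumer and revoke the outgoing value.
    store.set(name, secrets.token_urlsafe(32))

store = SecretStore()
store.set("db-password", secrets.token_urlsafe(32))
for name in store.due_for_rotation():  # run this loop on a schedule
    rotate(store, name)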


Why cloud repatriation is back on the CIO agenda

Cost pressure often stems from workload shape. Steady, always-on services do not benefit from pay-as-you-go pricing. Rightsizing, reservations and architecture optimization will often close the gap, yet some services still carry a higher unit cost when they remain in public cloud. A placement change then becomes a sensible option. Three observations support a measurement-first approach: many organizations report that managing cloud spend is their top challenge; egress fees and associated patterns affect a growing share of firms; and the FinOps community places unit economics and allocation at the centre of cost accountability. ... Public cloud remains viable for many regulated workloads when sovereign configurations meet requirements. Examples include the AWS European Sovereign Cloud (scheduled to be released at the end of 2025), the Microsoft EU Data Boundary and Google’s sovereign controls and partner offerings. These options have scope limits that should be assessed during design. ... Repatriation tends to underperform where workloads are inherently elastic or seasonal, where high-value managed services would need to be replicated at significant opportunity cost, where the organization lacks the run maturity for private platforms, or where the cost issues relate primarily to tagging, idle resources or discount coverage that a FinOps reset can address.
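
To put rough numbers on the workload-shape point above, here is a small Python sketch comparing monthly cost per unit of capacity under three placement options; every rate is a hypothetical placeholder, not a quote from any provider.

HOURS_PER_MONTH = 730

# Hypothetical hourly rates for one comparable unit of compute capacity.
ON_DEMAND_RATE = 0.40  # pay-as-you-go: billed only while running
RESERVED_RATE = 0.25   # committed-use discount: billed regardless of use
ON_PREM_RATE = 0.18    # amortized hardware, power, and staff: always paid

def monthly_costs(utilization: float) -> dict[str, float]:
    # Utilization is the fraction of the month the workload actually runs.
    return {
        "on_demand": ON_DEMAND_RATE * HOURS_PER_MONTH * utilization,
        "reserved": RESERVED_RATE * HOURS_PER_MONTH,
        "on_prem": ON_PREM_RATE * HOURS_PER_MONTH,
    }

for label, utilization in [("always-on service", 1.0), ("bursty batch job", 0.15)]:
    costs = monthly_costs(utilization)
    cheapest = min(costs, key=costs.get)
    print(f"{label}: cheapest option is {cheapest} at {costs[cheapest]:.2f}/month")

With these placeholder rates the steady service is cheapest off the pay-as-you-go meter while the bursty job stays cheapest on demand, which is the measurement-first argument in miniature.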


Colocation meets regulation

While there have been many instances of behind-the-meter agreements in the data center sector, the AWS-Talen agreement differed in both scale and choice of energy. Unlike previous instances, which often utilized onsite renewables, the AWS deal involved a key regional generation asset that provides consistent and reliable power to the grid. As a result, to secure the go-ahead, PJM Interconnection, the regional transmission operator in charge of utility services in the state, had to apply for an amendment to the plant's existing Interconnection Service Agreement (ISA) permitting the increased power supply. However, rather than the swift approval the companies hoped for, two major utilities that operate in the region, Exelon and American Electric Power (AEP), vehemently opposed the amended ISA, submitting a formal objection to its provisions. ... Since the rejection by FERC, Talen and AWS have reworked the agreement, moving it from a behind-the-meter to a front-of-the-meter arrangement. The 17-year PPA will see Talen supply AWS with 1.92GW of power, ramped up over the next seven years, with the power delivered through PJM. This reflects a broader move within the sector, with both Talen and nuclear energy generator Constellation indicating their intention to focus on grid-based arrangements going forward. Despite this, Phillips still believes that under the right circumstances, colocation can be a powerful tool, especially for AI and hyperscale cloud deployments seeking to scale quickly.


Employees learn nothing from phishing security training, and this is why

Phishing training programs are a popular tactic aimed at reducing the risk of a successful phishing attack. They may be performed annually or continuously, and typically, employees are asked to watch and learn from instructional materials. They may also receive fake phishing emails sent by a training partner over time, and if they click on suspicious links within them, these failures to spot a phishing email are recorded. ... "Taken together, our results suggest that anti-phishing training programs, in their current and commonly deployed forms, are unlikely to offer significant practical value in reducing phishing risks," the researchers said. According to the researchers, a lack of engagement in modern cybersecurity training programs is to blame, with engagement times often recorded as less than a minute, or none at all. When there is no engagement with learning materials, it's unsurprising that there is no impact. ... To combat this problem, the team suggests that, for a better return on investment in phishing protection, a pivot to more technical safeguards could work: for example, imposing two-factor or multi-factor authentication (2FA/MFA) on endpoint devices, and restricting credential sharing and use to trusted domains only. That's not to say that phishing programs don't have a place in the corporate world. We should also go back to the basics of engaging learners.
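
As a sketch of the trusted-domains control mentioned above, here is a minimal Python allow-list check of the kind a password manager or browser extension might apply before filling corporate credentials; the domain list and matching rules are deliberately simplified.

from urllib.parse import urlparse

# Hypothetical allow-list of hosts where corporate credentials may be entered.
TRUSTED_HOSTS = {"login.example-corp.com", "sso.example-corp.com"}

def credentials_allowed(url: str) -> bool:
    # Allow credential fill only on exact, HTTPS, allow-listed hosts. Real
    # implementations also handle subdomains, punycode lookalikes, and
    # certificate validation; this shows the core idea only.
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in TRUSTED_HOSTS

print(credentials_allowed("https://login.example-corp.com/auth"))  # True
print(credentials_allowed("https://login.examp1e-corp.com/auth"))  # False: lookalike
print(credentials_allowed("http://login.example-corp.com/auth"))   # False: not HTTPS

The appeal of controls like this is that they work even when the user is fooled, which is exactly the failure mode training alone has struggled to fix.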


SOC teams face 51-second breach reality—Manual response times are officially dead

When it takes just 51 seconds for attackers to breach and move laterally, SOC teams need more help. ... Most SOC teams first aim to extend ROI from existing operations investments. Gartner's 2025 Hype Cycle for Security Operations notes that organizations want more value from current tools while enhancing them with AI to handle an expansive threat landscape. William Blair & Company's Sept. 18 note on CrowdStrike predicts that "agentic AI potentially represents a 100x opportunity in terms of the number of assets to secure," with TAM projected to grow from $140 billion this year to $300 billion by 2030. ... Kurtz's observation reflects concerns among SOC leaders and CISOs across industries. VentureBeat sees enterprises experimenting with differentiated architectures to solve governance challenges. Shlomo Kramer, co-founder and CEO of Cato Networks, offered a complementary view in a VentureBeat interview: "Cato uses AI extensively… But AI alone can't solve the range of problems facing IT teams. The right architecture is important both for gathering the data needed to drive AI engines, but also to tackle challenges like agility, connecting enterprise edges, and user experience." Kramer added, "Good AI starts with good data. Cato logs petabytes weekly, capturing metadata from every transaction across the SASE Cloud Platform. We enrich that data lake with hundreds of threat feeds, enabling threat hunting, anomaly detection, and network degradation detection."


Timeless inclusive design techniques for a world of agentic AI

Progressive enhancement and inclusive design allow us to design for as many users as possible. They are core components of user-centered design. The word "user" often hides the complex magnificence of the human being using your product, in all their beautiful diversity. And it’s this rich diversity that makes inclusive design so important. We are all different, and use things differently. While you enjoy that sense of marvel at the richness and wonder of your users' lives, there is no need to feel it for AI agents. These agents are essentially just super-charged "stochastic parrots" (to borrow a phrase from esteemed AI ethicist and professor of Computational Linguistics Emily M. Bender) guessing the next token. ... Every breakthrough since we learnt to make fire has been built on what came before. Isaac Newton said he could only see so far because he was "standing on the shoulders of giants". The techniques and approaches needed to enable this new wave of agent-powered AI devices have been around for a long time. But they haven't always been used. In our desire to ship the shiniest features, we often forget to make our products work for people who rely on accessibility features. ... Patterns are things like adding a "skip to content" link and implementing form validation in a way that makes it easier to recover from errors. Alongside patterns, there is a wealth of freely available accessibility testing tools that can tell you if your product meets the necessary standards.


Stronger Resilience Starts with Better Dependency Mapping

As recent disruptions made painfully clear, you cannot manage what you cannot see. When a single upstream failure ripples through eligibility checks, billing, scheduling, or clinical systems, executives need answers in minutes, not months. Who is impacted? What services are degraded? Which applications are truly critical? What are our fourth-party exposures? In too many organizations, those answers require a scavenger hunt. ... Modern operations rely on external platforms for authorizations, payments, data enrichment, analytics, and communications, yet many organizations stop their mapping at the data center boundary. That blind spot creates serious risk, since a single vendor outage can ripple across multiple critical services. Regulators are responding. In the U.S., the OCC, Federal Reserve, and FDIC’s 2023 Interagency Guidance on Third-Party Risk Management requires banks to identify and monitor critical vendor relationships, including subcontractors and concentration risks. ... Dependency data without impact data is trivia. Mapping is only valuable when assets and services are tied to business impact analysis (BIA) outputs like recovery time objectives and maximum tolerable downtime. Without this, leaders face a flat picture of connections but no way to prioritize what to restore first, or how long they can operate without a service before consequences cascade.
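
To show what tying dependency maps to BIA outputs can look like, here is a minimal Python sketch; the services, vendors, graph edges, and RTO values are all hypothetical. It walks the inverted dependency graph out from a failed vendor and orders the impacted services by recovery time objective so the most time-critical are restored first.

from collections import deque

# Hypothetical dependency graph: service -> the things it depends on.
DEPENDS_ON = {
    "billing": ["payments-vendor", "customer-db"],
    "scheduling": ["customer-db"],
    "eligibility": ["payments-vendor", "enrichment-api"],
    "clinical-portal": ["eligibility"],
}

# Hypothetical BIA outputs: recovery time objective in hours, per service.
RTO_HOURS = {"billing": 4, "scheduling": 8, "eligibility": 2, "clinical-portal": 1}

def impacted_services(failed_dependency: str) -> list[str]:
    # Invert the graph: dependency -> the services that rely on it.
    dependents: dict[str, list[str]] = {}
    for service, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(service)

    # Breadth-first walk from the failure, since impacted services can
    # themselves be dependencies of other services.
    seen: set[str] = set()
    queue = deque([failed_dependency])
    while queue:
        node = queue.popleft()
        for service in dependents.get(node, []):
            if service not in seen:
                seen.add(service)
                queue.append(service)
    return sorted(seen, key=lambda s: RTO_HOURS.get(s, float("inf")))

print(impacted_services("payments-vendor"))
# -> ['clinical-portal', 'eligibility', 'billing'] (tightest RTO first)

Even a toy model like this answers the executive questions in seconds rather than via a scavenger hunt: who is impacted, how far the failure ripples, and what must come back first.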