Daily Tech Digest - September 29, 2025


Quote for the day:

"Remember that stress doesn't come from what is going on in your life. It comes from your thoughts on what is going on in your life." -- Andrew Bernstein



Agentic AI in IT security: Where expectations meet reality

The first decision regarding AI agents is whether to layer them onto existing platforms or to implement standalone frameworks. The add-on model treats agents as extensions to security information and event management (SIEM), security orchestration, automation and response (SOAR), or other security tools, providing quick wins with minimal disruption. Standalone frameworks, by contrast, act as independent orchestration layers, offering more flexibility but also requiring heavier governance, integration, and change management. ... Agentic AI adoption rarely happens overnight. As Check Point’s Weigman puts it, “Most security teams aren’t swapping out their whole SOC for some shiny new AI system, and one can understand that: It’s expensive, and it demands time and human effort, which at the end of the day could appear to be too disruptive and costly.” Instead, leaders look for ways to incrementally layer new capabilities without jeopardizing ongoing operations, which makes pilots a common first step. ... “An agent designed to carry out a sequence of actions in response to a threat could inadvertently create new risks if misused or deployed inappropriately,” says Goje. “For instance, there’s potential for unregulated scripts or newly discovered vulnerabilities.” ... “Pricing remains a friction point,” says Fifthelement.ai’s Garini. “Vendors are playing with usage-based models, but organizations are finding value when they tie spend to analyst hours saved rather than raw compute or API calls.”


Anthropic, surveillance and the next frontier of AI privacy

Democratic legal systems are built on due process: Law enforcement must have grounds to investigate. Surveillance is meant to be targeted, not generalized. Allowing AI to conduct mass, speculative profiling would invert that principle, treating everyone as a potential suspect and granting AI the power to decide who deserves scrutiny. By saying “no” to this use case, Anthropic has drawn a red line. It is asserting that there are domains where the risk of harm to civil liberties outweighs the potential utility. ... How much should technology companies be able to control how their products are used, particularly once they are sold into government? Better yet, do they have a responsibility to ensure their products are used as intended? There is no easy answer. Enforcement of “terms of service” in highly sensitive contexts is notoriously difficult. A government agency may purchase access to an AI model and then apply it in ways that the provider cannot see or audit. ... The real challenge ahead is to establish publicly accountable frameworks that balance security needs with fundamental rights. Surveillance powered by AI will be more powerful, more scalable and more invisible than anything that came before. It has enormous potential when it comes to national security use cases. Yet without clear limits, it threatens to normalize perpetual, automated suspicion.


How attackers poison AI tools and defenses

AI systems that act with a high degree of autonomy carry another risk: impersonating users or trusting impostors. One tactic is known as a “Confused Deputy” attack. Here, an AI agent with high privileges performs a task on behalf of a low-privileged attacker. Another involves spoofed API access, where attackers trick integrations with services like Microsoft 365 or Gmail into leaking information or sending fraudulent emails. ... One crucial step is to make filters aware of how LLMs generate content, so they can flag anomalies in tone, behavior or intent that might slip past older systems. Another is to validate what AI systems remember over time. Without that check, poisoned data can linger in memory and influence future decisions. Isolation also matters. AI assistants should run in contained environments where unverified actions are blocked before they can cause damage. Identity management needs to follow the principle of least privilege, giving AI integrations only the access they require. Finally, treat every instruction with skepticism. Even routine requests must be verified before execution if zero-trust principles are to hold. ... The next wave of threats will involve agentic AI-powered systems that reason, plan and act on their own. While these tools can deliver tremendous productivity gains to users, their autonomy makes them attractive targets. If attackers succeed in steering an agent, the system could make decisions, launch actions or move data undetected.
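The defenses listed above, least privilege, isolation, and treating every instruction with skepticism, can be sketched as a simple action gate that sits between an AI assistant and the systems it touches. The action names and the allow-list below are illustrative assumptions, not part of any real product.

```python
# Illustrative zero-trust gate for an AI assistant's actions: every
# requested action must be on an explicit allow-list (least privilege)
# and must pass a verification step before it executes.
ALLOWED_ACTIONS = {"read_inbox", "draft_reply"}

def execute(action, approved_by_policy):
    """Return the gate's decision for a requested action.

    Anything outside the allow-list is blocked outright; allowed
    actions still require verification, so even routine requests
    are checked before execution.
    """
    if action not in ALLOWED_ACTIONS:
        return ("blocked", action)
    if not approved_by_policy:
        return ("needs_review", action)
    return ("executed", action)
```

In this sketch a "Confused Deputy" request such as a mass deletion would be blocked even if it arrived through a high-privilege agent, because the gate checks the action itself rather than the caller's privileges.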


‘AI and ML the main focus in tech right now’

AI and machine learning are undoubtedly the main focuses in technology right now, with mentions everywhere. A great way to upskill in this area is by attending talks and seminars, which are frequently held and provide valuable insights into how these technologies are being applied in the industry. These events also help you stay up to date on the latest developments. If you have a strong interest in the field, taking an online course, even a free one, can be a great way to grasp the fundamentals, learn the terminology, and understand how to effectively apply these technologies in your current role. Cloud technology is another area that’s here to stay. It’s widely adopted and incredibly versatile. Cloud certifications are highly accessible, with plenty of resources available to help you prepare for the exams and follow the learning paths they offer. ... Being a people person is incredibly beneficial in this field. A significant part of the job involves communication – whether it’s sharing ideas or networking with coworkers in your area. Building these connections can greatly enhance your ability to perform and succeed in your role. Problem-solving is another key aspect of software engineering, and it’s something I’ve always enjoyed. While it can be particularly challenging at times, the sense of accomplishment and reward when your efforts pay off is unmatched.


Better Data Beats Better Models: The Case for Data Quality in ML

Data quality is a broad and abstract concept, but it becomes more measurable when we break it down into different dimensions. Accuracy is the most important and obvious one: If the input data is wrong (e.g., mislabeled transactions in fraud detection models), the model will simply learn incorrect patterns. Completeness is equally important. Without a high degree of coverage for important features, the model will lack context and produce weaker predictions. For example, a recommender system missing key user attributes will fail to provide personalized recommendations. Freshness plays a subtle but powerful role in data quality. Outdated data appears correct, but does not reflect real-world conditions. ... Detecting data quality issues is not just about a single check but rather about continuous monitoring. Statistical distribution checks are the first line of defense, helping detect anomalies or sudden shifts that can indicate broken data pipelines. ... Ignoring data quality can often turn out to be very expensive. Teams spend large amounts of compute to retrain models on flawed data, only to observe little to no business impact. Launch timelines get pushed back since teams spend weeks debugging data issues, time that could otherwise have been spent on feature development. In industries that are regulated, like finance and healthcare, poor data quality can cause compliance violations and increased legal expenses.
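The statistical distribution checks described above can be sketched with a Population Stability Index (PSI) style comparison between a reference sample and the latest batch. The bucket count and the 0.1/0.2 thresholds below are common rules of thumb, used here as illustrative assumptions rather than fixed standards.

```python
import math

def psi(reference, current, buckets=10):
    """Population Stability Index between two numeric samples.

    Buckets are derived from the reference distribution; higher PSI
    values suggest the current batch has drifted away from it.
    """
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / buckets for i in range(buckets + 1)]
    edges[0] = float("-inf")   # catch values below the reference min
    edges[-1] = float("inf")   # catch values above the reference max

    def frac(sample, i):
        count = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(current, i) - frac(reference, i))
        * math.log(frac(current, i) / frac(reference, i))
        for i in range(buckets)
    )

def drift_alert(reference, current):
    """Turn a PSI value into a monitoring decision."""
    value = psi(reference, current)
    if value >= 0.2:
        return "investigate"  # likely pipeline break or real-world shift
    if value >= 0.1:
        return "watch"
    return "ok"
```

Run as part of pipeline ingestion, a check like this flags a sudden shift (say, a feature whose values jumped) before the model is retrained on the flawed batch.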


DORA 2025: Faster, But Are We Any Better?

The newest DORA report — the “State of AI-Assisted Software Development” — lands at a time when AI is eating everything from code generation to documentation to operations. And just like those early DORA reports reframed speed versus stability, this one is reframing what AI is actually doing to our software delivery pipelines. Spoiler alert: It’s not as simple as “AI makes everything better.” ... Now here’s the counterintuitive part. For the first time, DORA shows AI adoption is linked to higher throughput. That’s right — teams using AI are moving work through the system faster than those who aren’t. But before you pop the champagne, look at the other half of the finding: Instability is still higher in AI-heavy teams. Faster, yes. Safer? Not so much. If you’ve been around the block, this won’t shock you. We saw the same thing in the early days of automation — speed without discipline just meant you hit the wall quicker. ... Another gem buried in the report is the role of value stream management. AI tends to deliver “local optimizations” — an engineer codes faster, a test suite runs quicker — but without VSM, those wins don’t always roll up into business outcomes. With VSM in place, AI-driven productivity gains translate into measurable improvements at the team and product level. That, to me, is vintage DORA. Remember when they proved that culture — psychological safety, autonomy, collaboration — wasn’t just a warm fuzzy HR concept but directly correlated with elite performance? Same here. VSM turns AI from a toy into a force multiplier.


The 5 Technology Trends For 2026 Everyone Must Prepare For Now

In recent years, we've seen industry, governments, education and everyday folk scrambling to adapt to the disruptive impact of AI. But by 2026, we're starting to get answers to some of the big questions around its effect on jobs, business and day-to-day life. Now, the focus shifts from simply reacting to reinventing and reshaping in order to find our place in this brave, different and sometimes frightening new world. ... In tech, agents were undoubtedly the hot buzzword of 2025, representing a meaningful evolution over previous AI applications like chatbots and generative AI. Rather than simply answering questions and generating content, agents take action on our behalf, and in 2026, this will become an increasingly frequent and normal occurrence in everyday life. From automating business decision-making to managing and coordinating hectic family schedules, AI agents will handle the “busy work” involved in planning and problem-solving, freeing us up to focus on the big picture or simply slowing down and enjoying life. ... Quantum computing harnesses the strange and seemingly counterintuitive behavior of particles at the sub-atomic level to accomplish many complex computing tasks millions of times faster than "classic" computers. For the last decade, there's been excitement and hype over their performance in labs and research environments, but in 2026, we are likely to see further adoption in the real world. 


GreenOps and FinOps: Strategic Convergence in the Cloud Transformation Journey

FinOps, short for “Financial Operations,” is a cultural practice designed to bring financial accountability to the cloud. It blends engineering, finance, and business teams to manage cloud costs collaboratively and transparently. The goal is clear: maximize business value from the cloud by making spending decisions grounded in data and aligned with business objectives. ... GreenOps, on the other hand, is all about sustainability in cloud operations. It’s a discipline that encourages organizations to monitor, manage, and minimize the environmental footprint of their cloud usage. GreenOps revolves around using renewable energy-powered cloud resources, recycling or reusing digital assets, optimizing workloads, and selecting eco-friendly services, all with the aim of reducing carbon emissions and supporting broader sustainability goals. ... In practical terms, GreenOps activities such as deleting unused storage volumes, rightsizing virtual machines, and consolidating workloads not only shrink the carbon footprint but also slash monthly cloud bills. Thus, sustainability efforts act as “passive” cost optimizers—delivering FinOps benefits without explicit financial tracking. ... FinOps and GreenOps aren’t one-off projects but ongoing practices. Regular reviews, “cost and sustainability audits,” and optimization sprints keep teams focused. 
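The rightsizing activity mentioned above, where trimming oversized virtual machines cuts both cost and carbon, can be sketched as a simple utilization-based recommendation. The size names, thresholds, and percentile choice are illustrative assumptions; real inputs would come from a cloud provider's monitoring API.

```python
# Ordered by capacity; a real catalogue would come from the provider.
SIZES = ["small", "medium", "large"]

def rightsize(current_size, cpu_samples, low=0.2, high=0.7):
    """Recommend a VM size from CPU utilization samples (0.0 to 1.0).

    Uses the 95th percentile so brief spikes do not block a downsize.
    Thresholds are illustrative, not provider guidance.
    """
    p95 = sorted(cpu_samples)[int(0.95 * (len(cpu_samples) - 1))]
    i = SIZES.index(current_size)
    if p95 < low and i > 0:
        return SIZES[i - 1]   # downsize: saves cost and carbon
    if p95 > high and i < len(SIZES) - 1:
        return SIZES[i + 1]   # upsize to protect performance
    return current_size
```

A check like this, run on a schedule, is exactly the "passive" cost optimizer the article describes: the sustainability action and the FinOps saving fall out of the same recommendation.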


Rethinking AI’s Role in Mental Health with GPT-5

GPT-5 has surfaced critical questions in the AI mental health community: What happens when people treat a general purpose chatbot as a source of care? How should companies be held accountable for the emotional effects of design decisions? What responsibilities do we bear, as a health care ecosystem, in ensuring these tools are developed with clinical guardrails in place? ... OpenAI has since taken steps to restore user confidence by making its personality “warmer and friendlier,” and encouraging breaks during extended sessions. However, it doesn’t change the fact that ChatGPT was built for engagement, not clinical safety. The interface may feel approachable, especially appealing to those looking to process feelings around high-stigma topics – from intrusive thoughts to identity struggles – but without thoughtful design, that comfort can quickly become a trap. ... Designing for engagement alone won’t get us there, and we must design for outcomes rooted in long-term wellbeing. At the same time, we should broaden our scope to include AI systems that shape the care experience, such as reducing the administrative burden on clinicians by streamlining billing, reimbursement, and other time-intensive tasks that contribute to burnout. Achieving this requires a more collaborative infrastructure to help shape what that looks like, and co-create technology with shared expertise from all corners of the industry including AI ethicists, clinicians, engineers, researchers, policymakers and users themselves.


Cybersecurity skills shortage: can upskilling close the talent gap?

According to reports, the global cybersecurity workforce gap exceeded 4 million professionals in 2023, with India alone requiring more than 500,000 skilled experts to meet current demand. This shortage is not merely a hiring challenge; it is a business risk. ... The traditional answer to talent shortages has been to hire more people. But in cybersecurity, where demand far outstrips supply, hiring alone cannot solve the problem. Upskilling, training existing employees to meet evolving requirements, offers a sustainable solution. Upskilling is not about starting from scratch. It leverages existing talent pools, such as IT administrators, network engineers, or even software developers, and equips them with cybersecurity expertise. ... While technology plays a central role in cybersecurity, the human factor remains the ultimate line of defense. Many high-profile breaches stem not from technical weaknesses but from human errors such as phishing clicks or misconfigured systems. Upskilling programs must therefore go beyond technical mastery to also emphasise behavioral awareness, ethical responsibility, and decision-making under pressure. ... The cybersecurity talent gap is unlikely to vanish overnight. However, the organisations that will thrive are those that view the challenge not as a bottleneck but as an opportunity to reimagine workforce development. Upskilling is the most pragmatic path forward, enabling companies to build resilience, retain talent, and remain competitive in an era of escalating cyber risks.

Daily Tech Digest - September 28, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


What happens when AI becomes the customer?

If the first point of contact is no longer a person but an AI agent, then traditional tactics like branding, visual merchandising or website design will have reduced impact. Instead, the focus will move to how easily machines can find and understand product information. Retailers will need to ensure that data, from specifications and availability to pricing and reviews, is accurate, structured and optimised for AI discovery. Products will no longer be browsed by humans but scanned and filtered by autonomous systems making selections on someone else’s behalf. ... This trend is particularly strong among younger and higher-income consumers. People under 35 are far more likely to use AI throughout the buying process, particularly for everyday items like groceries, toiletries and clothes. For this group, convenience matters. Many are comfortable letting technology take over simple tasks, and when it comes to low cost, low risk products, they’re happy for AI to handle the entire purchase. ... These developments point to the rise of the agentic internet – a world in which AI agents become the main way consumers interact with brands. As these tools search, compare, buy and manage products on users’ behalf, they will reshape how visibility, loyalty and influence work. Retailers have less than five years to respond. That means investing in clean, structured product data, adapting automation where it’s welcomed, and keeping the human touch where trust matters. 


The overlooked cyber risk in data centre cooling systems

Data centre operations are critically dependent on a complex ecosystem of OT equipment, including HVAC and building management systems. As operators adopt closed-loop and waterless cooling to improve efficiency, these systems are increasingly tied into BMS and DCIM platforms. This expands the attack surface of networks that were once more segmented. A compromise of these systems could directly affect temperature, humidity or airflow, with clear implications for the availability of services that critical infrastructure asset owners rely on. ... Resilience also depends on secure remote access, including multi-factor authentication and controlled jump-host environments for vendors and third parties. Finally, risk-based vulnerability management ensures that critical assets are either patched, mitigated, or closely monitored for exploitation, even where systems cannot easily be taken offline. Taken together, these controls provide a framework for protecting data centre cooling and building systems without slowing the drive for efficiency and innovation. ... As the UK expands its data centre capacity to fuel AI ambitions and digital transformation, cybersecurity must be designed into the physical systems that keep those facilities stable. Cooling is not just an operational detail. It is a potential target — and protecting it is essential to ensuring the sector’s growth is sustainable, resilient, and secure.


Rethinking Regression Testing with Change-to-Test Mapping

Regression testing is essential to software quality, but in enterprise projects it often becomes a bottleneck. Full regression suites may run for hours, delaying feedback and slowing delivery. The problem is sharper in agile and DevOps, where teams must release updates daily. ... The need for smarter regression strategies is more urgent than ever. Modern software systems are no longer monoliths; they are built from microservices, APIs, and distributed components, each evolving quickly. Every code change can ripple across modules, making full regressions increasingly impractical. At the same time, CI/CD costs are rising sharply. Cloud pipelines scale easily but generate massive bills when regression packs run repeatedly. ... The core idea is simple: “If only part of the code changes, why not run only the tests covering that part?” Change-to-test mapping links modified code to the relevant tests. Instead of running the entire suite on every commit, the approach executes a targeted subset – while retaining safeguards such as safety tests and fallback runs. What makes this approach pragmatic is that it does not rely on building a “perfect” model of the system. Instead, it uses lightweight signals – such as file changes, annotations, or coverage data – to approximate the most relevant set of tests. Combined with guardrails, this creates a balance: fast enough to keep up with modern delivery, yet safe enough to trust in production-grade environments.
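The core idea above, linking modified code to relevant tests while retaining safety tests and a fallback run, can be shown in a few lines. The mapping table, file paths, and safety-test list below are illustrative assumptions; in practice the table would be derived from coverage data, ownership annotations, or file-change signals as the article describes.

```python
# Map source modules to the tests that cover them. In a real pipeline
# this table would be built from coverage data or annotations.
CHANGE_TO_TESTS = {
    "billing/invoice.py": {"tests/test_invoice.py"},
    "billing/tax.py": {"tests/test_invoice.py", "tests/test_tax.py"},
    "search/ranker.py": {"tests/test_ranker.py"},
}

# Safeguards that run on every commit, regardless of what changed.
SAFETY_TESTS = {"tests/test_smoke.py"}

def select_tests(changed_files):
    """Return the targeted subset for a commit.

    Unknown changes trigger the fallback: run the full suite rather
    than risk missing coverage for an unmapped file.
    """
    full_suite = set().union(*CHANGE_TO_TESTS.values()) | SAFETY_TESTS
    selected = set(SAFETY_TESTS)
    for path in changed_files:
        if path not in CHANGE_TO_TESTS:
            return full_suite  # no mapping: play it safe
        selected |= CHANGE_TO_TESTS[path]
    return selected
```

The fallback branch is the guardrail the article calls pragmatic: the mapping does not need to be perfect, because any gap degrades to a full regression run rather than a missed bug.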


Is A Human Touch Needed When Compliance Has Automation?

Even with technical issues, automation may highlight missing patches, but humans are the ones who must prioritize fixes, coordinate remediation, and validate that vulnerabilities are closed. Audits highlight this divide even more clearly. Regulators rarely accept a data dump without explanation. Compliance officers must be able to explain how controls work, why exceptions exist, and what is being done to address them. Without human review, automated alerts risk creating false positives, blind spots, or alert fatigue. Perhaps most critically, over-dependence on automation can erode institutional knowledge, leaving teams unprepared to interpret risk independently. ... By eliminating repetitive evidence collection, teams gain the capacity to analyze training effectiveness, scenario-plan future threats, and interpret regulatory changes. Automation becomes not a replacement for people, but a multiplier of their impact. ... Over-reliance on automation carries its own risks. A clean dashboard may mask legacy systems still in production or system blind spots if a monitoring tool goes down. Without active oversight, teams may not discover gaps until the next audit. There’s also the danger of compliance becoming a “black box,” where staff interact with dashboards but never learn how to evaluate risk themselves. CIOs need to actively design against these vulnerabilities.


14 Challenges (And Solutions) Of Filling Fractional Leadership Roles

Filling a fractional leadership role is tough when companies underestimate the expertise required to thrive in such a role. Fractional leaders need both autonomy and seamless integration with key stakeholders. ... One challenge of fractional leadership is grasping the company culture and processes with limited time on site. Without that context, even the most skilled leader can struggle to drive meaningful change or build credibility. ... Finding the right culture fit for a fractional leadership role can be challenging. High-performing leadership teams are tight-knit ecosystems, and a fractional leader’s challenges with breaking into them and fitting into their culture can be daunting. ... One challenge is unrealistic expectations—wanting full-time availability at part-time cost. The key is to define scope, decision rights and deliverables upfront. Treat fractional leaders as strategic partners, not stopgaps. Clear onboarding and aligned incentives are essential to driving value and trust. ... A common hurdle with fractional roles is misaligned expectations—impact is needed fast, but boundaries and authority aren’t always defined. The fix? Be upfront: outline goals, decision-making limits and integration plans early so leaders can add value quickly without friction.


Will the EU Designate AI Under the Digital Markets Act?

There are two main ways in which the DMA will be relevant for generative AI services. First, a generative AI player may offer a core platform service and meet the gatekeeper requirements of the DMA. Second, generative AI-powered functionalities may be integrated or embedded in existing designated core platform services and therefore be covered by the DMA obligations. Those obligations apply in principle to the entire core platform service as designated, including features that rely on generative AI. ... Cloud computing is already listed as a core platform service under the DMA, and thus, designating cloud services would be a much faster process than creating a new core platform service category. Michelle Nie, a tech policy researcher formerly with the Open Markets Institute, says the EU should designate cloud providers to tackle the infrastructural advantages held by gatekeepers. Indeed, she has previously written for Tech Policy Press that doing so “would help address several competitive concerns like self-preferencing, using data from businesses that rely on the cloud to compete against them, or disproportionate conditions for termination of services.” ... Introducing contestability and fairness, the stated goals of the DMA, into digital ecosystems increasingly relied on by private and public institutions could not be more critical. 


The Looming Authorization Crisis: Why Traditional IAM Fails Agentic AI

From copilots booking travel to intelligent agents updating systems and coordinating with other bots, we’re stepping into a world where software can reason, plan, and operate with increasing autonomy. This shift brings immense promise and significant risk. The identity and access management (IAM) infrastructures that we rely upon today were built for people and fixed service accounts. They weren’t designed to manage self-directing, dynamic digital agents. And yet that’s what Agentic AI demands. ... The road to a comprehensive and internationally accessible Agentic AI IAM framework is a daunting task. The rapid pace of AI development demands accelerated IAM security guidance, especially for heavily regulated sectors. Continued research, continued development of standards, and rigorous interoperability are required to prevent fragmentation into incompatible identity silos. We must also address the ethical issues, such as bias detection and mitigation in credentials, and offer transparency and explainability of IAM decisions. ... The stakes are high. Without a comprehensive plan for managing these agents—one that tracks who they are, what they can perceive, and when their permissions expire—we risk disaster by way of complexity and compromise. Identity remains the foundation of enterprise security, and its scope must expand rapidly to shield the autonomous revolution.
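One concrete piece of the plan the article calls for, tracking who an agent is, what it may do, and when its permissions expire, can be sketched as a short-lived, scoped credential. The agent ID, scope names, and 15-minute TTL below are illustrative assumptions, not any particular IAM product's model.

```python
from dataclasses import dataclass
import time

@dataclass(frozen=True)  # immutable: a credential can't be widened in place
class AgentCredential:
    """Least-privilege, expiring credential for an autonomous agent."""
    agent_id: str
    scopes: frozenset   # exactly the permissions the agent needs, no more
    expires_at: float   # epoch seconds; short TTLs limit the blast radius

    def allows(self, scope, now=None):
        """Check a requested scope against the grant and the clock."""
        now = time.time() if now is None else now
        return now < self.expires_at and scope in self.scopes

cred = AgentCredential(
    agent_id="travel-copilot-7",
    scopes=frozenset({"calendar:read", "flights:book"}),
    expires_at=time.time() + 900,  # 15-minute lifetime
)
```

Because the credential is frozen and carries its own expiry, a compromised agent cannot escalate by mutating its grant, and stale permissions age out without a revocation step.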


How immutability tamed the Wild West

One of the first lessons that a new programmer should learn is that global variables are a crime against all that is good and just. If a variable is passed around like a football, and its state can change anywhere along the way, then its state will change along the way. Naturally, this leads to hair pulling and frustration. Global variables create coupling, and deep and broad coupling is the true crime against the profession. At first, immutability seems kind of crazy—why eliminate variables? Of course things need to change! How the heck am I going to keep track of the number of items sold or the running total of an order if I can’t change anything? ... The key to immutability is understanding the notion of a pure function. A pure function is one that always returns the same output for a given input. Pure functions are said to be deterministic, in that the output is 100% predictable based on the input. In simpler terms, a pure function is a function with no side effects. It will never change something behind your back. ... Immutability doesn’t mean nothing changes; it means values never change once created. You still “change” by rebinding a name to a new value. The notion of a “before” and “after” state is critical if you want features like undo, audit tracing, and other things that require a complete history of state. Back in the day, GOSUB was a mind-expanding concept. It seems so quaint today. 
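The running-total worry above dissolves once you rebind names to new values instead of mutating old ones. Here is a minimal Python sketch, with illustrative item names, showing a pure function that returns a new order while leaving the old one intact.

```python
# A pure function: same input, same output, no side effects.
def add_item(order, item):
    """Return a NEW order tuple with the item appended.

    The original order is untouched, so both the 'before' and
    'after' states survive, which is what undo and audit need.
    """
    return order + (item,)

def total(order):
    """Running total computed from the current value, not tracked state."""
    return sum(price for _name, price in order)

before = (("book", 12.50),)
after = add_item(before, ("pen", 3.25))  # rebinding, not mutation
```

Nothing changed behind anyone's back: `before` still holds the one-item order, `after` holds two items, and "undo" is simply keeping a reference to the earlier value.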


What Lessons Can We Learn from the Internet for AI/ML Evolution?

One of the defining principles of the Internet was to keep the core simple and push the intelligence to the edge. The network and its host computers simply delivered packets reliably without dictating or controlling applications. That principle enabled the explosion of the Web, streaming, and countless other services. In AI, similar principles should be considered. Instead of centralizing everything in “one foundational model”, we should empower distributed agents and edge intelligence. Core infrastructure should stay simple and robust, enabling diverse use cases on top. ... One of the most important lessons of all from the Internet is that no single company or government owns or controls the TCP/IP stack. It is neutral governance that created global trust and adoption. Institutions such as ICANN, and the regional Internet registries (RIRs) played a key role by managing domain names and IP address assignments in an open and transparent way, ensuring that resources were allocated fairly across geographies. This kind of neutral stewardship allowed the Internet to remain interoperable and borderless. On the other hand, today’s AI landscape is controlled by a handful of big-tech companies. To scale AI responsibly, we will need similar global governance structures—an “IETF for AI,” complemented by neutral registries that can manage shared resources such as model identifiers, agent IDs, coordinating protocols, among others.


Digital Transformation: Investments Soar, But Cyber Risks (Often) Outpace Controls

With the accelerating digital transformation, periodic security and compliance reviews are obsolete. Nelson emphasizes the need for “continuous assessment—continuous monitoring of privacy, regulatory, and security controls,” with automation used wherever feasible. Third-party and supply-chain risk must be continuously monitored, not just during vendor onboarding. Similarly, asset management can no longer be neglected, as even overlooked legacy devices—like unpatched Windows XP machines in manufacturing—can serve as vectors for persistent threats. Effective governance is crucial to enhancing security during periods of rapid digital transformation, Nelson emphasized. By establishing robust frameworks and clear policies for acceptable use, organizations can ensure that new technologies, such as AI, are adopted responsibly and securely. ... Maintaining cybersecurity within Governance, Risk, and Compliance (GRC) programs helps keep security from being a reactive cost center, as security measures are woven into the digital strategy from the outset, rather than being retrofitted. And GRC frameworks provide real-time visibility into organizational risks, facilitate data-driven decision-making, and create a culture where risk awareness coexists with innovation. This harmony between governance and digital initiatives helps businesses navigate the digital landscape while ensuring their operations remain secure, compliant, and prepared to adapt to change.

Daily Tech Digest - September 27, 2025


Quote for the day:

"The starting point of all achievement is desire." -- Napolean Hill


Senate Bill Seeks Privacy Protection for Brain Wave Data

The senators contend that a growing number of consumer wearables and devices "are quietly harvesting sensitive brain-related data with virtually no oversight and no limits on how it can be used." Neural data, such as brain waves or signals from neural implants, can potentially reveal thoughts, emotions or decision-making patterns that could be collected and used by third parties, such as data brokers, to manipulate consumers and even potentially threaten national security, the senators said. ... Colorado defines neural data "as information that is generated by the measurement of the activity of an individual's central or peripheral nervous systems and that can be processed by or with the assistance of a device,'" Rose said. Neural data is a subcategory of "biological data," which Colorado defines as "data generated by the technological processing, measurement, or analysis of an individual's biological, genetic, biochemical, physiological, or neural properties, compositions, or activities or of an individual's body or bodily functions, which data is used or intended to be used, singly or in combination with other personal data, for identification purposes," she said. ... Neuralink is currently in clinical trials for an implantable, wireless brain device designed to interpret a person's neural activity. The device is designed to help patients operate a computer or smartphone "by simply intending to move - no wires or physical movement are required." 


The hidden cyber risks of deploying generative AI

Unfortunately, organizations aren’t thinking enough about security. The World Economic Forum (WEF) reports that 66% of organizations believe AI will significantly affect cybersecurity in the next 12 months, but only 37% have processes in place to assess AI security before deployment. Smaller businesses are even more exposed—69% lack safeguards for secure AI deployment, such as monitoring training data or inventorying AI assets. Accenture finds similar gaps: 77% of organizations lack foundational data and AI security practices, and only 20% express confidence in their ability to secure generative AI models. ... Both WEF and Accenture emphasize that the organizations best prepared for the AI era are those with integrated strategies and strong cybersecurity capabilities. Accenture’s research shows that only 10% of companies have reached what it calls the “Reinvention-Ready Zone,” which combines mature cyber strategies with integrated monitoring, detection and response capabilities. Firms in that category are 69% less likely to experience AI-powered cyberattacks than less prepared organizations. ... For enterprises, the path forward is about balancing ambition with caution. AI can boost efficiency, creativity and competitiveness, but only if deployed responsibly. Organizations should make AI security a board-level priority, establish clear governance frameworks, and ensure their cybersecurity teams are trained to address emerging AI-driven threats.


7 hard-earned lessons of bad IT manager hires

Hiring IT managers is difficult. You are looking for a unicorn-like set of skills: the technical acuity to understand projects and guide engineers, the people skills to do so without ruffling feathers, and a leadership mindset that can build a team and take it in the right direction. Hiring for any tech role can be fraught with peril — with IT managers it’s even more so. One recent study found that 87% of technology leaders are struggling to find talent that has the skills they need. And when they do find that rare breed, it’s often not as perfect as it first seemed. Deloitte’s 2025 Global Human Capital Trends survey found that, for two-thirds of managers and executives, recent hires did not have what was needed. Given this landscape, you’re bound to make mistakes. But you don’t have to make all of them yourself. You can learn from what others have experienced and go into this effort with hard-won experience — even if it isn’t your own. ... Managing that many people is crushing. “It’s hard to keep track of what they’re all working on or how to set them up for success,” Mishra says. “I saw signs of dysfunction. People felt directionless and were getting blocked. Some brilliant engineers were taking on manager tasks because I was in back-to-back meetings and firefighting all the time. Productivity lowered because my top performers were doing things not natural to them.”


When Your CEO’s Leadership Creates Chaos

By speaking her CEO’s language, she shifted from being perceived as obstructive to being seen as a trusted advisor. Leaders are far more receptive when ideas connect directly to their stated priorities. Test every message against your CEO’s core priorities: growth, clients, investors, or whatever drives them. Reinforce your case with external validation such as market data, board expectations, or customer benchmarks. ... Fast-moving CEOs often create organizational whiplash by revisiting decisions or overruling execution midstream. Ambiguity fuels frustration. The antidote is building explicit agreements, which reduces micromanagement while preserving momentum. ... To avoid overlap and blind spots, the group divided responsibilities into distinct categories: customer acquisition, customer retention, and operational efficiency. Together, they then presented a unified, comprehensive strategy to the CEO. This not only made their recommendations harder to dismiss but also replaced a sense of isolation with coordinated leadership. Informal dinners, side meetings, and peer check-ins strengthened the coalition and amplified their collective voice. ... At the offsite, Alex connected her weekly progress updates to a broader organizational direction-setting check-in: revisiting the vision, identifying big moves, reallocating resources, and choosing one operating principle to shift. This kept her updates both visible and tied to strategy.


From outdated IT to smart modern workplaces: how to do that?

Many organizations still run critical systems on-premises, while at the same time wanting to use cloud applications. As a result, traditional management with domains and Group Policy Objects (GPOs) is slowly disappearing. Microsoft Intune offers an alternative, but in practice, it is less streamlined. “What you used to manage centrally with GPOs now has to be set up in different places in Intune,” explains Van Wingerden. ... A hybrid model inevitably involves more complex budgeting. Costs for virtual machines, storage, or licenses only become apparent over time, which means financial surprises are lurking. Technical factors also play a role. Some applications perform better locally due to latency or regulations, while others benefit from cloud scalability. The result? ... The traditional closed workplace no longer suffices in this new landscape. Zero Trust is becoming the starting point, with dynamic verification per user and context. “We can say: based on the user’s context, we make things possible or impossible within that Windows workplace,” says Van Wingerden. Think of applications that run locally at the office but are available as remote apps when working from home. This creates a balance between ease of use and security. This context-sensitive approach is sorely needed. Cybercriminals are increasingly targeting endpoints and user accounts, where traditional perimeters fall short. 


Cisco Firewall Zero-Days Exploited in China-Linked ArcaneDoor Attacks

“Attackers were observed to have exploited multiple zero-day vulnerabilities and employed advanced evasion techniques such as disabling logging, intercepting CLI commands, and intentionally crashing devices to prevent diagnostic analysis,” Cisco explains. While it has yet to be confirmed by the wider cybersecurity community, there is some evidence suggesting that the hackers behind the ArcaneDoor campaign are based in China. ... Users are advised to update their devices as soon as possible, as the fixed release will automatically check the ROM and remove the attackers’ persistence mechanism. Users are also advised to rotate all passwords, certificates, and keys following the update. “In cases of suspected or confirmed compromise on any Cisco firewall device, all configuration elements of the device should be considered untrusted,” Cisco notes. The company also released a detection guide to help organizations hunt for potential compromise associated with the ArcaneDoor campaign. ... “An attacker could exploit this vulnerability by sending crafted HTTP requests to a targeted web service on an affected device after obtaining additional information about the system, overcoming exploit mitigations, or both. A successful exploit could allow the attacker to execute arbitrary code as root, which may lead to the complete compromise of the affected device,” the company notes.


5 ways you can maximize AI's big impact in software development

Tony Phillips, engineering lead for DevOps services at Lloyds Banking Group, said his firm is running a program called Platform 3.0, which aims to modernize infrastructure and lay the groundwork for adopting AI. He said the next step is to move beyond using AI to assist with coding and to boost all areas of the development process. "We are creating productivity boosts in our developer community, but we are now looking at how we take that forward across the rest of the pipeline for what we ship." ... He said the bank's initial explorations into AI suggest that learning from experiences is an important best practice. "There's always a balance, because you've got to let people get hold of the technology, put it in their context of what they're doing, and then understand what good looks like," he said. "Then you've got to build the capacity for what gets fed back so that you can respond quickly." ... Like others, Terry said governance is crucial. Give developers feedback when they take non-compliant actions -- and AI might help with this process. "We have a lot of different platforms and maybe haven't created a dotted line between all the platforms," he said. "AI might be the opportunity to do that and give developers the chance to do the right thing from the beginning." ... Terry also referred to the rise of vibe coding and suggested it shouldn't be used by people who have just begun coding in an enterprise setting.


Ethical cybersecurity practice reshapes enterprise security in 2025

The tension between innovation and risk management represents an important challenge for modern organisations. Push too hard for innovation without adequate safeguards and companies risk data breaches and compliance violations. Focus too heavily on risk mitigation, and organisations may find themselves unable to compete in evolving markets. ... The ethical AI component emphasises explainability. Rather than generating “black box” alerts, ManageEngine’s systems explain their reasoning. An alert might read: “The endpoint cannot log in at this time and is trying to connect to too many network devices.” ... The balance between necessary security monitoring and privacy invasion represents one of the most delicate aspects of ethical cybersecurity practices. Raymond acknowledges that while proactive monitoring is essential for detecting threats early, over-monitoring risks creating a surveillance environment that treats employees as suspects rather than trusted partners. ... For organisations seeking to integrate ethical considerations into their cybersecurity strategies, Raymond recommends three concrete steps: adopting a cybersecurity ethics charter at the board level, embedding privacy and ethics in technology decisions when selecting vendors, and operationalising ethics through comprehensive training and controls that explain not just what to do, but why it matters.


What is infrastructure as code? Automating your infrastructure builds

Infrastructure as code is a practice of writing plain-text declarative configuration files that automated tools use to manage and provision servers and other computing resources. In the pre-cloud days, sysadmins would often customize the configuration of individual on-premises server systems, but as more and more organizations moved to the cloud, those skills became less relevant and useful. ... and Puppet founder Luke Kanies started to use the terminology. In a world of distributed applications, hand-tuning servers was never going to scale, and scripting had its own limitations, so being able to automate infrastructure provisioning became a core need for many first movers back in the early days of cloud. Today, that underlying infrastructure is more commonly provisioned as code, thanks to popular early tools in this space such as Chef, Puppet, SaltStack, and Ansible. ... But the neat boundaries between tools and platforms have blurred, and many enterprises no longer rely on a single IaC solution, but instead juggle multiple tools across different teams or cloud providers. For example, Terraform or OpenTofu may provision baseline resources, while Ansible handles configuration management, and Kubernetes-native frameworks like Crossplane provide a higher layer of abstraction. This “multi-IaC” reality introduces new challenges in governance, dependency management, and avoiding configuration drift.
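
The declarative model these tools share can be sketched in a few lines: the tool diffs a desired state (the config file) against the actual state and plans the create, update, and delete actions that reconcile them. The resource names and attributes below are invented for illustration; real tools like Terraform do this through provider plugins and a persisted state file.

```python
# Toy sketch of the declarative reconciliation at the heart of IaC tools:
# compare desired state with actual state and compute a plan of actions.

def plan(desired: dict, actual: dict) -> dict:
    """Return the create/update/delete actions that make actual match desired."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(
            name for name in set(desired) & set(actual)
            if desired[name] != actual[name]
        ),
    }

# Hypothetical resources: "web" needs resizing, "db" is missing, "cache" is stray.
desired = {"web": {"size": "m5.large"}, "db": {"size": "r5.xlarge"}}
actual = {"web": {"size": "m5.small"}, "cache": {"size": "t3.micro"}}

print(plan(desired, actual))
```

The point of the model is idempotence: running the plan against an environment that already matches the config produces no actions, which is what makes configuration drift detectable in the first place.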


Software Upgrade Interruptions: The New Challenge for Resilience

The growing cost of upgrade outages derives from three interwoven sources. First, increased digitization means that applications entirely reliant on computational capacity are handling more of our daily activities. Second, as centrally managed cloud-based data storage and application hosting replace local storage and processing on phones, local servers, and computers, functions once susceptible to failures of a small number of locally managed steps are now subject to diverse links covering both the movement of data and operational processing. ... Third, the complexity of the software processing the data is also increasing, as more and more intricate and complicated systems interact to manage and control the relevant operations. ... From a supply chain risk management perspective, these three forces mean that risks to the resilience of operational delivery of all kinds—not just telecommunications services—have slowly and inexorably increased with the evolution of cloud computing. And arguably, these chains are at their most vulnerable when updates are made to software at any point along the chain. As there isn’t a test system mirroring the full scope of operations for these complex services to provide reassurance that nothing will go wrong, service outages from this source will inevitably both increase and impose their full costs in real time in the real world.

Daily Tech Digest - September 26, 2025


Quote for the day:

“You may be disappointed if you fail, but you are doomed if you don’t try.” -- Beverly Sills



Moving Beyond Compliance to True Resilience

Organisations that treat compliance as the finish line are missing the bigger picture. Compliance frameworks such as HIPAA, GDPR, and PCI-DSS provide critical guidelines, but they are not designed to cover the full spectrum of evolving cyber threats. Cybercriminals today use AI-driven reconnaissance, deepfake impersonations, and polymorphic phishing techniques to bypass traditional defences. Meanwhile, businesses face growing attack surfaces from hybrid work models and interconnected systems. A lack of leadership commitment, underfunded security programs, and inadequate employee training exacerbate the problem. ... Building resilience requires more than reactive policies; it calls for layered, proactive defence mechanisms such as threat intelligence, endpoint detection and response (EDR), and intrusion prevention systems (IPS). These front-line defences are essential in identifying and stopping threats before they can cause damage, ultimately reducing exposure and giving teams the visibility they need to act swiftly. ... True cyber resilience means moving beyond regulatory compliance to develop strategic capabilities that protect against, respond to, and recover from evolving threats. This includes implementing both offensive and defensive security layers, such as penetration testing and real-time intrusion prevention, to identify weaknesses before attackers do.


Architecture Debt vs Technical Debt: Why Companies Confuse Them and What It Costs Business

The contrast is clear: technical debt reflects inefficiencies at the system level — poorly structured code, outdated infrastructure, or quick fixes that pile up over time. Architecture debt emerges at the enterprise level — structural weaknesses across applications, data, and processes that manifest as duplication, fragmentation, and misalignment. One constrains IT efficiency; the other constrains business competitiveness. Recognizing this difference is the first step toward making the right strategic investments. ... The difference lies in visibility: technical debt is tangible for developers, showing up in unstable code, infrastructure issues, and delayed releases. Architecture debt, by contrast, hides in organizational complexity: duplicated platforms, fragmented data, and misaligned processes. When CIOs and business leaders hear the word “debt,” they often assume it refers to the same challenge. It does not. ... Recognizing this distinction is critical because it determines where investments should be made. Addressing technical debt improves efficiency within systems; addressing architecture debt strengthens the foundations of the enterprise. One enables smoother operations, while the other ensures long-term competitiveness and resilience. Leaders who fail to separate the two risk solving local problems while leaving unchallenged the structural weaknesses that undermine the organization’s future.


Data Fitness in the Age of Emerging Privacy Regulations

Enter the concept of Data Fitness: a multidimensional measure of how well data aligns with privacy principles, business objectives, and operational resilience. Much like physical fitness, data fitness is not a one-time achievement but a continuous discipline. Data fitness is not just about having high-quality data, but also about ensuring that data is managed in a way that is compliant, secure, and aligned with business objectives. ... The emerging privacy regulations have also introduced a new layer of complexity to data management. They shift the focus from simply collecting and monetizing data to a more responsible and transparent approach, which calls for a sweeping review and redesign of all applications and processes that handle data. ... The days of storing customer data forever are over. New regulations often specify that personal data can only be retained for as long as it's needed for the purpose for which it was collected. This requires companies to implement robust data lifecycle management and automated deletion policies. ... Data privacy isn't just an IT or legal issue; it's a shared responsibility. Organizations must educate and train all employees on the importance of data protection and the specific policies they need to follow. A strong privacy culture can be a competitive advantage, building customer trust and loyalty. ... It's no longer just about leveraging data for profit; it's about being a responsible steward of personal information.
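
The retention requirement described above lends itself to automation: tag every record with the purpose it was collected for, attach a retention window to each purpose, and sweep periodically. A minimal sketch, with invented purposes, windows, and record fields:

```python
# Toy retention sweep: flag records whose purpose-specific retention
# window has elapsed. Purposes, windows, and field names are hypothetical;
# a real deletion pipeline would also handle legal holds and audit logging.
from datetime import date, timedelta

RETENTION = {
    "billing": timedelta(days=365 * 7),   # e.g. tax-driven retention
    "marketing": timedelta(days=180),     # short window, consent-based
}

def expired(records: list[dict], today: date) -> list[int]:
    """IDs of records held longer than their purpose allows."""
    return [r["id"] for r in records
            if today - r["collected"] > RETENTION[r["purpose"]]]

records = [
    {"id": 1, "purpose": "marketing", "collected": date(2025, 1, 1)},
    {"id": 2, "purpose": "billing", "collected": date(2024, 1, 1)},
]
print(expired(records, date(2025, 9, 26)))  # only the stale marketing record
```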


Independent Management of Cloud Secrets

An independent approach to NHI management can empower DevOps teams by automating the lifecycle of secrets and identities, thus ensuring that security doesn’t compromise speed or agility. By embedding secrets management into the development pipeline, teams can preemptively address potential overlaps and misconfigurations, as highlighted in the resource on common secrets security misconfigurations. Moreover, NHIs’ automation capabilities can assist DevOps enterprises in meeting regulatory audit requirements without derailing their agile processes. This harmonious blend of compliance and agility allows for a framework that effectively bridges the gap between speed and security. ... Automation of NHI lifecycle processes not only saves time but also fortifies systems by means of stringent access control. This is critical in large-scale cloud deployments, where automated renewal and revocation of secrets ensure uninterrupted and secure operations. More insightful strategies can be explored in Secrets Security Management During Development. ... While the integration of systems provides comprehensive security benefits, there is an inherent risk in over-relying on interconnected solutions. Enterprises need a balanced approach that allows for collaboration between systems without compromising individual segment vulnerabilities. A delicate balance is found by maintaining independent secrets management systems, which operate cohesively but remain distinct from operational systems.
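
The renewal-and-revocation loop can be reduced to a small piece of logic: every secret carries an issue time and a TTL, and the rotation job renews anything approaching expiry. This toy sketch only models that logic; real platforms perform the rotation against live credential stores, and the secret names, TTL, and renewal window here are invented.

```python
# Toy secret-rotation loop: rotate any secret within RENEW_BEFORE seconds
# of its TTL, so nothing ever expires in service.
import secrets as pysecrets
import time

TTL = 3600          # secret lifetime in seconds
RENEW_BEFORE = 600  # rotate when this close to expiry

def rotate_due(store: dict, now: float) -> list[str]:
    """Rotate near-expiry secrets in place; return the names rotated."""
    rotated = []
    for name, meta in store.items():
        if now - meta["issued"] >= TTL - RENEW_BEFORE:
            meta["value"] = pysecrets.token_urlsafe(32)  # new random credential
            meta["issued"] = now
            rotated.append(name)
    return rotated

now = time.time()
store = {
    "db-password": {"value": "old", "issued": now - 3500},  # near expiry
    "api-key": {"value": "fresh", "issued": now - 100},     # recently issued
}
rotated = rotate_due(store, now)
print(rotated)  # only the near-expiry secret is rotated
```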


Why cloud repatriation is back on the CIO agenda

Cost pressure often stems from workload shape. Steady, always-on services do not benefit from pay-as-you-go pricing. Rightsizing, reservations and architecture optimization will often close the gap, yet some services still carry a higher unit cost when they remain in public cloud. A placement change then becomes a sensible option. Three observations support a measurement-first approach: many organizations report that managing cloud spend is their top challenge; egress fees and associated patterns affect a growing share of firms; and the FinOps community places unit economics and allocation at the centre of cost accountability. ... Public cloud remains viable for many regulated workloads, assisted by sovereign configurations. Examples include the AWS European Sovereign Cloud (scheduled to be released at the end of 2025), the Microsoft EU Data Boundary and Google’s sovereign controls and partner offerings. These options have scope limits that should be assessed during design. ... Repatriation tends to underperform where workloads are inherently elastic or seasonal, where high-value managed services would need to be replicated at significant opportunity cost, where the organization lacks the run maturity for private platforms, or where the cost issues relate primarily to tagging, idle resources or discount coverage that a FinOps reset can address.
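
The unit-economics comparison behind a placement decision is simple arithmetic; the hard part is gathering honest inputs. Every number below is invented for illustration, and a real comparison would also include migration cost, egress during migration, and run-team salaries on both sides.

```python
# Illustrative unit-cost comparison for a steady, always-on workload:
# dollars per million requests in public cloud vs. a self-hosted estimate.

def unit_cost(monthly_cost: float, monthly_units: float) -> float:
    """Cost per unit of work, e.g. dollars per million requests."""
    return monthly_cost / monthly_units

requests_millions = 500                                   # hypothetical monthly volume
cloud = unit_cost(42_000, requests_millions)              # compute + egress (invented)
on_prem = unit_cost(18_000 + 9_000, requests_millions)    # amortized hardware + ops share

print(f"cloud ${cloud:.2f}/M req, on-prem ${on_prem:.2f}/M req")
```

Tracking this figure per service over time, rather than raw monthly spend, is what lets FinOps teams separate genuine placement problems from tagging, idle-resource, or discount-coverage problems.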


Colocation meets regulation

While there have been many instances of behind-the-meter agreements in the data center sector, the AWS-Talen agreement differed in both scale and choice of energy. Unlike previous instances, often utilizing onsite renewables, the AWS deal involved a regional key generation asset, which provides consistent and reliable power to the grid. As a result, to secure the go-ahead, PJM Interconnection, the regional transmission operator in charge of the utility services in the state, had to apply for an amendment to the plant's existing Interconnection Service Agreement (ISA), permitting the increased power supply. However, rather than the swift approval the companies hoped for, two major utilities that operate in the region, Exelon and American Electric Power (AEP), vehemently opposed the amended ISA, submitting a formal objection to its provisions. ... Since the rejection by FERC, Talen and AWS have reimagined the agreement, moving from a behind-the-meter to an in-front-of-the-meter arrangement. The 17-year PPA will see Talen supply AWS with 1.92GW of power, ramped up over the next seven years, with the power provided through PJM. This reflects a broader move within the sector, with both Talen and nuclear energy generator Constellation indicating their intention to focus on grid-based arrangements going forward. Despite this, Phillips still believes that under the correct circumstances, colocation can be a powerful tool, especially for AI and hyperscale cloud deployments seeking to scale quickly.


Employees learn nothing from phishing security training, and this is why

Phishing training programs are a popular tactic aimed at reducing the risk of a successful phishing attack. They may be performed annually or over time, and typically, employees will be asked to watch and learn from instructional materials. They may also receive fake phishing emails sent by a training partner over time, and if they click on suspicious links within them, these failures to spot a phishing email are recorded. ... "Taken together, our results suggest that anti-phishing training programs, in their current and commonly deployed forms, are unlikely to offer significant practical value in reducing phishing risks," the researchers said. According to the researchers, a lack of engagement in modern cybersecurity training programs is to blame, with engagement rates often recorded as less than a minute or none at all. When there is no engagement with learning materials, it's unsurprising that there is no impact. ... To combat this problem, the team suggests that, for a better return on investment in phishing protection, a pivot to more technical help could work. For example, imposing two-factor or multi-factor authentication (2FA/MFA) on endpoint devices, and restricting credential use and sharing to trusted domains only. That's not to say that phishing programs don't have a place in the corporate world. We should also go back to the basics of engaging learners.
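
One widely deployed building block for the MFA recommendation above is TOTP (RFC 6238), where a one-time code is derived from a shared secret and the current time, so a phished password alone is not enough to log in. A minimal Python implementation using only the standard library, checked against the RFC's published test vector:

```python
# Minimal TOTP (RFC 6238) built on HOTP (RFC 4226): HMAC-SHA1 over a
# big-endian counter, dynamic truncation, then modulo 10^digits.
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    return hotp(key, unix_time // step, digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

Production deployments would add clock-skew windows, rate limiting, and secure secret storage, but the code above is the complete core of the algorithm.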


SOC teams face 51-second breach reality—Manual response times are officially dead

When it takes just 51 seconds for attackers to breach and move laterally, SOC teams need more help. ... Most SOC teams first aim to extend ROI from existing operations investments. Gartner's 2025 Hype Cycle for Security Operations notes that organizations want more value from current tools while enhancing them with AI to handle an expansive threat landscape. William Blair & Company's Sept. 18 note on CrowdStrike predicts that "agentic AI potentially represents a 100x opportunity in terms of the number of assets to secure," with TAM projected to grow from $140 billion this year to $300 billion by 2030. ... Kurtz's observation reflects concerns among SOC leaders and CISOs across industries. VentureBeat sees enterprises experimenting with differentiated architectures to solve governance challenges. Shlomo Kramer, co-founder and CEO of Cato Networks, offered a complementary view in a VentureBeat interview: "Cato uses AI extensively… But AI alone can't solve the range of problems facing IT teams. The right architecture is important both for gathering the data needed to drive AI engines, but also to tackle challenges like agility, connecting enterprise edges, and user experience." Kramer added, "Good AI starts with good data. Cato logs petabytes weekly, capturing metadata from every transaction across the SASE Cloud Platform. We enrich that data lake with hundreds of threat feeds, enabling threat hunting, anomaly detection, and network degradation detection."


Timeless inclusive design techniques for a world of agentic AI

Progressive enhancement and inclusive design allow us to design for as many users as possible. They are core components of user-centered design. The word "user" often hides the complex magnificence of the human being using your product, in all their beautiful diversity. And it’s this rich diversity that makes inclusive design so important. We are all different, and use things differently. While you enjoy that sense of marvel at the richness and wonder of your users' lives, there is no need to feel it for AI agents. These agents are essentially just super-charged "stochastic parrots" (to borrow a phrase from esteemed AI ethicist and professor of Computational Linguistics Emily M. Bender) guessing the next token. ... Every breakthrough since we learnt to make fire has been built on what came before. Isaac Newton said he could only see so far because he was "standing on the shoulders of giants". The techniques and approaches needed to enable this new wave of agent-powered AI devices have been around for a long time. But they haven't always been used. In our desire to ship the shiniest features, we often forget to make our products work for people who rely on accessibility features. ... Patterns are things like adding a "skip to content link" and implementing form validation in a way that makes it easier to recover from errors. Alongside patterns, there are a wealth of freely available accessibility testing tools that can tell you if your product is meeting necessary standards.


Stronger Resilience Starts with Better Dependency Mapping

As recent disruptions made painfully clear, you cannot manage what you cannot see. When a single upstream failure ripples through eligibility checks, billing, scheduling, or clinical systems, executives need answers in minutes, not months. Who is impacted? What services are degraded? Which applications are truly critical? What are our fourth-party exposures? In too many organizations, those answers require a scavenger hunt. ... Modern operations rely on external platforms for authorizations, payments, data enrichment, analytics, and communications, yet many organizations stop their mapping at the data center boundary. That blind spot creates serious risk, since a single vendor outage can ripple across multiple critical services. Regulators are responding. In the U.S., the OCC, Federal Reserve, and FDIC’s 2023 Interagency Guidance on Third-Party Risk Management requires banks to identify and monitor critical vendor relationships, including subcontractors and concentration risks. ... Dependency data without impact data is trivia. Mapping is only valuable when assets and services are tied to business impact analysis (BIA) outputs like recovery time objectives and maximum tolerable downtime. Without this, leaders face a flat picture of connections but no way to prioritize what to restore first, or how long they can operate without a service before consequences cascade.
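
Answering "who is impacted?" in minutes is, mechanically, a graph problem: record which services depend on which vendors and systems, invert the edges, and walk outward from the failed node; pairing each hit with its BIA recovery time objective yields a restoration order. The service names, vendors, and RTOs below are invented for illustration.

```python
# Toy dependency map: edges point from a service to what it depends on.
# Inverting the edges and doing a BFS from a failed vendor finds every
# transitively impacted service; sorting by RTO gives restore priority.
from collections import deque

DEPENDS_ON = {
    "billing": ["payments-vendor", "auth"],
    "scheduling": ["auth"],
    "auth": ["identity-vendor"],
}
RTO_HOURS = {"billing": 4, "scheduling": 8, "auth": 1}  # from the BIA

def impacted_by(failed: str) -> list[str]:
    """All services transitively dependent on `failed`, ordered by RTO."""
    dependents: dict[str, list[str]] = {}
    for svc, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(svc)
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for svc in dependents.get(node, []):
            if svc not in seen:
                seen.add(svc)
                queue.append(svc)
    return sorted(seen, key=lambda s: RTO_HOURS[s])

print(impacted_by("identity-vendor"))  # fourth-party outage ripples to three services
```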

Daily Tech Digest - September 24, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


Managing Technical Debt the Right Way

Here’s the uncomfortable truth: most executives don’t care about technical purity, but they do care about value leakage. If your team can’t deliver new features fast enough, if outages are too frequent, if security holes are piling up, that is financial debt—just wearing a hoodie instead of a suit. The BTABoK approach is to make debt visible in the same way accountants handle real liabilities. Use canvases, views, and roadmaps to connect the hidden cost of debt to business outcomes. Translate debt into velocity lost, time to market, and risk exposure. Then prioritize it just like any other investment. ... If your architects can’t tie debt decisions to value, risk, and strategy, then they’re not yet professionals. Training and certification are not about passing an exam. They are about proving you can handle debt like a surgeon handles risk—deliberately, transparently, and with the trust of society. ... Let’s not sugarcoat it: some executives will always see debt as “nerd whining.” But when you put it into the lifecycle, into the transformation plan, and onto the balance sheet, it becomes a business issue. This is the same lesson learned in finance: debt can be a powerful tool if managed, or a silent killer if ignored. BTABoK doesn’t give you magic bullets. It gives you a discipline and a language to make debt a first-class concern in architectural practice. The rest is courage—the courage to say no to shortcuts that aren’t really shortcuts, to show leadership the cost of delay, and to treat architectural decisions with the seriousness they deserve.


How National AI Clouds Undermine Democracy

The rapid spread of sovereign AI clouds unintentionally creates a new form of unchecked power. It combines state authority with corporate technology in unclear public-private partnerships. This combination centralizes surveillance and decision-making power, extending far beyond effective democratic oversight. The pursuit of national sovereignty undermines the civic sovereignty of individuals. ... The unique and overlooked danger is the rise of a permanent, unelected techno-bureaucracy. Unlike traditional government agencies, these hybrid entities are shielded from democratic pressures. Their technical complexity acts as a barrier against public understanding and journalistic inquiry. ... no sovereign cloud should operate without a corresponding legislative data charter. This charter, passed by the national legislature, must clearly define citizens' rights against algorithmic discrimination, set explicit limits on data use, and create transparent processes for individuals harmed by the system. It should recognize data portability as an essential right, not just a technical feature. ... every sovereign AI initiative should be mandated to serve the public good. These systems must legally demonstrate that they fulfill publicly defined goals, with their performance measured and reported openly. This directs the significant power of AI toward applications that benefit the public, such as enhancing healthcare outcomes or building climate resilience.


IT’s renaissance risks losing steam

IT-enabled value creation will etiolate without the sustained light of stakeholder attention. CIOs need to manage IT signals, symbols, and suppositions with an eye toward recapturing stakeholder headspace. Every IT employee needs to get busy defanging the devouring demons of apathy and ignorance surrounding IT operations today. ... We need to move beyond our “hero on horseback” obsession with single actors. Instead we need to return our efforts forcefully to l’histoire des mentalités — the study of the mental universe of ordinary people. How is l’homme moyen sensuel (the man on the street) dealing with the technological choices arrayed before him? ... The IT pundits’ much discussed promise of “technology transformation” will never materialize if appropriate exothermic — i.e., behavior-inducing and energy creating — IT ideas have no mass following among those working at the screens around the world. ... As CIO, have you articulated a clear vision of what you want IT to achieve during your tenure? Have you calmed the anger of unmet expectations, repaired the wounds of system outages, alleviated the doubts about career paths, charted a filled-with-benefits road forward and embodied the hopes of all stakeholders? ... The cognitive elephant in the room that no one appears willing to talk about is the widespread technological illiteracy of the world’s population.


How One Bad Password Ended a 158-Year-Old Business

KNP's story illustrates a weakness that continues to plague organizations across the globe. Research from Kaspersky analyzing 193 million compromised passwords found that 45% could be cracked by hackers within a minute. And when attackers can simply guess or quickly crack credentials, even the most established businesses become vulnerable. Individual security lapses can have organization-wide consequences that extend far beyond the person who chose "Password123" or left their birthday as their login credential. ... KNP's collapse demonstrates that ransomware attacks create consequences far beyond an immediate financial loss. Seven hundred families lost their primary income source. A company with nearly two centuries of history disappeared overnight. And Northamptonshire's economy lost a significant employer and service provider. For companies that survive ransomware attacks, reputational damage often compounds the initial blow. Organizations face ongoing scrutiny from customers, partners, and regulators who question their security practices. Stakeholders seek accountability for data breaches and operational failures, leading to legal liabilities. ... KNP joins an estimated 19,000 UK businesses that suffered ransomware attacks last year, according to government surveys. High-profile victims have included major retailers like M&S, Co-op, and Harrods, demonstrating that no organization is too large or established to be targeted.
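Kaspersky's one-minute figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes an attacker performing an offline attack against a fast hash at ten billion guesses per second — an illustrative throughput, not a benchmark of any particular rig:

```python
# Back-of-envelope estimate of worst-case offline brute-force time.
# GUESSES_PER_SECOND is an assumed attacker throughput for illustration.

GUESSES_PER_SECOND = 10_000_000_000

def crack_time_seconds(charset_size: int, length: int) -> float:
    """Time to exhaust every password drawn from this charset and length."""
    keyspace = charset_size ** length
    return keyspace / GUESSES_PER_SECOND

# An 8-character, lowercase-only password (26-symbol alphabet)
# falls well inside the "under a minute" bracket:
print(f"{crack_time_seconds(26, 8):.1f} s")  # about 21 seconds

# Twelve characters over the 94 printable ASCII symbols pushes the
# same attack into the millions of years:
years = crack_time_seconds(94, 12) / (3600 * 24 * 365)
print(f"{years:.2e} years")
```

The point of the arithmetic is that keyspace grows exponentially with length and alphabet size, which is why a single weak credential — not a sophisticated exploit — was enough to open the door at KNP.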


Has the UK’s Cyber Essentials scheme failed?

There are several reasons why larger organisations may steer clear of CE in its current form, explains Kearns. “They typically operate complex, often geographically dispersed networks, where basic technical controls driven by CE do not satisfy organisational appetite to drive down risk and improve resilience,” she says. “The CE control set is also ‘absolute’ and does not allow for the use of compensating controls. Large complex environments, on the other hand, often operate legacy systems that require compensating controls to reduce risk, which prevents compliance with CE.” The point-in-time nature of assessment is also a poor fit for today’s dynamic IT infrastructure and threat environments, argues Pierre Noel, field CISO EMEA at security vendor Expel. ... “For large enterprises with complex IT environments, CE may not be comprehensive enough to address their specific security needs,” says Andy Kays, CEO of MSSP Socura. “Despite these limitations, it still serves a valuable purpose as a baseline, especially for supply chain assurance where larger companies want to ensure their smaller partners have a minimum level of security.” Richard Starnes is an experienced CISO and chair of the WCIT security panel. He agrees that large enterprises should require CE+ certification in their supplier contracts, where it makes sense. “This requirement should also include a contract flow-down to ensure that their suppliers’ downstream partners are also certified,” says Starnes.


Is Your Data Generating Value or Collecting Digital Dust?

Economic uncertainty is prompting many companies to think about how to do more with less. But what if they’re actually positioned to do more with more and just don’t realize it? Many organizations already have the resources they need to improve efficiency and resilience in challenging times. Close to two-thirds of organizations manage 1 petabyte or more of data, which represents enough data to cover 500 billion standard pages of text. More than 40% of companies store even more data. Much of that data sits unanalyzed while it incurs costs related to collection, compliance, and storage. It also poses data breach risks that require expensive security measures to prevent. ... Engaging with too many apps often makes employees less efficient than they could be. In 2024, companies used an average of 21 apps just for HR tasks. Multiply that across different functions, and it’s easy to see how finding ways to reduce the total could bring down costs. Trimming the number of apps can also increase productivity by reducing employee overwhelm. Constantly switching between different apps and systems has been shown to distract employees while increasing their levels of stress and frustration. Across the organization, switching among tasks and apps consumes 9% of the average employee’s time at work by chipping away at their attention and ability to focus, a few seconds at a time, with each of the hundreds of task switches they perform every day.
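The 9% figure is plausible under simple assumptions. Taking "hundreds of switches" as roughly 300 per day and "a few seconds" of lost focus per switch, the arithmetic lands right around 9% of a standard workday (both inputs below are assumptions for illustration, not figures from the research):

```python
# Sanity check of the "9% of the workday" claim under assumed inputs.
switches_per_day = 300      # assumption: "hundreds" of task/app switches
seconds_per_switch = 8.6    # assumption: a few seconds of refocusing each
workday_seconds = 8 * 3600  # standard 8-hour day

lost_seconds = switches_per_day * seconds_per_switch
share = lost_seconds / workday_seconds
print(f"{lost_seconds / 60:.0f} minutes lost, {share:.0%} of the day")
```

Roughly 43 minutes a day evaporates in slivers too small to notice individually, which is why the cost of app sprawl rarely shows up until someone does the multiplication.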


The history and future of software development

For any significant piece of software back then, you needed stacks of punch cards. Yes, 1000 lines of code needed 1000 cards. And you needed to have them in order. Now, imagine dropping that stack of 1000 cards! It would take me ages to get them back in order. Devs back then experienced this a lot—so some of them came up with creative ways of indicating the order of these cards. ... By the mid-1970s, affordable home computers were starting to become a reality. Instead of a computer just being a work thing, hobbyists started using computers for personal things—maybe we can call these, I don't know...personal computers. ... Assembler and assembly tend to be used interchangeably, but they are in reality two different things. Assembly is the actual language—the syntax and instructions being used—and is tightly coupled to the architecture. The assembler is the piece of software that assembles your assembly code into machine code—the thing your computer knows how to execute. ... What about writing the software? Did they use git back then? No, git only came out in 2005, so back then software version control was quite the manual effort. Developers had their own ways of managing source code locally, and some teams even kept wall charts where developers could "claim" ownership of certain source code files. For those able to work on a shared (multi-user) system, or with an early version of some networked storage, source code sharing was as easy as handing out floppy disks.
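The assembly-vs-assembler distinction becomes concrete with a toy example. The sketch below is an "assembler" for a hypothetical two-instruction machine — the mnemonics and opcode bytes are invented for illustration, not any real architecture:

```python
# Toy assembler for an invented two-instruction machine.
# The text fed in is "assembly"; this function is the "assembler";
# the bytes it emits are the "machine code".

OPCODES = {"LOAD": 0x01, "ADD": 0x02}  # mnemonic -> invented opcode byte

def assemble(source: str) -> bytes:
    """Translate assembly text into machine-code bytes."""
    code = bytearray()
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        code.append(OPCODES[mnemonic])  # one instruction byte
        code.append(int(operand))       # one immediate-operand byte
    return bytes(code)

program = """
LOAD 7
ADD 35
"""
print(assemble(program).hex())  # -> 01070223
```

Real assemblers also handle labels, relocations, and addressing modes, but the core job is exactly this mechanical translation — which is why the assembler is tool and assembly is language.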


Why the operating system is no longer just plumbing

Many enterprises still think of the operating system as a “static” or background layer that doesn’t need active evolution. The reality is that modern operating systems like Red Hat Enterprise Linux (RHEL) are dynamic, intelligent platforms that actively enable and optimize everything running on top of them. Whether you're training AI models, deploying cloud-native applications, or managing edge devices, the OS is making thousands of critical decisions every second about resource allocation, security enforcement, and performance optimization. ... With image mode deployments, zero-downtime updates, and optimized container support, RHEL ensures that even resource-constrained environments can maintain enterprise-grade reliability. We’ve also focused heavily on security—confidential computing, quantum-resistant cryptography, and compliance automation—because edge environments are often exposed to greater risk. These choices allow RHEL to deliver resilience in conditions where compute power, space, and connectivity are limited. ... We don't just take community code and ship it — we validate, harden, and test everything extensively. Red Hat bridges this gap by being an active contributor upstream while serving as an enterprise-grade curator downstream. Our ecosystem partnerships ensure that when new technologies emerge, they work reliably with RHEL from day one.


Ransomware now targeting backups, warns Google’s APAC security chief

Backups often contain sensitive data such as personal information, intellectual property, and financial records. Pereira warned that attackers can use this data as extra leverage or sell it on the dark web. The shift in focus to backup systems underscores how ransomware has become less about disruption and more about business pressure. If an organisation cannot restore its systems independently, it has little choice but to consider paying a ransom. ... Another troubling trend is “cloud-native extortion,” where attackers abuse built-in cloud features, such as encryption or storage snapshots, to hold systems hostage. Pereira explained that many organisations in the region are adapting by shifting to identity-focused security models. “Cloud environments have become the new perimeter, and attackers have been weaponising cloud-native tools,” he said. “We now need to enforce strict cloud security hygiene, such as robust MFA, least-privilege access, proactive monitoring of role access changes or credential leaks, using automation to detect and remediate misconfigurations, and anomaly detection tools for cloud activities.” He pointed to rising investments in identity and access management tools, with organisations recognising their role in cutting down the risk of identity-based attacks. For APAC businesses, this means moving away from legacy perimeter defences and embracing cloud-native safeguards that assume breaches are inevitable but limit the damage.
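The "automation to detect and remediate misconfigurations" idea can be sketched in a few lines. The example below scans role bindings for wildcard grants and missing MFA enforcement; the dict structure is hypothetical (no real cloud SDK is used), but the shape — enumerate identities, flag policy violations — is what identity-focused tooling automates at scale:

```python
# Minimal misconfiguration audit over hypothetical role bindings.
# Field names (identity, permissions, mfa_enforced) are illustrative,
# not any vendor's actual API schema.

def audit(bindings: list[dict]) -> list[str]:
    """Return human-readable findings for risky bindings."""
    findings = []
    for b in bindings:
        if "*" in b.get("permissions", []):
            findings.append(f"{b['identity']}: wildcard permission grant")
        if not b.get("mfa_enforced", False):
            findings.append(f"{b['identity']}: MFA not enforced")
    return findings

bindings = [
    {"identity": "ci-bot", "permissions": ["storage.read"], "mfa_enforced": True},
    {"identity": "admin@corp", "permissions": ["*"], "mfa_enforced": False},
]
for finding in audit(bindings):
    print(finding)
```

In practice this kind of check runs continuously against live IAM state and feeds remediation workflows, rather than a static list — but the least-privilege and MFA checks Pereira describes reduce to rules of exactly this form.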


AI Won't Replace Developers, It Will Make the Best Ones Indispensable

The replacement theory assumes AI can work independently, but it can't. Today's AI coding tools don't run themselves; they need active steering. Most AI tools today operate on a "prompt and pray" model: give the AI instructions, get code back, hope it works. That's fine for demos or side projects, but production environments are far less forgiving. ... AI doesn't level the playing field between developers; it widens the gap. Using AI effectively requires the same skills that make great developers great: understanding system architecture, recognizing security implications, and writing maintainable code. ... Tomorrow's junior developers will need to get productive in a different way. Instead of spending months learning basic syntax and patterns, they'll start by learning to collaborate with AI agents effectively. Those who can adapt will find opportunities, and those who can't might struggle to break in. This shift actually creates more demand for senior engineers, because someone needs to train these AI-assisted junior developers, architect systems that can handle AI-generated code at scale, and establish the processes and standards that keep AI tools from creating chaos. ... The teams succeeding with AI coding treat agents like exceptionally capable junior teammates who need oversight. They provide detailed context, review generated code, and test thoroughly before deployment rather than optimizing purely for speed.