Daily Tech Digest - October 16, 2025


Quote for the day:

"Don't wait for the perfect moment; take the moment and make it perfect." -- Aryn Kyle



Major network vendors team to advance Ethernet for scale-up AI networking

“AI workloads are re-shaping modern data center architectures, and networking solutions must evolve to meet the growing demands,” wrote Martin Lund, executive vice president of Cisco’s common hardware group, in a blog post about the news. “ESUN brings together AI infrastructure operators and vendors to align on open standards, incorporate best practices, and accelerate innovation in Ethernet solutions for scale-up networking.” ESUN will focus solely on open, standards-based Ethernet switching and framing for scale-up networking—excluding host-side stacks, non-Ethernet protocols, application-layer solutions, and proprietary technologies. The group will expand the development and interoperability of XPU network interfaces and Ethernet switch ASICs for scale-up networks, the OCP stated in a blog: “The initial focus will be on L2/L3 Ethernet framing and switching, enabling robust, lossless, and error-resilient single-hop and multi-hop topologies.” ... “‘Scale-up’ AI fabrics (SAIF) provide high-bandwidth, low-latency physical network interconnectivity and enhanced memory interaction between nearby AI processors,” Gartner wrote. “Current implementations of SAIF are vendor-proprietary platforms, and there are proximity limitations (typically, SAIF is confined to only a rack or row). In most scenarios, Gartner recommends using Ethernet when connecting multiple SAIF systems together. We believe the scale, performance and supportability of Ethernet is optimal.”


Moving Beyond Awareness: How Threat Hunting Builds Readiness

The best defense begins before the first alert. Proactive threat hunting identifies the conditions that allow an attack to form and addresses them early. It moves security from passive observation to a clear understanding of where exposure originates. This move from observation to proactive understanding forms the core of a modern security program: Continuous Threat Exposure Management (CTEM). Instead of a one-time project, a CTEM program provides a structured, repeatable framework to continuously model threats, validate controls, and secure the business. For organizations ready to build this capability, A Practical Guide to Getting Started With CTEM offers a clear roadmap. ... Security Awareness Month reminds us that awareness is an essential step. Yet real progress begins when awareness leads to action. Awareness is only as powerful as the systems that measure and validate it. Proactive threat hunting turns awareness into readiness by keeping attention fixed on what matters most - the weak points that form the basis for tomorrow's attacks. Awareness teaches people to see risk. Threat hunting proves whether the risk still exists. Together they form a continuous cycle that keeps security viable long after awareness campaigns end. This October, the question for every organization is not how many employees completed the training, but how confident you are that your defenses would hold today if someone tested them. Awareness builds understanding. Readiness delivers protection.


Beyond the checklist: Building adaptive GRC frameworks for agentic AI

We must move GRC governance from a periodic, human-driven activity to an adaptive, continuous and context-aware operational capability embedded directly within the agentic AI platform. The first critical step involves implementing real-time governance and telemetry. This means we stop relying solely on endpoint logs that only tell us what the agent did and instead focus on integrating monitoring into the agent’s operating environment to capture why and how. ... The RCV is a structured, cryptographic record of the factors that drove the agent’s choice. It includes not just the data inputs, but also the specific model parameters, the weighted objectives used at that moment, the counterfactuals considered and, crucially, the specific GRC constraints the agent accessed and applied during its deliberation. ... Finally, we must address the “big red button” problem inherent in human-in-the-loop override. For agentic AI, this button cannot be a simple off switch, which would halt critical operations and cause massive disruption. The override must be non-obstructive and highly contextual, as detailed in OECD Principles on AI: Accountability and human oversight. ... We are entering an era where our systems will act on our behalf with little or no human intervention. My priority — and yours — must be to ensure that the autonomy of the AI does not translate into an absence of accountability.
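The article leaves the RCV's format open. Purely as an illustration of what a "structured, cryptographic record" of an agent's deliberation could look like, here is a toy hash-chained record in Python; every field name here is invented for the sketch, not taken from the article or any standard:

```python
import hashlib
import json

def seal_decision_record(record: dict, prev_hash: str) -> dict:
    """Toy tamper-evident 'reasoning record' for one agent decision.

    Each record is chained to the previous one by hash, so editing an
    earlier record after the fact breaks the chain. Field names are
    illustrative; the RCV concept does not specify a format.
    """
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"record": record, "prev_hash": prev_hash, "hash": digest}

rec1 = seal_decision_record(
    {"inputs": ["policy-7"],
     "objective_weights": {"cost": 0.4, "risk": 0.6},
     "constraints_applied": ["GRC-12"]},
    prev_hash="genesis")
rec2 = seal_decision_record(
    {"inputs": ["policy-9"], "constraints_applied": []},
    prev_hash=rec1["hash"])

# Tampering with rec1 after the fact no longer matches rec2's chained hash.
tampered = dict(rec1["record"], constraints_applied=["GRC-99"])
body = json.dumps(tampered, sort_keys=True)
print(hashlib.sha256(("genesis" + body).encode()).hexdigest() == rec2["prev_hash"])  # False
```

The point of the sketch is only that "the specific GRC constraints the agent accessed and applied" can be recorded in a way that later auditors can verify was not rewritten.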


Beyond Productivity: AI’s Role in Creating Hyper-Personalized and Inclusive Employee Experiences

Generative AI enhances employee experiences by analyzing unstructured information, understanding natural language and interpreting intent. Agentic AI takes this further by acting as a centralized, intelligent interface – integrating data sources, maintaining contextual awareness, adapting to individual goals and autonomously executing tasks – minimizing the need for employees to navigate multiple systems or support channels. From onboarding to learning, wellness, feedback, and career progression, it provides a seamless connected experience. Furthermore, AI systems can continuously learn from an employee’s behavior, preferences, and goals to provide real-time, tailored experiences. ... As powerful as AI is, its success in employee experience hinges on how well it aligns with human-centric values. Personalization must never feel intrusive, and inclusivity efforts must be grounded in empathy, transparency, and consent. Enterprises must adopt a responsible AI approach – ensuring fairness, explainability, and ethical data use. Employees should have clarity on how AI systems work, how data is used, and how decisions are made. Moreover, they should always have the option to challenge or override AI-driven outcomes. Leadership, HR, and IT teams must work together to create governance frameworks that reinforce trust – because even the most advanced AI fails if employees don’t feel seen, respected, and safe.


5 ideas to help bridge the genAI skills gap

Instead of focusing narrowly on technical skills, UST has shifted its training toward cultivating adaptable mindsets. “We want to develop curiosity, critical thinking, and creativity — skills that aren’t easily replaced by AI,” said Prasad, stressing that traditional classroom-style learning is insufficient when the competitive environment demands experimentation and rapid application. Employees are given access to a range of AI tools such as GitHub Copilot, Google Gemini, and Cursor, and encouraged to experiment safely in R&D environments. ... Rather than pulling people out of their daily jobs for separate training sessions, the company embeds training directly into daily workflows, at the points where people are most likely to need the learning material. Digital adoption platforms like Whatfix provide in-system nudges and tips directly in the tools recruiters use, guiding them in real time. Recruiting system training is integrated within the application. Users don’t know they’re interacting with a digital coach that trains them to make effective use of the system and its AI features, such as candidate sourcing, resume analysis, and client outreach. According to Busch, the payoff is measurable: “How-to” support questions have been reduced 95% since implementing workflow learning.


Digital transformation works best when co-owned — but only if you do it right

All too often, the CIO has gone in alone to the CFO, CEO, or board to argue the benefits of a digital project in order to obtain funding. A sounder approach is to confirm the need for a digital solution to a particular business problem with the CxO in charge of that business area, and to then go in together to the budget meeting so that both the technology and the business values can be effectively presented. Secondly, there is no reason the IT budget must bear the full costs of a co-owned project. ... A first step for CxOs and CIOs toward a new, unified value creation paradigm is to root out the historical roadblocks that stand in the way of executive cooperation. CxOs must fully engage in digital projects from start to finish, and CIOs must be willing to accept co-star (instead of star) billing in projects. Most CIOs are making this shift in thinking, but CxOs still lag in project participation. Second, CIOs must gain CxO hard-dollar budget commitments for digital projects. When both co-fund and advocate for digital projects in front of the board, CEO, and CFO, both have skin in the game. Third, co-assign executive leadership responsibilities for key project milestones. The CxO might be responsible for defining the business use case and what a specific digital solution must deliver, while the CIO might be responsible for developing the solution.


Australian legislators spar with platforms, each other over age assurance laws

If there’s one thing every platform can agree on when it comes to age assurance, it’s that biometric age verification measures are a good idea – but probably just not for them. The latest to suggest that maybe they aren’t subject to the law are TikTok and Snapchat. The companies have reportedly made the case to Australia’s eSafety Commissioner that there are potential legal workarounds to Australia’s incoming social media regulations, which will prohibit users under 16 from having accounts. ... “We’re doing these things, ultimately, for the good of young people in Australia. It will span television, radio, digital. There will be some on billboards near schools around the country. They’ll see it on TV. They’ll see it online. They’ll see it, ironically, on social media, because until the 10th of December, it is legal for kids to be on social media. And if that’s where they are, that’s where we need to talk to them about what this means and why we’re doing it.” ... There is, in questioning from Senator David Shoebridge of the Australian Greens, an apparent desire to assign blame to age verification providers. He argues that Australia’s privacy laws aren’t yet ready to accommodate such data collection, in that Australia’s 1988 Privacy Act doesn’t include requirements for the deletion of data. He asks about workarounds, like masks and VPNs.


5 Must-Follow Rules of Every Elite SOC: CISO’s Checklist

Even the best analysts can’t detect everything alone. When communication breaks down and teams work in silos, critical context slips away; alerts are missed, work gets repeated, and investigations slow to a crawl. That’s why collaboration has become a core part of modern SOC performance. Inside the ANY.RUN sandbox, the Teamwork feature lets analysts join the same live workspace, share results in real time, and coordinate across roles without switching tools. Team leads can assign tasks, monitor progress, and track productivity; all from a single interface that keeps the team aligned, no matter the time zone. ... Every SOC knows the feeling: too many alerts, too many clicks, not enough time. Analysts lose hours on repetitive actions: opening files, running scripts, clicking through pop-ups, or solving CAPTCHAs just to trigger hidden payloads. With Automated Interactivity inside the ANY.RUN sandbox, all those steps happen automatically. The system opens malicious links hidden behind QR codes, interacts with fake installers, solves CAPTCHAs, and performs other routine actions; no human input needed. The sandbox handles these interactions on its own, exposing every stage of the attack chain in a fraction of the time. ... Even the best detection tools miss things. False negatives happen all the time; a file marked “safe” can still hide malicious behavior deep in its code or trigger only under specific conditions.


Identifying risky candidates: Practical steps for security leaders

Today’s fraudsters and malicious insiders often leave digital breadcrumbs beyond a traditional organization’s direct visibility. Hiring teams cannot connect those breadcrumbs on their own, and they should partner with the security team to surface hidden affiliations, past fraudulent activities, or concerning behavioral patterns as a part of the overall candidate assessment. ... Outside-the-firewall checks are especially important in a remote or hybrid work environment where face-to-face verification is limited. The practical takeaway is that companies need to broaden their visibility: the more you combine traditional HR processes with external digital risk signals and collaborate across internal teams, the harder it becomes for a fraudulent candidate to work within your company undetected. ... Employees under stress or facing job insecurity may become more prone to misconduct, either through negligence or malice. Those with declining performance reviews, those facing disciplinary action, or those who have resisted security upgrades are worth closer scrutiny. Employees who give notice of resignation should be watched closely for unauthorized activity. ... The definition of insider threat is shifting. Where once the focus was on accidental misconfigurations or negligence, today it increasingly includes malicious acts, fraud, and hybrid cases where dissatisfaction or personal pressures drive risky behavior.


CISO Conversations: Are Microsoft’s Deputy CISOs a Signpost to the Future?

Microsoft may be unique in its size and complexity. But the difficulties faced by its CISO, Igor Tsyganskiy, are the same as those faced by all CISOs – just writ much larger. The expansion of the CISO role from governance (security), to include compliance (legal), internal app and external product development (engineering), integration with business leaders (business knowledge and communication skills), artificial intelligence (data scientist) and more, implies the solution adopted by Tsyganskiy should be considered by all CISOs. ... It is encouraging that both top Microsoft dCISOs believe that such career success can be achieved by anyone with the right attitude. “Personally, I like to understand technology to a deep level. But it isn’t absolutely essential,” explains Russinovich. “You can delegate things, just like Igor is delegating his need for deep understanding of everything to a pool of dCISOs. Some level of technical understanding will always be crucial, because otherwise you’re just completely disconnected. But I think you can be an effective CISO without being as technically deep as I personally like to be.” Johnson agrees that you can have a successful career in cyber without prior cyber qualifications. “You need to have the aptitude. You need to be willing to learn every day. You need to be willing to accept what you don’t know, and you need to network,” she says.

Daily Tech Digest - October 15, 2025


Quote for the day:

"Blessed are those who can give without remembering and take without forgetting." -- Anonymous



One Leader, Two Roles: The CISO-DPO Hybrid Model

The convergence is not without its challenges. The breadth of combined responsibilities could potentially lead to overload and burnout for leaders trying to keep pace with evolving technical threats and fast-changing privacy regulations. In addition, lapses in compliance could lead to hefty penalties for the organization, particularly as regulatory bodies are now penalizing CISOs for faltering in their compliance and reporting efforts - a reminder that continuous learning is not optional, but essential. This hybrid role requires people who are multi-skilled and knowledgeable in both domains, a seemingly daunting task. CISOs and DPOs must be viewed as closely associated partners - not as individuals who can cause a conflict of interest - in their compliance journey.  ... A hybrid role enables faster translation of regulatory requirements into security controls, resulting in accelerated compliance efforts and improved resilience overall. An integrated approach thus becomes far more efficient than individuals operating in silos, such as the DPO having to rely on a CISO who does not necessarily have a DPO-specific mandate but only an overarching security focus. Enterprises can create an ecosystem where security and privacy reinforce each other, and organizations can foster collaboration, and build trust and long-term value in an era of relentless digital risk.


Beyond the Black Box: Building Trust and Governance in the Age of AI

Without enough controls, organizations run the risk of being sanctioned by regulators, losing their reputation, or facing adverse impacts on people and communities. These threats can be managed only by an agile, collaborative AI governance model that prioritizes fairness, accountability, and human rights. ... Organizations must therefore strike a balance between openness and accountability, holding back to protect sensitive assets. This can be achieved by constructing systems that can explain their decisions clearly, keeping track of how models are trained, and making decisions using personal or sensitive data interpretable. ... Methods like adversarial debiasing, sample reweighting, and human evaluators assist in fixing errors prior to their amplification, making sure the results reflect values like justice, equity, and inclusion. ... Privacy-enhancing technologies (PETs) promote the protection of personal data while enabling responsible usage. For example, differential privacy adds a touch of statistical “noise” to keep individual identities hidden. Federated learning enables AI models to learn from data distributed across multiple devices, without needing access to the raw data. ... Compliance must be embedded in the AI lifecycle by means of impact assessments, documentation, and control scaling, especially for high‑risk applications like biometric identification or automated decision‑making.
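To make the differential-privacy "noise" concrete, here is a minimal sketch of the textbook Laplace mechanism for a counting query; this is the generic technique the excerpt alludes to, not any particular vendor's PET:

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1 (adding or removing one person
    changes the answer by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    scale = 1.0 / epsilon
    # A Laplace(0, b) draw is the difference of two Exponential(mean=b) draws.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Smaller epsilon (stronger privacy) means larger typical noise, lower utility.
random.seed(0)
strict = sum(abs(dp_count(100, 0.1) - 100) for _ in range(2000)) / 2000
loose = sum(abs(dp_count(100, 2.0) - 100) for _ in range(2000)) / 2000
print(strict > loose)  # True
```

The same tension drives the article's point: the noise that hides an individual's presence is exactly what erodes the data's usefulness, which is why the choice of epsilon is a governance decision, not just an engineering one.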


The rise of purpose-built clouds

The rise of purpose-built clouds is also driving multicloud strategies. Historically, many enterprises have avoided multicloud deployments, citing complexity in managing multiple platforms, compliance challenges, and security concerns. However, as the need for specialized solutions grows, businesses are realizing that a single vendor can’t meet their workload demands. ... Another major reason for purpose-built clouds is data residency and compliance. As regional rules like those in the European Union become stricter, organizations may find that general cloud platforms can create compliance issues. Purpose-built clouds can provide localized options, allowing companies to host workloads on infrastructure that satisfies regulatory standards without losing performance. This is especially critical for industries such as healthcare and financial services that must adhere to strict compliance standards. Purpose-built platforms enable companies to store data locally for compliance reasons and enhance workloads with features such as fraud detection, regulatory reporting, and AI-powered diagnostics. ... The rise of purpose-built clouds signals a philosophical shift in enterprise IT strategies. Instead of generic, one-size-fits-all solutions, organizations now recognize the value in tailoring investments to align directly with business objectives. 


Establishing Visibility and Governance for Your Software Supply Chain

Even if your organization isn’t the direct target, you can fall victim to attackers. A supply chain attack designed to gain access to a bank, for example, could also poison your supply chain. The attackers will gladly take your customer information or hold your servers hostage to ransomware. Modern software supply chains are incredibly complex webs of third-party code. To properly secure the supply chain, organizations must first gain visibility into all of the components that go into their applications. This is necessary not just on a per-application basis, but across the entire portfolio. ... The first step is to start building software bills of materials (SBOMs) at build time. The SBOM records what goes into your software, so it is the foundational piece of building asset visibility. You can then use that information to build a knowledge graph about your supply chain, including vulnerabilities and software licenses. When you aggregate these SBOMs across all of your application portfolio, you get a holistic view of all dependencies. ... One final piece of the puzzle is tracking software provenance. Tracking and gating on software provenance gives you another avenue to protect yourself from vulnerable code. This is often overlooked, but given the prevalence of attacks against open source library repositories, it’s more important than ever. 
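As a sketch of what aggregating SBOMs "across all of your application portfolio" can look like, the snippet below merges two CycloneDX-style component lists into a portfolio-wide view of which applications ship each dependency. The field layout follows CycloneDX's `components` array; the app and component examples are invented:

```python
from collections import defaultdict

def aggregate_sboms(sbom_docs):
    """Merge per-application CycloneDX-style SBOMs (parsed JSON dicts)
    into one portfolio view: component@version -> set of apps shipping it.
    """
    usage = defaultdict(set)
    for app_name, doc in sbom_docs.items():
        for comp in doc.get("components", []):
            key = f'{comp["name"]}@{comp.get("version", "?")}'
            usage[key].add(app_name)
    return usage

# Two toy SBOMs that happen to share one dependency.
sboms = {
    "billing": {"components": [{"name": "log4j-core", "version": "2.14.1"},
                               {"name": "jackson-databind", "version": "2.15.0"}]},
    "portal":  {"components": [{"name": "log4j-core", "version": "2.14.1"}]},
}
view = aggregate_sboms(sboms)
print(sorted(view["log4j-core@2.14.1"]))  # ['billing', 'portal']
```

With this inverted index in hand, answering "which of our apps ship the vulnerable version?" during the next disclosure becomes a lookup rather than a scramble.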


Avoiding chain of custody crisis: In-house destruction for audit-proof compliance

Chain of custody refers to the documented and unbroken trail of accountability that records the lifecycle of a sensitive asset, from creation and use to final destruction. For data stored on physical media like hard disk drives (HDDs), solid state drives (SSDs), or e-media, maintaining a secure and traceable chain of custody is essential for demonstrating regulatory compliance and ensuring operational integrity. ... With the right high-security equipment, such as NSA-listed paper shredders, hard drive crushers and shredders, and disintegrators, destruction can occur at the point of use – or at least within the facility – under supervision and with real-time documentation. This eliminates transport risks, reduces reliance on third parties, and keeps sensitive data within your organization’s security perimeter. ... Compliance auditors are increasingly looking beyond destruction certificates. They want transparency. That means policies, procedures, logs, and physical proof. With an in-house program, organizations can tailor destruction workflows to meet specific regulatory frameworks, from NIST 800-88 guidelines to DoD or ISO standards. ... High-security data destruction isn’t just about preventing breaches. It’s about instilling confidence both internally with leadership and stakeholders, and externally with regulators and clients. By keeping destruction in-house, organizations send a clear message: data security is non-negotiable.


If Architectures Could Talk, They’d Quote Your Boss

Architecture doesn’t fail in the codebase. It fails in the meeting rooms. In the handoffs. In the silences between teams who don’t talk — or worse, assume they understand each other. The real complexity lives between the lines — not of code, but of communication. And once we stop pretending otherwise, we begin to see that the technical is inseparable from the social. ... There’s a deep irony in the fact that many of us in software come from a binary world — one shaped by certainty, logic, and repeatability. We’re trained to seek out 1s and 0s, true or false, compile or fail. But architecture lives in the fog — in uncertainty, trade-offs, and risk. It’s not a world of 1s and 0s, but of shifting constraints and grey zones. Where engineers long for clarity, architecture demands comfort with ambiguity. Decisions rarely have a single correct answer — they have consequences, compromises, and contexts that evolve over time. It’s a game of incomplete information, where clarity emerges only through conversation, alignment, and compromise. This also explains why so many of our colleagues feel frustrated. Requirements change. Priorities shift. Stakeholders contradict each other. And it’s tempting to see all that as failure. But it’s not failure — it’s the environment. It’s how complex systems grow. Architecture isn’t about eliminating uncertainty. It’s about giving teams just enough structure to move within it with confidence.


CIOs’ AI confidence yet to match results

According to a new survey from AIOps observability provider Riverbed, 88% of technical specialists and business and IT leaders believe their organizations will make good on their AI expectations, despite only 12% currently having AI in enterprise-wide production. Moreover, just one in 10 AI projects has been fully deployed, respondents say, suggesting that enthusiasm is significantly outpacing the ability to deliver. ... One problem with IT leaders’ possible overconfidence about AI expectations is that most organizations have no concrete expectations to begin with, says Warren Wilbee, CTO of supply chain software provider ToolsGroup. “Are the expectations a 10% productivity gain, or a 2% drop in staffing?” he says. “The expectations are ill-defined.” Other AI experts see AI enthusiasm outpacing the difficulties of deploying the technology. In many cases, company leaders underestimate the technology requirements and the compliance and governance demands, says Patrizia Bertini, managing partner at UK IT regulatory advisory firm Aligned Consulting Group. ... Many organizations’ leaders don’t understand the full implications of rolling out and using AI, he says, with many not realizing the extent to which the technology will change the nature of work. Instead of executing tasks, many employees will manage agents that complete those tasks — a seismic shift. “Agentic AI holds enormous potential, but the path to full deployment will take time, requiring effort and investment,” he says.


What if your privacy tools could learn as they go?

The research explains that traditional local differential privacy methods tend to be conservative because they assume no knowledge about the data. This leads to adding more noise than needed, which harms data utility. The PML approach narrows that gap by making use of whatever knowledge can be safely derived from the data itself. This design shift resonates with challenges seen in industry. ... Beyond the case studies, the research provides a set of mathematical results that can be applied to other privacy settings. It shows how to compute optimal mechanisms under uncertainty, including closed-form solutions for simple binary data and a convex optimization program for more complex datasets. These results mean that privacy engineers could, in theory, design systems that automatically adjust to the available data. The framework explains how to choose privacy parameters to meet a desired balance between protection and accuracy, given a known probability of error. ... This research offers a way to improve one of the biggest tradeoffs in privacy engineering: the loss of utility caused by assuming no prior knowledge about the data-generating process. By allowing systems to safely incorporate limited, empirically derived information, it becomes possible to provide strong privacy guarantees while preserving more data usefulness. The findings also suggest that privacy guarantees do not have to come at such a steep cost to data utility. 
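The paper's own mechanisms aren't reproduced here, but the "desired balance between protection and accuracy, given a known probability of error" can be illustrated with the standard Laplace mechanism, for which the relationship has a closed form. This is a generic textbook bound, assumed here independently of the paper:

```python
import math

def epsilon_for_accuracy(t: float, delta: float, sensitivity: float = 1.0) -> float:
    """Smallest epsilon the Laplace mechanism needs so that the released
    value errs by more than t with probability at most delta.

    For Laplace noise with scale b, P(|noise| > t) = exp(-t / b), so the
    scale must satisfy b <= t / ln(1/delta), and epsilon = sensitivity / b.
    """
    b = t / math.log(1.0 / delta)
    return sensitivity / b

# Tolerating error > 5 only 1% of the time on a sensitivity-1 count:
eps = epsilon_for_accuracy(t=5.0, delta=0.01)
print(round(eps, 3))  # 0.921 (= ln(100) / 5)
```

Loosening the accuracy target (larger t) lets epsilon shrink, i.e. buys stronger privacy; the paper's contribution is that safe use of prior knowledge shifts this tradeoff further in the user's favor.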


13 cybersecurity myths organizations need to stop believing

Big tech platforms have strong verification that prevents impersonation - Some of the largest tech platforms like to talk about their strong identity checks as a way to stop impersonation. But looking good on paper is one thing, and holding up to the promise in the real world is another. “The truth is that even advanced verification processes can be easily bypassed,” says Ben Colman ... Buying more tools can bolster cybersecurity protection - One of the biggest traps businesses fall into is the assumption that they need more tools and platforms to protect themselves. And once they have those tools, they think they are safe. Organizations are lured into buying products “touted as the silver-bullet solution,” says Ian McShane. “This definitely isn’t the key to success.” Buying more tools doesn’t necessarily improve security because they often don’t have a tools problem but an operational one. ... Hiring more people will solve the cybersecurity problem - Professionals who are truly talented and dedicated to security are not that easy to find. So instead of searching for people to hire, businesses should prioritize retaining their cybersecurity professionals. They should invest in them and offer them the chance to gain new skills. “It is better to have a smaller group of highly trained IT professionals to keep an organization safe from cyber threats and attacks, rather than a disparate larger group that isn’t equipped with the right skills,” says McShane.


Where Stale Data Hides Inside Your Architecture (and How to Spot It)

Every system collects stale data over time — that part is obvious. What’s less obvious is how much of it your platform will accumulate and, more importantly, whether it builds up in places it never should. That’s no longer just an operational issue but an architectural one. ... Stale data often hides not in caching itself but in the gaps between cache layers. When application, storefront, and CDN caches don’t align, the system starts serving conflicting versions of the truth, like outdated prices or mismatched product images. ... A clear warning sign that your cache may hide stale data is when problems vanish after cache purges, only to return later. It often means the layers are competing rather than cooperating. ... One of the heaviest anchors for enterprise systems is transactional history that stays in production far longer than it should. Databases are built to serve current workloads, not to carry the full weight of years of completed orders and returns. ... Integrations with legacy systems often look stable because they “just work.” The trouble is that over time, those connections become blind spots. Data is passed along through brittle transformations, copied into staging tables, or synchronized with outdated protocols. ... Preventing stale data requires making freshness an architectural principle. It often starts with centralized cache management, because without a single policy for invalidation and refresh, caches across layers will drift apart.
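A toy illustration of the "centralized cache management" idea: one invalidation policy governs every layer, so a purge triggered in one layer reaches them all and the layers cannot serve conflicting versions of the truth. Layer names and TTLs below are invented for the sketch:

```python
class LayeredCache:
    """Toy multi-layer cache (e.g. app, storefront, CDN) under one
    central TTL and invalidation policy, so layers cannot drift apart.
    """
    def __init__(self, layer_names, ttl_seconds):
        self.ttl = ttl_seconds
        self.layers = {name: {} for name in layer_names}

    def put(self, key, value, now):
        for store in self.layers.values():
            store[key] = (value, now)

    def get(self, layer, key, now):
        entry = self.layers[layer].get(key)
        if entry is None:
            return None
        value, written = entry
        if now - written > self.ttl:
            self.invalidate(key)  # stale in one layer -> purge everywhere
            return None
        return value

    def invalidate(self, key):
        for store in self.layers.values():
            store.pop(key, None)

cache = LayeredCache(["app", "storefront", "cdn"], ttl_seconds=60)
cache.put("price:sku42", 19.99, now=0)
print(cache.get("cdn", "price:sku42", now=30))  # 19.99
print(cache.get("app", "price:sku42", now=90))  # None: expired, purged from all layers
print(cache.get("cdn", "price:sku42", now=40))  # None: the purge reached the CDN too
```

Real deployments do this with purge APIs and event-driven invalidation rather than a shared object, but the principle is the same: freshness decisions made in one place, enforced in all.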

Daily Tech Digest - October 14, 2025


Quote for the day:

"What you get by achieving your goals is not as important as what you become by achieving your goals." -- Zig Ziglar


Know your ops: Why all ops lead back to devops

When you see more terms that include the “ops” suffix, you should understand them as ideas that, as Graham Krizek, CEO of Voltage, puts it, “represent different layers of the same overarching goal. These concepts are not isolated silos but overlapping practices that support automation, collaboration, and scalability.” ... While site reliability engineering (SRE) and infrastructure as code (IaC) don’t have “ops” attached to their names, they can be seen in many ways as offshoots of the devops movement. SRE applies software engineering techniques to operations problems, with an emphasis on service-level objectives and error budgets. IaC shops manage and provision infrastructure using machine-readable definition files and scripts that can be version-controlled, automated, and tested just like application code. IaC underpins devops, gitops, and many specialized ops practices. ... “While it is not necessary for every IT professional to master each one individually, understanding the principles behind them is essential for navigating modern infrastructure,” he says. “The focus should remain on creating reliable systems and delivering value, not simply keeping up with new terminology.” In other words: you don’t need to collect ops like trading cards. You need to understand the fundamentals, specialize where it makes sense, and ignore the rest. Start with devops, add security if your compliance requirements demand it, and adopt cloudops practices if you’re heavily in the cloud. 
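The reconciliation loop at the heart of IaC and gitops can be sketched in a few lines: diff a version-controlled desired state against the actual state reported by the provider, and emit the actions needed to converge. Resource names here are invented:

```python
def reconcile(desired, actual):
    """Diff a declarative desired state against actual state and return
    the converging actions - the core loop behind IaC and gitops tools.
    """
    to_create = sorted(set(desired) - set(actual))
    to_delete = sorted(set(actual) - set(desired))
    to_update = sorted(k for k in set(desired) & set(actual)
                       if desired[k] != actual[k])
    return {"create": to_create, "update": to_update, "delete": to_delete}

# Desired state lives in a repo; actual state comes from the provider's API.
desired = {"web-server": {"size": "large"}, "db": {"size": "small"}}
actual = {"web-server": {"size": "medium"}, "old-cache": {"size": "small"}}
print(reconcile(desired, actual))
# {'create': ['db'], 'update': ['web-server'], 'delete': ['old-cache']}
```

Because the desired state is plain data in version control, it can be reviewed, tested, and rolled back like application code, which is exactly why IaC underpins devops, gitops, and the specialized ops practices above.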


Digital Trust as a Strategic Asset: Why CISOs Must Think Like CFOs

CFOs are great at framing problems in terms of money. CISOs must also figure out how much risks cost, what not taking action costs, how much revenue loss comes from median dwell time, and how much it will cost to recover. Boards want the truth, not spin. Translate technical metrics into business impact (e.g., how detection/response times and dwell time drive incident scope and recovery costs). Recent threat reports show global median dwell time has fallen to ~10 days, but impact still depends on speed of containment. ... Stop talking about technology. Start describing cybersecurity as keeping your business running, protecting your reputation and building consumer trust – not simply operational disruption, but also how risk scenarios affect P&Ls. ... CISOs need to know how to read trust balance sheets, not simply logs. This entails being able to understand risk economics, insurance models and how to allocate resources strategically. ... We are entering a new era in which CFOs and CISOs are both responsible for keeping the business running: earnings calls that include integrated trust measures; cyber insurance coverage that is in line with active threat modeling; cyber posture reports that meet regulatory standards, like financial audits; and shared leadership on risk and value initiatives at the board level. CISOs who understand trust economics will impact the futures of businesses by making security a part of strategy as well as operations.


Five actions for CISOs to manage cloud concentration risks

To effectively mitigate concentration risks, CISOs should start by identifying and documenting both third-party and fourth-party risks, with a focus on the most critical cloud providers. It is important to recognize that some non-cloud products may also have cloud dependencies, such as management consoles or reporting engines. Collaborating closely with strategic procurement and vendor management (SPVM) leaders ensures that each cloud provider has a clearly documented owner who understands their responsibilities. ... CISOs should not rely solely on service level agreements (SLAs) to mitigate financial losses from outages, as SLA payouts are often insufficient. Instead, focus on designing applications to gracefully manage limited failures and use cloud-native resilience patterns. In IaaS and PaaS, plan first for short-term failure of individual cloud services rather than for catastrophic failure of an entire provider. In addition, special attention should be given to cloud identity providers due to their position as a large single point of failure. ... To reduce the risk associated with single-vendor dependency, organizations should intentionally distribute applications and workloads across at least two cloud providers. While single-vendor solutions can simplify integration and sourcing, a multi-cloud approach limits the potential impact of an issue affecting any one provider.
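The "gracefully manage limited failures" advice above can be sketched as a simple failover pattern: try the primary provider, fall back to a secondary, and surface a clear error only when both fail. Provider names and the stub calls here are hypothetical stand-ins:

```python
# Minimal multi-provider failover sketch (illustrative only).

def fetch_with_failover(request, providers):
    """Try each (name, call) provider in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(request)
        except ConnectionError as exc:
            errors.append((name, str(exc)))  # record the failure, keep going
    raise RuntimeError(f"all providers failed: {errors}")

def primary(req):
    raise ConnectionError("primary region unavailable")

def secondary(req):
    return {"status": 200, "body": f"handled {req}"}

source, response = fetch_with_failover(
    "GET /orders", [("cloud-a", primary), ("cloud-b", secondary)]
)
print(source, response["status"])  # cloud-b 200
```

Real implementations layer in timeouts, retries with backoff, and health checks, but the shape — degrade before you fail — is the same.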


Your cyber risk problem isn’t tech — it’s architecture

The development of a risk culture — including appetite, tolerance and profile — within the scope of the management program is essential to provide real visibility into ongoing risks, how they are being perceived and mitigated, and to leverage the organization’s ability to improve its security posture. Consequently, the company begins to deliver reliable products to customers, secure its reputation and build a secure image to achieve a competitive advantage and brand recognition. ... Another important factor to be developed in parallel with raising risk culture is the continuous information security awareness process. This action should include all employees, especially those involved in incident management and cyber resilience. ... From a technical standpoint, it is important to select and implement appropriate controls from the NIST CSF stages: Identify, Protect, Detect, Respond and Recover. However, the selection of each control for building guardrails will depend on the overall cybersecurity big picture and market best practices. For each identified issue, the corresponding control must be determined, each monitored by the three lines of defense ... Finally, the cyber management program must also consider legal, regulatory and regional requirements, including privacy and cybersecurity laws. This covers LGPD, CCPA, GDPR, FFIEC, Central Bank regulations, etc., to understand the consequences of non-compliance, which can pose serious issues for the organization.


Even the best AI agents are thwarted by this protocol - what can be done

An emerging category of artificial intelligence middleware known as Model Context Protocol is meant to make generative AI programs such as chatbots more powerful by letting them connect with various resources, including packaged software such as databases. Multiple studies, however, reveal that even the best AI models struggle to use Model Context Protocol. ... Having a standard does not mean that an AI model, whose functionality includes a heavy dose of chance ("probability" in technical terms), will faithfully implement MCP. An AI model plugged into MCP has to generate output that achieves several things, such as formulating a plan to answer a query by choosing which external resources to access, in what order to contact the MCP servers that lead to those external applications, and then structuring several requests for information to produce a final output to answer the query. ... The immediate takeaway from the various benchmarks is that AI models need to adapt to a new epoch in which using MCP is a challenge. AI models may have to evolve in new directions to meet the challenge. All three studies identify a problem: Performance degrades as the AI models have to access more MCP servers. The complexity of multiple resources starts to overwhelm even the models that can best plan what steps to take at the outset. As Wu and team put it in their MCPMark paper, the complexity of all those MCP servers strains any AI model's ability to keep track of it all.
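For context on what "structuring several requests" means in practice: MCP messages are JSON-RPC 2.0, so each tool invocation the model plans must be serialized as a well-formed request roughly like the one below (the tool name and arguments are hypothetical):

```python
import json

# Sketch of an MCP tool invocation. MCP is built on JSON-RPC 2.0;
# a tools/call request names the tool and passes its arguments.

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

msg = make_tool_call(1, "query_database", {"sql": "SELECT count(*) FROM orders"})
print(json.dumps(msg, indent=2))
```

The benchmarks cited above are, in effect, measuring whether a model can plan and emit long chains of such requests against many servers without losing the thread.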


Chaos engineering on Google Cloud: Principles, practices, and getting started

A common misconception is that cloud environments automatically provide application resiliency, eliminating the need for testing. Although cloud providers do offer various levels of resiliency and SLAs for their cloud products, these alone do not guarantee that your business applications are protected. If applications are not designed to be fault-tolerant or if they assume constant availability of cloud services, they will fail when a particular cloud service they depend on is not available. ... As a proactive discipline, chaos engineering enables organizations to identify weaknesses in their systems before they lead to significant outages or failures, where a system includes not only the technology components but also the people and processes of an organization. By introducing controlled, real-world disruptions, chaos engineering helps test a system's robustness, recoverability, and fault tolerance. This approach allows teams to uncover potential vulnerabilities, so that systems are better equipped to handle unexpected events and continue functioning smoothly under stress. ... Chaos Toolkit is an open-source framework written in Python that provides a modular architecture where you can plug in other libraries (also known as ‘drivers’) to extend your chaos engineering experiments. ... To enable Google Cloud customers and engineers to introduce chaos testing in their applications, we’ve created a series of Google Cloud-specific chaos engineering recipes. Each recipe covers a specific scenario to introduce chaos in a particular Google Cloud service.
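A Chaos Toolkit experiment is a declarative JSON/YAML document built around a steady-state hypothesis, a method (the disruption), and rollbacks. A skeletal sketch of that shape, with the probe URL and the action's Python module as hypothetical placeholders:

```python
import json

# Skeletal Chaos Toolkit-style experiment. The structure (title,
# steady-state-hypothesis, method, rollbacks) follows the toolkit's
# experiment format; the probe target and action module are hypothetical.

experiment = {
    "title": "Service tolerates loss of one backend instance",
    "description": "Verify the steady state holds while an instance is down.",
    "steady-state-hypothesis": {
        "title": "Application responds",
        "probes": [{
            "type": "probe",
            "name": "app-responds",
            "tolerance": 200,
            "provider": {"type": "http", "url": "https://example.com/health"},
        }],
    },
    "method": [{
        "type": "action",
        "name": "stop-one-instance",
        "provider": {"type": "python",
                     "module": "myproject.chaos",  # hypothetical driver
                     "func": "stop_instance"},
    }],
    "rollbacks": [],
}

print(json.dumps(experiment["title"]))
```

The toolkit runs the probes before and after the method, so a failed hypothesis after the disruption is exactly the weakness the excerpt says chaos engineering is meant to surface.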


The attack surface you can’t see: Securing your autonomous AI and agentic systems

The deep, non-deterministic nature of the underlying Large Language Models (LLMs) and the complex, multi-step reasoning they perform create systems where key decisions are often unexplainable. When an AI agent performs an unauthorized or destructive action, auditing it becomes nearly impossible. ... When you give an AI agent autonomy and tool access, you create a new class of trusted digital insider. If that agent is compromised, the attacker inherits all its permissions. An autonomous agent, which often has persistent access to critical systems, can be compromised and used to move laterally across the network and escalate privileges. The consequences of this over-permissioning are already being felt. ... The sheer speed and scale of agent autonomy demand a shift from traditional perimeter defense to a Zero Trust model specifically engineered for AI. This is no longer an optional security project; it is an organizational mandate for any leader deploying AI agents at scale. ... Securing Agentic AI is not just about extending your traditional security tools. It requires a new governance framework built for autonomy, not just execution. The complexity of these systems demands a new security playbook focused on control and transparency ... The future of enterprise efficiency is agentic, but the future of enterprise security must be built around controlling that agency. 


Systems that Sustain: Lessons that Nature Never Forgot but We Did

In practice, a major flaw in many technology projects is that existing multi-level approval systems are simply digitalised, leading to only marginal improvements. The process becomes a digital twin of the old: while processing speeds increase, the workflow itself remains long, redundant, and often cumbersome. The introduction of a new digital interface adds to the woes rather than simplifies them. Had processes been genuinely reengineered, digitisation could have saved time by simplifying steps, reducing the training load, improving efficiency, cutting costs, and enabling quicker adaptation in response to change. Another persistent pitfall in public sector digital transformation is misunderstanding the promise of analytics, and more crucially, confusing outputs with outcomes. ... Humans, as players in nature’s game, are unique. Evolution gifted us consciousness, language, memory, and complex social bonds—traits that allowed the creation of technology, law, storytelling, and culture. Yet these very blessings seeded traits antithetical to nature’s raw logic ... Artificial intelligence presents a tantalising prospect. Unlike its human creators, a well-designed AI can, under ideal circumstances, create technologies based on the same bias-free principles that drive nature: redesign for purpose, learn and adapt from data, and commit to real, measurable outcomes. 


California introduces new child safety law aimed at AI chatbots

The law is set to come into effect on Jan. 1, 2026, and requires chatbot operators to implement age verification and warn users of the risks of companion chatbots. The bill implements harsher penalties for anyone profiting from illegal deepfakes, with fines of up to $250,000 per offense. In addition, technology companies must establish protocols that seek to prevent self-harm and suicide. These protocols will have to be shared with the California Department of Health to ensure they’re suitable. Companies will also be required to share statistics on how often their services issue crisis center prevention alerts to their users. Some AI companies have already taken steps to protect children, with OpenAI recently introducing parental controls and content safeguards in ChatGPT, along with a self-harm detection feature. Meanwhile, Character AI has added a disclaimer to its chatbot that reminds users that all chats are generated by AI and fictional. Newsom is no stranger to AI legislation. In September, he signed into law another bill called SB 53, which mandates greater transparency from AI companies. More specifically, it requires AI firms to be fully transparent about the safety protocols they implement, while providing protections for whistleblower employees. The bill means that California is the first U.S. state to require AI chatbots to implement safety protocols, but other states have previously introduced more limited legislation. 


Embedding Security into Enterprise Architecture: A TOGAF-Based Approach to Risk-Aligned Design

Treating security as a separate discipline leads to inefficiencies, redundancies, and vulnerabilities. Bolting on security after systems are designed often results in costly retrofits, fragmented controls, and misaligned priorities. It also creates friction between teams — where security is seen as a blocker rather than a partner. Integrating ESA into EA from the outset changes the dynamic. It ensures that security is considered in every architectural decision — from business processes to data flows, from application design to infrastructure deployment. It aligns security with business goals, reduces risk exposure, and accelerates delivery. ... ISM brings operational rigor to ESA. It defines how security is implemented, monitored, and improved. ISM includes identity and access management, continuity planning, compliance management, and security awareness. When ISM is integrated into EA, security becomes part of the enterprise fabric. It’s not just a set of policies — it’s a way of working. ... This integration is not a technical adjustment — it’s a strategic evolution. It requires collaboration, shared language, and a commitment to embedding security into every architectural decision. When done right, it reduces risk, accelerates delivery, and builds confidence across the enterprise. Security by design is not a luxury — it’s a necessity. And EA Capability is how we make it real.

Daily Tech Digest - October 13, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


Is vibe coding ruining a generation of engineers?

In the era of AI, the traditional journey to coding expertise that has long supported senior developers may be at risk. Easy access to large language models (LLMs) enables junior coders to quickly identify issues in code. While this speeds up software development, it can distance developers from their own work, delaying the growth of core problem-solving skills. As a result, they may avoid the focused, sometimes uncomfortable hours required to build expertise and progress on the path to becoming successful senior developers. ... The increasing availability of these tools from Anthropic, Microsoft and others may reduce opportunities for coders to refine and deepen their skills. Rather than “banging their heads against the wall” to debug a few lines or select a library to unlock new features, junior developers may simply turn to AI for an assist. This means senior coders with problem-solving skills honed over decades may become an endangered species. ... While concerns about AI diminishing human developer skills are valid, businesses shouldn’t dismiss AI-supported coding. They just need to think carefully about when and how to deploy AI tools in development. These tools can be more than productivity boosters; they can act as interactive mentors, guiding coders in real time with explanations, alternatives and best practices.


How Reassured Are You by Your Cloud Compliance?

For organizations, the assurance of a secure cloud hinges on proficient NHI management. By implementing a strategic plan, companies can significantly bolster their defenses against unauthorized access and potential threats. Understanding and managing machine identities becomes a crucial pillar of cloud assurance strategies. ... As organizations strive to maintain their competitive edge, the strategic importance of NHIs in ensuring compliance and security cannot be overstated. By fostering a culture of security awareness and leveraging robust management platforms, businesses can confidently navigate the complex terrain of cloud compliance. ... Compliance is a formidable challenge. However, NHI management offers actionable solutions to this challenge. By auditing and tracking NHIs, organizations gain unparalleled visibility into access patterns and potential breaches, ensuring adherence to relevant regulatory frameworks across multiple sectors. Automation of audit trails and enforcement of policies can significantly reduce the burden on compliance teams, allowing companies to focus on strategic areas of business development. Additionally, adaptive NHI management systems can be scaled and updated to align with new compliance standards. This flexibility positions businesses to react quickly to regulatory changes without incurring significant downtime or resource allocation shifts.


AI Powered SOC: The Shift from Reactive to Resilient

Current SOC operations are described as “buried — not just in alert volume, but in disconnected tools, fragmented telemetry, expanding cloud workloads, and siloed data.” This paints a picture of overwhelmed teams struggling to maintain control in an increasingly complex threat landscape. ... With AI Agents, automated response actions, such as containment and remediation, can be executed with human oversight for high-impact situations. AI can handle routine containment and remediation tasks, such as isolating a compromised host or blocking a malicious hash. After an action is taken, the AI can perform validation checks to ensure business operations are not negatively impacted, with automatic rollback triggers if necessary. ... This transition is not a flip of a switch; it is a strategic journey. The organizations that succeed will be those who invest in integrating AI with existing security ecosystems, upskill their talent to work with these new technologies, and ensure robust governance is in place. Embracing an AI-powered SOC is no longer optional but a strategic imperative. By building a partnership between human expertise and machine efficiency, organizations will transform their security operations from a vulnerable cost center into a resilient and agile business enabler. AI is not a silver bullet—but it’s a strategic lever. The SOC of the future won’t just detect threats; it will predict, prevent, and persist. Shifting to resilience means embracing AI not as a tool, but as a partner in defending digital trust.
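The contain → validate → roll back loop described above can be sketched in miniature; every function here is a hypothetical stand-in for a real SOC integration (EDR isolation, health probes, and so on):

```python
# Toy sketch of automated containment with validation and rollback.

def contain(host, state):
    state[host] = "isolated"

def rollback(host, state):
    state[host] = "online"

def business_healthy(state):
    # Hypothetical validation check: a real SOC would probe live
    # service health, not a dict.
    return state.get("payments-api") != "isolated"

def respond(host, state):
    """Isolate a host, then verify business impact; roll back if needed."""
    contain(host, state)
    if not business_healthy(state):
        rollback(host, state)
        return "rolled-back"
    return "contained"

fleet = {"laptop-42": "online", "payments-api": "online"}
print(respond("laptop-42", fleet))     # contained
print(respond("payments-api", fleet))  # rolled-back
```

The point of the pattern is the second branch: autonomy is bounded by an automatic check that business operations survived the action.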


Cybersecurity As A Strategy: The CIO’s Playbook for a Perma-Threat Landscape

When cybersecurity is seen as a strategic function, it helps businesses stay strong. It protects intellectual property, makes sure that rules are followed, and builds the trust of customers, partners, and other stakeholders. It can also help businesses be more innovative by letting them look into new markets, use new technologies, and change how they do business with confidence. The main point of this playbook is simple: CIOs need to stop using reactive defense models and start seeing cybersecurity as a key part of their business strategy. In a world where threats are always present, the companies that do well will be the ones whose leaders see cyber resilience as important for brand reputation, business continuity, and staying ahead of the competition. ... In this situation, being reactive is not only dangerous, it’s also costly. The costs of a cyberattack go well beyond fixing the damage right away. Companies can be fined by the government, sued, lose money when their systems go down, and have to pay more for insurance. The reputational damage can be even more devastating: loss of customer trust, decreased investor confidence, and long-term brand erosion. According to studies in the field, the average cost of a data breach is now over a million dollars, and high-profile cases have cost hundreds of millions. ... CIOs need to stop thinking about “building walls and patching holes” and start thinking about how to find, stop, and neutralize threats before they can do any damage. 


What to look for in a data protection platform for hybrid clouds

Data protection is a broad category that includes data security but also encompasses backup and disaster recovery, safe data storage, business continuity and resilience, and compliance with data privacy regulations. ... In the public cloud model, the hyperscalers (such as Amazon Web Services, Google Cloud, and Microsoft Azure) are responsible for protecting their own infrastructure, but the enterprise using them — you — is responsible for properly configuring and managing its own data in the cloud. One of the most common causes of cloud-based data breaches is a simple misconfiguration of an Amazon S3 storage bucket. Cloud security posture management (CSPM) tools can help identify misconfigurations, among other risks. ... Data protection can be performed with on-premises appliances or in the cloud. And organizations can manage their data protection functionality themselves or turn to a managed service. The trend lines are clear: Just as applications and data are moving to the cloud, data protection is moving to the cloud as well, due to the scalability, flexibility, and accessibility that the cloud provides. ... Because every enterprise is different and because hybrid clouds are both complex and varied in their handling of data, you need to get a clear grasp on your specific needs, capabilities, and resources before engaging prospective vendors and then choosing specific solutions for data protection.
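As a concrete illustration of the S3 misconfiguration check a CSPM tool automates: a bucket's four public-access-block flags must all be enabled, or it becomes a candidate finding. A minimal sketch (evaluating a live bucket would use an SDK call rather than a hand-built dict):

```python
# Sketch of a CSPM-style check against S3's PublicAccessBlock settings.
# The four flag names mirror the S3 configuration keys.

REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def bucket_is_exposed(public_access_block):
    """Return True if any of the four public-access guards is disabled."""
    return not all(public_access_block.get(flag, False) for flag in REQUIRED_FLAGS)

safe = {flag: True for flag in REQUIRED_FLAGS}
risky = dict(safe, BlockPublicPolicy=False)

print(bucket_is_exposed(safe))   # False
print(bucket_is_exposed(risky))  # True
```

Note the `.get(flag, False)` default: a missing setting is treated as exposure, which matches the "secure by explicit configuration" posture the shared-responsibility model demands.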


Git Services Need Better Security. Here’s How End-to-End Encryption Could Help

Most development teams rely on platforms like GitHub, GitLab, or Bitbucket to manage their projects and collaborate across teams. These services work well for version control and collaboration, but there’s a problem. System breaches have become common, and the data stored in repositories can be highly valuable to attackers. Think about what’s in your repositories. Source code, API keys, infrastructure configurations, and the complete history of your project’s development. If someone gains unauthorized access to your Git service provider’s systems, they can access all of that. Current solutions don’t effectively address this problem. Some open-source projects have attempted to add encryption to Git workflows, but they suffer from two major issues: weak security guarantees and poor performance. The overhead is so large that most teams won’t adopt them. ... End-to-end encryption for Git services would mean that even if your service provider’s systems are compromised, your code remains secure. The provider wouldn’t have the keys to decrypt your repositories. This level of security has become standard for messaging apps and cloud storage. It makes sense to apply the same principles to Git services, especially given the value of what’s stored there. For regulated industries, this could help meet compliance requirements. For any organization with valuable intellectual property, it adds an important layer of protection.
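The end-to-end model can be illustrated in miniature: the client encrypts a blob before push, so the provider only ever stores ciphertext. The XOR one-time pad below is a deliberately toy stand-in for a real authenticated cipher such as AES-GCM; it is here only to show where the key lives:

```python
import secrets

# Toy end-to-end encryption model for a Git blob. Do not use XOR/OTP
# in practice; a real design would use an authenticated cipher.

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, key))

blob = b"API_KEY=super-secret"
client_key = secrets.token_bytes(len(blob))   # never leaves the client

stored_on_server = xor(blob, client_key)       # all the provider sees
recovered = xor(stored_on_server, client_key)  # only the key holder can do this

assert recovered == blob
print("roundtrip ok")
```

The property that matters is the asymmetry: a breach of the provider yields only `stored_on_server`, which is useless without `client_key` — the same guarantee end-to-end encrypted messengers already provide.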


Bringing authentication into the AI century

Today’s customer journey flows much differently than before, spreading across devices, shaped by automation, and powered by artificial intelligence (AI) assistants. What worked five years or even one year ago might already be standing in the way of creating impactful experiences. ... Authentication flows that anticipate outdated behavior and patterns, like expecting static sessions and manual inputs, aren’t able to keep up with the new normal of digital commerce. Patterns that used to look suspicious, including ultra-fast clicks and cross-device shopping, might be totally legitimate. However, if legacy systems can’t tell the difference, the experience of real customers will suffer. They might get flagged as fraud and experience friction, ultimately ending in a negative experience and a lost sale. Furthermore, you must choose the right authentication method in accordance with specific fraud MOs to avoid letting fraud slip through the cracks. ... Leaders don’t need to choose between protecting their business and giving customers the smooth experience they expect. Modern authentication must be built on trust, timing, and intelligence, rather than interruptions. ... Authentication needs to be just as dynamic as today’s fraudsters. It’s not about adding more steps; it’s about smarter context, stronger signals, and systems that can keep up. When trust drives your flow, authentication works seamlessly in the background, keeping real customers loyal and real risks out.


From Automation to Autonomy: Agentic AI set to transform India’s telecom sector

KPMG’s report introduces the Agentic AI Stack for Indian Telcos, a six-layer model covering customer experience, network intelligence, orchestration, data integration, and governance, designed to guide operators from traditional networks toward intelligent, autonomous systems. Current adoption trends show that half of telecom companies have implemented their first GenAI use case, and business leaders are planning to invest USD 25 million in new tech talent and USD 24 million in customer experience initiatives over the next 12 months. Looking ahead, KPMG recommends that telecom operators scale AI pilots to enterprise-wide deployments with AI-ready infrastructure and skilled teams, while policymakers should create agile regulations and governance frameworks to enable safe and responsible AI innovation. Collaboration among startups, academia, and industry partners is critical to building an inclusive and intelligent telecom ecosystem. “Agentic AI is more than a technological advancement — it is a strategic paradigm shift that empowers telecom operators to move from reactive to autonomous systems,” said Akhilesh Tuteja, Partner & National Leader – Technology, Media and Telecommunications (TMT), KPMG in India. “This transformation will unlock new levels of operational efficiency, customer personalization, and revenue growth. India’s unparalleled scale, data richness, and innovation ecosystem uniquely position it to lead the global telecom AI revolution.”


TRIAL: Charting the Path from SCREAM to AARAM – A Simplified Guide for Effective Enterprise Architecture

Despite billions invested annually in enterprise architecture (EA), organizations grapple with a persistent gap between theoretical frameworks and practical execution. In 2025, 94% of CIOs deem EA “absolutely critical” for embedding sustainability and driving digital resilience, yet 57% of architects report feeling underutilized in strategic initiatives. ... At its core, architecture is about effectively managing the lifecycle changes of architecture components and their relationships. TRIAL establishes an EA approach that resonates with architects and stakeholders by embracing these lifecycle stages as central motifs. This approach captures and builds a data and AI-driven architecture around its underlying evolving repository continuum, leveraging the same engagement model for collaborative execution aligned with organizational objectives. ... Enterprise architecture maturity traditionally requires skilled resources, extensive knowledge, and significant time investment. Organizations face resource scarcity while architects average only 18-24 months tenure, making adaptive architecture management nearly impossible. This challenge is exacerbated by broader technology trends, where 70-85% of enterprise AI projects fail due to poor data management, misalignment with business goals, and architectural oversights—rates double those of non-AI IT projects. TRIAL addresses this through progressive maturity states that build upon each other. Organizations advance through clearly defined maturity levels—from Balanced (foundation) through Yearly (planning),


Ask a Data Ethicist: Is It Wrong to Digitally “Resurrect” Someone?

There was even a situation recently which saw the recreation of a murdered person deliver an AI impact statement in court – literally speaking from beyond the grave. This marked a legal first and raised a lot of controversy over whether this was a type of emotional manipulation or a reasonable opportunity to give the victim a voice. It’s clear though, that the door is now open for others to do this, raising more of these questions, particularly as the tools to make this type of AI are now widely available. ... Data privacy laws afford a level of protection when it comes to our personal data. However, personal data is not personal data if you are no longer alive. That is to say, data protection laws don’t extend to the deceased. The laws exist to protect living individuals. ... It’s a complex question with no “one size fits all” response. The answer might depend on several factors including: Their wishes as outlined in their will; The wishes of their family and estate; How they will be represented in this new digital form; Who controls the digital entity; and Who might be compensated or stand to gain from the digital entity. Increasingly, all of us might want to plan for our digital afterlife, including whether or not we want a digital afterlife. Having conversations with loved ones now about their wishes for their data and other digital assets, including what should or should not be done with these when they are gone, can provide clear guidance for making an ethical choice with respect to the question of digital resurrection.