Daily Tech Digest - June 20, 2025


Quote for the day:

"Everything you’ve ever wanted is on the other side of fear." -- George Addair



Encryption Backdoors: The Security Practitioners’ View

On the one hand, “What if such access could deliver the means to stop crime, aid public safety and stop child exploitation?” But on the other hand, “The idea of someone being able to look into all private conversations, all the data connected to an individual, feels exposing and vulnerable in unimaginable ways.” As a security practitioner he has both moral and practical concerns. “Even if lawful access isn’t the same as mass surveillance, it would be difficult to distinguish between ‘good’ and ‘bad’ users without analyzing them all.” Morally, it is a reversal of the presumption of innocence and means no-one can have any guaranteed privacy. Professionally he says, “Once the encryption can be broken, once there is a backdoor allowing someone to access data, trust in that vendor will lessen due to the threat to security and privacy introducing another attack vector into the equation.” It is this latter point that is the focus for most security practitioners. “From a practitioner’s standpoint,” says Rob T Lee, chief of research at SANS Institute and founder at Harbingers, “we’ve seen time and again that once a vulnerability exists, it doesn’t stay in the hands of the ‘good guys’ for long. It becomes a target. And once it’s exploited, the damage isn’t theoretical. It affects real people, real businesses, and critical infrastructure.”


Visa CISO Subra Kumaraswamy on Never Allowing Cyber Complacency

Kumaraswamy is always thinking about talent and technology in cybersecurity. Talent is a perennial concern in the industry, and Visa is looking to grow its own. The Visa Payments Learning Program, launched in 2023, aims to help close the skills gap in cyber through training and certification. “We are offering this to all of the employees. We’re offering it to our partners, like the banks, our customers,” says Kumaraswamy. Right now, Visa leverages approximately 115 different technologies in cyber, and Kumaraswamy is constantly evaluating where to go next. “How do I [get to] the 116th, 117th, 181th?” he asks. ”That needs to be added because every layer counts.” Of course, GenAI is a part of that equation. Thus far, Kumaraswamy and his team are exploring more than 80 different GenAI initiatives within cyber. “We’ve already taken about three to four of those initiatives … to the entire company. That includes the what we call a ‘shift left’ process within Visa. It is now enabled with agentic AI. It’s reducing the time to find bugs in the code. It is also helping reduce the time to investigate incidents,” he shares. Visa is also taking its best practices in cybersecurity and sharing them with their customers. “We can think of this as value-added services to the mid-size banks, the credit unions, who don’t have the scale of Visa,” says Kumaraswamy.


Agentic AI in automotive retail: Creating always-on sales teams

To function effectively, digital agents need memory. This is where memory modules come into play. These components store key facts about ongoing interactions, such as the customer’s vehicle preferences, budget, and previous questions. For instance, if a returning visitor had previously shown interest in SUVs under a specific price range, the memory module allows the AI to recall that detail. Instead of restarting the conversation, the agent can pick up where it left off, offering an experience that feels personalised and informed. Memory modules are critical for maintaining consistency across long or repeated interactions. Without them, agentic AI would struggle to replicate the attentive service provided by a human salesperson who remembers returning customers. ... Despite the intelligence of agentic AI, there are scenarios where human involvement is still needed. Whether due to complex financing questions or emotional decision-making, some buyers prefer speaking to a person before finalizing their decision. A well-designed agentic system should recognize when it has reached the limits of its capabilities. In such moments, it should facilitate a handover to a human representative. This includes summarizing the conversation so far, alerting the sales team in real-time, and scheduling a follow-up if required.


Multicloud explained: Why it pays to diversify your cloud strategy

If your cloud provider were to suffer a massive and prolonged outage, that would have major repercussions on your business. While that’s pretty unlikely if you go with one of the hyperscalers, it’s possible with a more specialized vendor. And even with the big players, you may discover annoyances, performance problems, unanticipated charges, or other issues that might cause you to rethink your relationship. Using services from multiple vendors makes it easier to end a relationship that feels like it’s gone stale without you having to retool your entire infrastructure. It can be a great means to determine which cloud providers are best for which workloads. And it can’t hurt as a negotiating tactic when contracts expire or when you’re considering adding new cloud services. ... If you add more cloud resources by adding services from a different vendor, you’ll need to put in extra effort to get the two clouds to play nicely together, a process that can range from “annoying” to “impossible.” Even after bridging the divide, there’s administrative overhead involved—it’ll be harder to keep tabs on data protection and privacy, for instance, and you’ll need to track cloud usage and the associated costs for multiple vendors. Network bandwidth. Many vendors make it cheap and easy to move data to and within their cloud, but might make you pay a premium to export it. 


Decentralized Architecture Needs More Than Autonomy

Decentralized architecture isn’t just a matter of system design - it’s a question of how decisions get made, by whom, and under what conditions. In theory, decentralization empowers teams. In practice, it often exposes a hidden weakness: decision-making doesn’t scale easily. We started to feel the cracks as our teams expanded quickly and our organizational landscape became more complex. As teams multiplied, architectural alignment started to suffer - not because people didn’t care, but because they didn’t know how or when to engage in architectural decision-making. ... The shift from control to trust requires more than mindset - it needs practice. We leaned into a lightweight but powerful toolset to make decentralized decision-making work in real teams. Chief among them is the Architectural Decision Record (ADR). ADRs are often misunderstood as documentation artifacts. But in practice, they are confidence-building tools. They bring visibility to architectural thinking, reinforce accountability, and help teams make informed, trusted decisions - without relying on central authority. ... Decentralized architecture works best when decisions don’t happen in isolation. Even with good individual practices - like ADRs and advice-seeking - teams still need shared spaces to build trust and context across the organization. That’s where Architecture Advice Forums come in.


4 new studies about agentic AI from the MIT Initiative on the Digital Economy

In their study, Aral and Ju found that human-AI pairs excelled at some tasks and underperformed human-human pairs on others. Humans paired with AI were better at creating text but worse at creating images, though campaigns from both groups performed equally well when deployed in real ads on social media site X. Looking beyond performance, the researchers found that the actual process of how people worked changed when they were paired with AI . Communication (as measured by messages sent between partners) increased for human-AI pairs, with less time spent on editing text and more time spent on generating text and visuals. Human-AI pairs sent far fewer social messages, such as those typically intended to build rapport. “The human-AI teams focused more on the task at hand and, understandably, spent less time socializing, talking about emotions, and so on,” Ju said. “You don’t have to do that with agents, which leads directly to performance and productivity improvements.” As a final part of the study, the researchers varied the assigned personality of the AI agents using the Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The AI personality pairing experiments revealed that programming AI personalities to complement human personalities greatly enhanced collaboration. 


DevOps Backup: Top Reasons for DevOps and Management

Depending on the industry, you may need to comply with different security protocols, acts, certifications, and standards. If your company operates in a highly regulated industry, like healthcare, technology, financial services, pharmaceuticals, manufacturing, or energy, those security and compliance regulations and protocols can be even more strict. Thus, to meet the compliance stringent security requirements, your organization needs to implement security measures, like role-based access controls, encryption, ransomware protection measures — just to name RTOs and RPOs, risk-assessment plans, and other compliance best practices… And, of course, a backup and disaster recovery plan is one of them, too. It ensures that the company will be able to restore its critical data fast, guaranteeing the data availability, accessibility, security, and confidentiality of your data. ... Another issue that is closely related to compliance is data retention. Some compliance regulations require organizations to keep their data for a long time. As an example, we can mention NIST’s requirements from its Security and Privacy Controls for Information Systems and Organizations: “… Storing audit records on separate systems or components applies to initial generation as well as backup or long-term storage of audit records…”


How AI can save us from our 'infinite' workdays, according to Microsoft

Activity is not the same as progress. What good is work if it's just busy work and not tackling the right tasks or goals? Here, Microsoft advises adopting the Pareto Principle, which postulates that 20% of the work should deliver 80% of the outcomes. And how does this involve AI? Use AI agents to handle low-value tasks, such as status meetings, routine reports, and administrative churn. That frees up employees to focus on deeper tasks that require the human touch. For this, Microsoft suggested watching the leadership keynote from the Microsoft 365 Community Conference on Building the Future Firm. ... Instead of using an org chart to delineate roles and responsibilities, turn to a work chart. A work chart is driven more by outcome, in which teams are organized around a specific goal. Here, you can use AI to fill in some of the gaps, again freeing up employees for more in-depth work. ... Finally, Microsoft pointed to a new breed of professionals known as agent bosses. They handle the infinite workday not by putting in more hours but by working smarter. One example cited in the report is Alex Farach, a researcher at Microsoft. Instead of getting swamped in manual work, Farach uses a trio of AI agents to act as his assistants. One collects daily research. The second runs statistical analysis. And the third drafts briefs to tie all the data together.
 

Data Governance and AI Governance: Where Do They Intersect?

AIG and DG share common responsibilities in guiding data as a product that AI systems create and consume, despite their differences. Both governance programs evaluate data integration, quality, security, privacy, and accessibility. For instance, both governance frameworks need to ensure quality information meets business needs. If a major retailer discovered their AI-powered product recommendation engine was suggesting irrelevant items to customers, then DG and AIG would want the issue resolved. However, either approach or a combination could be best to solving the problem. Determining the right governance response requires analyzing the root issue. ... DG and AIG provide different approaches; which works best depends on the problem. Take the example, above, of the inaccurate pricing information to a customer in response to a query. The data governance team audits the product data pipeline and finds inconsistent data standards and missing attributes feeding into the AI model. However, the AI governance team also identifies opportunities to enhance the recommendation algorithm’s logic for weighting customer preferences. The retailer could resolve the data quality issues through DG while AIG improved the AI model’s mechanics by taking a collaborative approach with both data governance and AI governance perspectives. 


Deepfake Rebellion: When Employees Become Targets

Surviving and mitigating such an attack requires moving beyond purely technological solutions. While AI detection tools can help, the first and most critical line of defense lies in empowering the human factor. A resilient organization builds its bulwarks on human risk management and security awareness training, specifically tailored to counter the mental manipulation inherent in deepfake attacks. Rapidly deploy trained ambassadors. These are not IT security personnel, but respected peers from diverse departments trained to coach workshops. ... Leadership must address employees first, acknowledge the incident, express understanding of the distress caused, and unequivocally state the deepfake is under investigation. Silence breeds speculation and distrust. There should be channels for employees to voice concerns, ask questions, and access support without fear of retribution. This helps to mitigate panic and rebuild a sense of community. Ensure a unified public response, coordinating Comms, Legal, and HR. ... The antidote to synthetic mistrust is authentic trust, built through consistent leadership, transparent communication, and demonstrable commitment to shared values. The goal is to create an environment where verification habits are second nature. It’s about discerning malicious fabrication from human error or disagreement.

Daily Tech Digest - June 19, 2025


Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis


Introduction to Cloud Native Computing

In cloud native systems, security requires a different approach compared to traditional architectures. In a distributed system, the old “castle and moat” model of creating secure perimeter around vital systems, applications, APIs and data is not feasible. In a cloud native architecture, the “castles” are distributed across various environments — public and private cloud, on-prem — and they may pop up and disappear in seconds. ... DevSecOps integrates security practices within the DevOps process, ensuring that security is a shared responsibility and is considered at every stage of the software development life cycle. Implementing DevSecOps in a cloud native context helps organizations maintain robust security postures while capitalizing on the agility and speed of cloud native development. ... Cloud native applications often operate in dynamic environments that are subject to rapid changes. By adopting the following strategies and practices, cloud native applications can effectively scale in response to user demands and environmental changes, ensuring high performance and user satisfaction. ... By strategically adopting hybrid and multicloud approaches and effectively managing their complexities, organizations can significantly enhance their agility, resilience, and operational efficiency in the cloud native landscape. While hybrid and multicloud strategies offer benefits, they also introduce complexity in management. 


How a New CIO Can Fix the Mess Left by Their Predecessor

The new CIO should listen to IT teams, business stakeholders, and end-users to uncover pain points and achieve quick wins that will build credibility, says Antony Marceles, founder of Pumex, a software development and technology integration company in an online interview. Whether to rebuild or repair depends on the architecture's integrity. "Sometimes, patching legacy systems only delays the inevitable, but in other cases smart triage can buy time for a thoughtful transformation." ... Support can often come from unconventional corners, such as high-performing team leads, finance partners, or external advisors, all of whom may have experienced their own transitions, Marceles says. "The biggest mistake is trying to fix everything at once or imposing top-down change without context," he notes. "A new CIO needs to balance urgency with empathy, understanding that cleaning up someone else’s mess is as much about culture repair as it is about tech realignment." ... When you inherit a messy situation, it's both a technical and leadership challenge, de Silva says. "The best thing you can do is lead with transparency, make thoughtful decisions, and rebuild confidence across the organization." People want to see steady hands and clear thinking, he observes. "That goes a long way in these situations."


Every Business Is Becoming An AI Company. Here's How To Do It Right

“The extent to which we can use AI to augment the curious, driven and collaborative tendencies of our teams, the more optimistic we can be about their ability to develop new, unimagined innovations that open new streams of revenue,” Aktar writes. Otherwise, executives may expect more from employees without considering that new tech tools require training to use well, and troubleshooting to maintain. Plus, automated production routinely requires human intervention to protect quality. If executives merely expect teams to churn out more work — seeing AI tools and services as a way to reduce headcount — the result may be additional work and lower morale. “Workers report spending more time reviewing AI-generated content and learning tool complexities than the time these tools supposedly save,” writes Forbes contributor Luis Romero, the founder of GenStorm AI. ... “What draws people in now isn’t just communication. It’s the sense that someone notices effort before asking for output,” writes Forbes contributor Vibhas Ratanjee, a Gallup researcher who specializes in leadership development. “Most internal tools are built to save time. Fewer steps. Smoother clicks. But frictionless doesn’t always mean thoughtful. When we remove human pauses, we risk removing the parts that build connection.”


Four Steps for Turning Data Clutter into Competitive Power: Your Sovereign AI and Data Blueprint

The ability to act on data in real-time isn’t just beneficial—it’s a necessity in today’s fast-paced world. Accenture reports that companies able to leverage real-time data are 2.5 times more likely to outperform competitors. Consider Uber, which adjusts its pricing dynamically based on real-time factors like demand, traffic, and weather conditions. This near-instant capability drives business success by aligning offerings with evolving customer needs. Companies stand a lot to gain by giving frontline employees the ability to make informed, real-time decisions. But in order to do so, they need a near-instant understanding of customer data. This means the data needs to flow seamlessly across domains so that real-time models can provide timely information to help workers make impactful decisions. ... The success of AI initiatives depends on the ability to access, govern, and process at scale. Therefore, the success of an enterprise’s AI initiatives hinges on its ability to access its data anywhere, anytime—while maintaining compliance. These new demands require a governance framework that operates across environments—from on-premise to private and public clouds—while maintaining flexibility and compliance every step of the way. Companies like Netflix, which handles billions of daily data events, rely on sophisticated data architectures to support AI-driven recommendations.


Third-party risk management is broken — but not beyond repair

The consequences of this checkbox culture extend beyond ineffective risk management and have led to “questionnaire fatigue” among vendors. In many cases, security questionnaires are delivered as one-size-fits-all templates, an approach that floods recipients with static, repetitive questions, many of which aren’t relevant to their specific role or risk posture. Without tailoring or context, these reviews become procedural exercises rather than meaningful evaluations. The result is surface-level engagement, where companies appear to conduct due diligence but in fact miss critical insights. Risk profiles end up looking complete on paper while failing to capture the real-world complexity of the threats they’re meant to address. ... To break away from this harmful cycle, organizations must overhaul their approach to TPRM from the ground up by adopting a truly risk-based approach that moves beyond simple compliance. This requires developing targeted, substantive security questionnaires that prioritize depth over breadth and get to the heart of a vendor’s security practices. Rather than sending out blanket questionnaires, organizations should create assessments that are specific, relevant, and probing, asking questions that genuinely reveal the strengths and weaknesses of a vendor’s cybersecurity posture. This emphasis on quality over quantity in assessments allows organizations to move away from treating TPRM as a paperwork exercise and back toward its original intent: effective risk management.


The rise of agentic AI and what it means for ANZ enterprise

Agentic AI has unique benefits, but it also presents unique risks, and as more organisations adopt agentic AI, they're discovering that robust data governance— the establishment of policies, roles, and technology to manage and safeguard an organization's data assets—is essential when it comes to ensuring that these systems function securely and effectively. ... Effective governance is on the rise because it helps address critical AI-related security and productivity issues like preventing data breaches and reducing AI-related errors. Without strong data governance measures, agents may inadvertently expose sensitive information or make flawed autonomous decisions. With strong data governance measures, organisations can proactively safeguard their data by implementing comprehensive governance policies and deploying technologies to monitor AI runtime environments. This not only enhances security but also ensures that agentic AI tools operate optimally, delivering significant value with minimal risk. ... To grapple with these and other AI-related challenges, Gartner now recommends that organisations apply its AI TRiSM (trust, risk, and security management) frameworks to their data environments. Data and information governance are a key part of this framework, along with AI governance and AI runtime inspection and enforcement technology. 


Choosing a Clear Direction in the Face of Growing Cybersecurity Demands

CISO’s must balance multiple priorities with many facing overwhelming workloads, budget constraints, insufficient board-level support and unreasonable demands. From a revenue perspective they must align cybersecurity strategies with business goals, ensuring that security investments support revenue generation and protect critical assets. They’re under pressure to automate repetitive tasks, consolidating and streamlining processes while minimizing downtime and disruption. And then there is AI and the potential benefits it may bring to the security team and to the productivity of users. But all the while remembering that with AI, we have put technology in the hands of users, who have not traditionally been good with tech, because we’ve made it easier and quicker than ever before. ... They need to choose one key goal rather than trying to do everything. Do I want to “go faster” and innovate? Or do I want to become a more efficient business and “do more” with less Whichever they opt for, they also need to figure out all the different tools to use to accomplish that goal. This is where cybersecurity automation and AI comes into play. Using AI, machine learning, and automated tools to detect, prevent, and respond to cyber threats without human intervention, CISOs can streamline their security operations, reduce manual workload, and improve response times to cyberattacks and, in effect, do more with less.


Will AI replace humans at work? 4 ways it already has the edge

There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos. AI models can do the job blazingly fast, a capability with important industrial applications. ... AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally. AI models can do this for every single product, TV show, website, and internet user. ... AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. 


8 steps to ensure data privacy compliance across borders

Given the conflicting and evolving nature of global privacy laws, a one-size-fits-all approach is ineffective. Instead, companies should adopt a baseline standard that can be applied globally. “We default to the strictest applicable standard,” says Kory Fong, VP of engineering at Private AI in Toronto. “Our baseline makes sure we can flexibly adapt to regional laws without starting from scratch each time a regulation changes.” ... “It’s about creating an environment where regulatory knowledge is baked into day-to-day decision making,” he says. “We regularly monitor global policy developments and involve our privacy experts early in the planning process so we’re prepared, not just reactive.” Alex Spokoiny, CIO at Israel’s Check Point Software Technologies, says to stay ahead of emerging regulations, his company has moved away from rigid policies to a much more flexible, risk-aware approach. “The key is staying close to what data we collect, where it flows, and how it’s used so we can adjust quickly when new rules come up,” he says. ... Effective data privacy management requires a multidisciplinary approach, involving IT, legal, compliance, and product teams. “Cross-functional collaboration is built into our steering teams,” says Lexmark’s Willett. “Over the years, we’ve fundamentally transformed our approach to data governance by establishing the Enterprise Data Governance and Ethics community.”


Leading without titles: The rise of influence-driven leadership

Leadership isn’t about being in charge—it’s about showing up when it matters, listening when it's hardest, and holding space when others need it most. It’s not about corner offices or formal titles—it’s about quiet strength, humility, and the courage to uplift. The leaders who will shape the future are not defined by their job descriptions, but by how they make others feel—especially in moments of uncertainty. The associate who lifts a teammate’s spirits, the manager who creates psychological safety, the engineer who ensures quieter voices are heard—these are the ones redefining leadership through compassion, not control. As Simon Sinek reminds us, "Leadership is not about being in charge. It is about taking care of those in your charge." Real leadership leaves people better than it found them. It inspires not by authority, but by action. It earns loyalty not through power, but through presence. According to Gartner (2024), 74% of employees are more likely to stay in organisations where leadership is approachable, transparent, and grounded in shared values—not status. Let’s recognise these leaders. Let’s build cultures that reward empathy, connection, and quiet courage. Because true leadership makes people feel seen—not small.

Daily Tech Digest - June 18, 2025


Quote for the day:

"Build your own dreams, or someone else will hire you to build theirs." -- Farrah Gray



Agentic AI adoption in application security sees cautious growth

The study highlights a considerable proportion of the market preparing for broader adoption, with nearly 50% of respondents planning to integrate agentic AI tools within the next year. The incremental approach taken by organisations reflects a degree of caution, particularly around the concept of granting AI systems the autonomy to make decisions independently.  ... The survey results illustrate the impact agentic AI could have on software development pipelines. Thirty percent of respondents believe integrating agentic AI into continuous integration and continuous deployment (CI/CD) pipelines would significantly enhance the process. The increased speed and frequency of code deployment-termed "vibe coding" in industry parlance-has led to faster development cycles. This acceleration does not necessarily alter the ratio of application security personnel to developers, but it can create the impression of a widening gap, with security teams struggling to keep up. ... Key findings from the survey reveal varied perceptions on the utility of agentic AI for security teams. Forty-four percent of those surveyed believe agentic AI's greatest benefit lies in supporting the identification, prioritisation, and remediation of vulnerabilities. 


Why Conventional Disaster Recovery Won’t Save You from Ransomware

Cyber incident recovery planning means taking measures that mitigate the unique challenges of ransomware recovery, such as: Immutable, offsite backups. These backups are stored offsite to minimise the risk that threat actors will be able to destroy backup data. While clean-room recovery environments serve as a secondary environment where workloads can be spun back up following a ransomware attack. This makes it possible to keep the original environment intact for forensics purposes while still performing rapid recovery. Finally, to avoid replicating the malware that led to the ransomware breach, cyber incident recovery must include a process for finding and extricating malware from backups prior to recovery. The unpredictable nature of ransomware attacks means that cyber incident recovery operations must be flexible enough to enable a nimble reaction to unexpected circumstances, like redeploying individual applications instead of simply replicating an entire server image if the server was compromised but the apps were not. ... Maintaining these capabilities can be challenging, even for organisations with extensive IT resources. In addition to the operational complexity of having to manage a secondary, clean-room recovery site and formulate intricate ransomware recovery plans, it’s costly to acquire and maintain the infrastructure necessary to ensure successful recovery.


Cybersecurity takes a big hit in new Trump executive order

Specific orders Trump dropped or relaxed included ones mandating (1) federal agencies and contractors adopt products with quantum-safe encryption as they become available in the marketplace, (2) a stringent Secure Software Development Framework (SSDF) for software and services used by federal agencies and contractors, (3) the adoption of phishing-resistant regimens such as the WebAuthn standard for logging into networks used by contractors and agencies, (4) the implementation new tools for securing Internet routing through the Border Gateway Protocol, and (5) the encouragement of digital forms of identity. ... Critics said the change will allow government contractors to skirt directives that would require them to proactively fix the types of security vulnerabilities that enabled the SolarWinds compromise. "That will allow folks to checkbox their way through 'we copied the implementation' without actually following the spirit of the security controls in SP 800-218," Jake Williams, a former hacker for the National Security Agency who is now VP of research and development for cybersecurity firm Hunter Strategy, said in an interview. "Very few organizations actually comply with the provisions in SP 800-218 because they put some onerous security requirements on development environments, which are usually [like the] Wild West."


Mitigating AI Threats: Bridging the Gap Between AI and Legacy Security

AI systems, particularly those with adaptive or agentic capabilities, evolve dynamically, unlike static legacy tools built for deterministic environments. This inconsistency renders systems vulnerable to AI-focused attacks, such as data poisoning, prompt injection, model theft, and agentic subversion—attacks that often evade traditional defenses. Legacy tools struggle to detect these attacks because they don’t followpredictable patterns, requiring more adaptive, AI-specific security solutions. Human flaws and behavior only worsen these weaknesses; insider attacks, social engineering, and insecure interactions with AI systems leave organizations vulnerable to exploitation. ... AI security frameworks like NIST’s AI Risk Management Framework incorporate human risk management to ensure that AI security practices align with organizational policies. Also modeled on the fundamental C.I.A. triad, the “manage” phase specifically includes employee training to uphold AI security principles across teams. For effective use of these frameworks, cross-departmental coordination is required. There needs to be collaboration among security staff, data scientists, and human resource practitioners to formulate plans that ensure AI systems are protected while encouraging their responsible and ethical use.


Modernizing your approach to governance, risk and compliance

Historically, companies treated GRC as an obligation to meet–and if legacy solutions were effective enough in meeting GRC requirements, organizations struggled to make a case for modernization. A better way to think about GRC is a means of maximizing the value for your company by tying out those efforts to unlock revenue and increased customer trust, and not simply by reducing risks, passing audits, and staying compliant. GRC modernization can open the door to a host of other benefits, such as increased velocity of operations and an enhanced team member (both GRC team members and internal control / risk owners alike) experience. For instance, for businesses that need to demonstrate compliance to customers as part of third-party or vendor risk management initiatives, the ability to collect evidence and share it with clients faster isn’t just a step toward risk mitigation. These efforts also help close more deals and speed up deal cycle time and velocity. When you view GRC as an enabler of business value rather than a mere obligation, the value of GRC modernization comes into much clearer focus. This vision is what businesses should embrace as they seek to move away from legacy GRC strategies that don’t waste time and resources, but fundamentally reduce their ability to stay competitive.


What is Cyberespionage? A Detailed Overview

Cyber espionage involves the unauthorized access to confidential information, typically to gain strategic, political, or financial advantage. This form of espionage is rooted in the digital world and is often carried out by state-sponsored actors or independent hackers. These attackers infiltrate computer systems, networks, or devices to steal sensitive data. Unlike cyber attacks, which primarily target financial gain, cyber espionage is focused on intelligence gathering, often targeting government agencies, military entities, corporations, and research institutions. ... One of the primary goals of cyber espionage is to illegally access trade secrets, patents, blueprints, and proprietary technologies. Attackers—often backed by foreign companies or governments—aim to acquire innovations without investing in research and development. Such breaches can severely damage a competitor’s advantage, leading to billions in lost revenue and undermining future innovation. ... Governments and other organizations often use cyber espionage to gather intelligence on rival nations or political opponents. Cyber spies may breach government networks or intercept communications to secretly access sensitive details about diplomatic negotiations, policy plans, or internal strategies, ultimately gaining a strategic edge in political affairs.


European Commission Urged to Revoke UK Data Adequacy Decision Due to Privacy Concerns

The items in question include sweeping new exemptions that allow law enforcement and government agencies to access personal data, loosening of regulations governing automated decision-making, weakening restrictions on data transfers to “third countries” that are otherwise considered inadequate by the EU, and increasing the possible ways in which the UK government would have power to interfere with the regular work of the UK Data Protection Authority. EDRi also cites the UK Border Security, Asylum and Immigration Bill as a threat to data adequacy, which has passed the House of Commons and is currently before the House of Lords. The bill’s terms would broaden intelligence agency access to customs and border control data, and exempt law enforcement agencies from UK GDPR terms. It also cites the UK’s Public Authorities (Fraud, Error and Recovery) Bill, currently scheduled to go before the House of Lords for review, which would allow UK ministers to order that bank account information be made available without demonstrating suspicion of wrongdoing. The civil society group also indicates that the UK ICO would likely become less independent under the terms of the UK Data Bill, which would give the UK government expanded ability to hire, dismiss and adjust the compensation of all of its board members.


NIST flags rising cybersecurity challenges as IT and OT systems increasingly converge through IoT integration

Connectivity can introduce significant challenges for organizations attempting to apply cybersecurity controls to OT and certain IoT products. OT equipment may use modern networking technologies like Ethernet or Wi-Fi, but is often not designed to connect to the internet. In many cases, OT and IoT systems prioritize trustworthiness aspects such as safety, resiliency, availability, and cybersecurity differently than traditional IT equipment, which can complicate control implementation. While IoT devices can sometimes replace OT equipment, they often introduce different or significantly expanded functionality that organizations must carefully evaluate before moving forward with replacement. Organizations should consider how other aspects of trustworthiness, such as safety, privacy, and resiliency, factor into their approach to cybersecurity. It is also important to address how they will manage the differences in expected service life between IT, OT, and IoT systems and their components. The agency identified that federal agencies are actively deploying IoT technologies to enhance connectivity, security, environmental monitoring, transportation, healthcare, and industrial automation.


How Organizations Can Cross the Operational Chasm

A fundamental shift in operational capability is reshaping the competitive landscape, creating a clear distinction between market leaders and laggards. This growing divide isn’t merely about technological adoption — it represents a strategic inflection point that directly affects market position, customer retention and shareholder value. ... The message is clear: Organizations must bridge this divide to remain competitive. Crossing this chasm requires more than incremental improvements. It demands a fundamental transformation in operational approach, embracing AI and automation to build the resilience necessary for today’s digital landscape. ... Digital operations resiliency is a proactive approach to safeguarding critical business services by reducing downtime and ensuring seamless customer experiences. It focuses on minimizing operational disruptions, protecting brand reputation and mitigating business risk through standardized incident management, automation and compliance with service-level agreements (SLAs). Real-time issue resolution, efficient workflows and continuous improvement are put into place to ensure operational efficiency at scale, helping to provide uninterrupted service delivery. 


7 trends shaping digital transformation in 2025 - and AI looms large

Poor integration is the common theme behind all these challenges. If agents are unable to access the data and capabilities they need to understand user queries, find a solution, and resolve these issues for them, their impact is severely limited. As many as 95% of IT leaders claim integration issues are a key factor that impedes AI adoption. ... The surge in demand for AI capabilities will exacerbate the problem of API and agent sprawl, which occurs when different teams and departments build integrations and automations without any centralized management or coordination. Already, an estimated quarter of APIs are ungoverned. Three-fifths of IT and security practitioners said their organizations had at least one data breach due to API exploitation, according to a 2023 study from the Ponemon Institute and Traceable. ... Robotic process automation (RPA) is already helping organizations enhance efficiency, cut operational costs, and reduce manual toil by up to two hours for each employee every week in the IT department alone. These benefits have driven a growing interest in RPA. In fact, we could see near-universal adoption of the technology by 2028, according to Deloitte. In 2025, organizations will evolve their use of RPA technology to reduce the need for humans at every stage of the operational process. 

Daily Tech Digest - June 17, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley



Understanding how data fabric enhances data security and governance

“The biggest challenge is fragmentation; most enterprises operate across multiple cloud environments, each with its own security model, making unified governance incredibly complex,” Dipankar Sengupta, CEO of Digital Engineering Services at Sutherland Global told InfoWorld. ... Shadow IT is also a persistent threat and challenge. According to Sengupta, some enterprises discover nearly 40% of their data exists outside governed environments. Proactively discovering and onboarding those data sources has become non-negotiable. ... A data fabric deepens organizations’ understanding and control of their data and consumption patterns. “With this deeper understanding, organizations can easily detect sensitive data and workloads in potential violation of GDPR, CCPA, HIPAA and similar regulations,” Calvesbert commented. “With deeper control, organizations can then apply the necessary data governance and security measures in near real time to remain compliant.” ... Data security and governance inside a data fabric shouldn’t just be about controlling access to data, it should also come with some form of data validation. The cliched saying “garbage-in, garbage-out” is all too true when it comes to data. After all, what’s the point of ensuring security and governance on data that isn’t valid in the first place?


AI isn’t taking your job; the big threat is a growing skills gap

While AI can boost productivity by handling routine tasks, it can’t replace the strategic roles filled by skilled professionals, Vianello said. To avoid those kinds of issues, agencies — just like companies — need to invest in adaptable, mission-ready teams with continuously updated skills in cloud, cyber, and AI. The technology, he said, should augment – not replace — human teams, automating repetitive tasks while enhancing strategic work. Success in high-demand tech careers starts with in-demand certifications, real-world experience, and soft skills. Ultimately, high-performing teams are built through agile, continuous training that evolves with the tech, Vianello said. “We train teams to use AI platforms like Copilot, Claude and ChatGPT to accelerate productivity,” Vianello said. “But we don’t stop at tools; we build ‘human-in-the-loop’ systems where AI augments decision-making and humans maintain oversight. That’s how you scale trust, performance, and ethics in parallel.” High-performing teams aren’t born with AI expertise; they’re built through continuous, role-specific, forward-looking education, he said, adding that preparing a workforce for AI is not about “chasing” the next hottest skill. “It’s about building a training engine that adapts as fast as technology evolves,” he said.


Got a new password manager? Don't leave your old logins exposed in the cloud - do this next

Those built-in utilities might have been good enough for an earlier era, but they aren't good enough for our complex, multi-platform world. For most people, the correct option is to switch to a third-party password manager and shut down all those built-in password features in the browsers and mobile devices you use. Why? Third-party password managers are built to work everywhere, with a full set of features that are the same (or nearly so) across every device. After you make that switch, the passwords you saved previously are left behind in a cloud service you no longer use. If you regularly switch between browsers (Chrome on your Mac or Windows PC, Safari on your iPhone), you might even have multiple sets of saved passwords scattered across multiple clouds. It's time to clean up that mess. If you're no longer using a password manager, it's prudent to track down those outdated saved passwords and delete them from the cloud. I've studied each of the four leading browsers: Google Chrome, Apple's Safari, Microsoft Edge, and Mozilla Firefox. Here's how to find the password management settings for each one, export any saved passwords to a safe place, and then turn off the feature. As a final step, I explain how to purge saved passwords and stop syncing.


AI and technical debt: A Computer Weekly Downtime Upload podcast

Given that GenAI technology hit the mainstream with GPT 4 two years ago, Reed says: “It was like nothing ever before.” And while the word “transformational” tends to be generously overused in technology he describes generative AI as “transformational with a capital T.” But transformations are not instant and businesses need to understand how to apply GenAI most effectively, and figure out where it does and does not work well. “Every time you hear anything with generative AI, you hear the word journey and we're no different,” he says. “We are trying to understand it. We're trying to understand its capabilities and understand our place with generative AI,” Reed adds. Early adopters are keen to understand how to use GenAI in day-to-day work, which, he says, can range from being an AI-based work assistant or a tool that changes the way people search for information to using AI as a gateway to the heavy lifting required in many organisations. He points out that bet365 is no different. “We have a sliding scale of ambition, but obviously like anything we do in an organisation of this size, it must be measured, it must be understood and we do need to be very, very clear what we're using generative AI for.” One of the very clear use cases for GenAI is in software development. 


Cloud Exodus: When to Know It's Time to Repatriate Your Workloads

Because of the inherent scalability of cloud resources, the cloud makes a lot of sense when the compute, storage, and other resources your business needs fluctuate constantly in volume. But if you find that your resource consumption is virtually unchanged from month to month or year to year, you may not need the cloud. You may be able to spend less and enjoy more control by deploying on-prem infrastructure. ... Cloud costs will naturally fluctuate over time due to changes in resource consumption levels. It's normal if cost increases correlate with usage increases. What's concerning, however, is a spike in cloud costs that you can't tie to consumption changes. It's likely in that case that you're spending more either because your cloud service provider raised its prices or your cloud environment is not optimized from a cost perspective. ... You can reduce latency (meaning the delay between when a user requests data on the network and when it arrives) on cloud platforms by choosing cloud regions that are geographically proximate to your end users. But that only works if your users are concentrated in certain areas, and if cloud data centers are available close to them. If this is not the case, you are likely to run into latency issues, which could dampen the user experience you deliver. 


The future of data center networking and processing

The optical-to-electrical conversion that is performed by the optical transceiver is still needed in a CPO system, but it moves from a pluggable module located at the faceplate of the switching equipment to a small chip (or chiplet) that is co-packaged very closely to the target ICs inside the box. Data center chipset heavyweights Broadcom and Nvidia have both announced CPO-based data center networking products operating at 51.2 and 102.4 Tb/s. ... Early generation CPO systems, such as those announced by Broadcom and Nvidia for Ethernet switching, make use of high channel count fiber array units (FAUs) that are designed to precisely align the fiber cores to their corresponding waveguides inside the PICs. These FAUs are challenging to make as they require high fiber counts, mixed single-mode (SM) and polarization maintaining (PM) fibers, integration of micro-optic components depending on the fiber-to-chip coupling mechanism, highly precise tolerance alignments, CPO-optimized fibers and multiple connector assemblies.  ... In addition to scale and cost benefits, extreme densities can be achieved at the edge of the PIC by bringing the waveguides very close together, down to about 30µm, which is far more than what can be achieved with even the thinnest fibers. Next generation fiber-to-chip coupling will enable GPU optics – which will require unprecedented levels of density and scale.


Align AI with Data, Analytics and Governance to Drive Intelligent, Adaptive Decisions and Actions Across the Organisation

Unlocking AI’s full business potential requires building executive AI literacy. They must be educated on AI opportunities, risks and costs to make effective, future-ready decisions on AI investments that accelerate organisational outcomes. Gartner recommends D&A leaders introduce experiential upskilling programs for executives, such as developing domain-specific prototypes to make AI tangible. This will lead to greater and more appropriate investment in AI capabilities. ... Using synthetic data to train AI models is now a critical strategy for enhancing privacy and generating diverse datasets. However, complexities arise from the need to ensure synthetic data accurately represents real-world scenarios, scales effectively to meet growing data demand and integrates seamlessly with existing data pipelines and systems. “To manage these risks, organisations need effective metadata management,” said Idoine. “Metadata provides the context, lineage and governance needed to track, verify and manage synthetic data responsibly, which is essential to maintaining AI accuracy and meeting compliance standards.” ... Building GenAI models in-house offers flexibility, control and long-term value that many packaged tools cannot match. As internal capabilities grow, Gartner recommends organisations adopt a clear framework for build versus buy decisions. 


Do microServices' Benefits Supersede Their caveats? A Conversation With Sam Newman

A microservice is one of those where it is independently deployable so I can make a change to it and I can roll out new versions of it without having to change any other part of my system. So things like avoiding shared databases are really about achieving that independent deployability. And it's a really simple idea that can be quite easy to implement if you know about it from the beginning. It can be difficult to implement if you're already in a tangled mess. And that idea of independent deployability has interesting benefits because the fact that something is independently deployable is obviously useful because it's low impact releases, but there's loads of other benefits that start to flow from that. ... The vast majority of people who tell me they've scaling issues often don't have them. They could solve their scaling issues with a monolith, no problem at all, and it would be a more straightforward solution. They're typically organizational scale issues. And so, for me, what the world needs from our IT's product-focused, outcome-oriented, and more autonomous teams. That's what we need, and microservices are an enabler for that. Having things like team topologies, which of course, although the DevOps topology stuff was happening around the time of my first edition of my book, that being kind of moved into the team topology space by Matthew and Manuel around the second edition again sort of helps kind of crystallize a lot of those concepts as well.


Why Businesses Must Upgrade to an AI-First Connected GRC System

Adopting a connected GRC solution enables organizations to move beyond siloed operations by bringing risk and compliance functions onto a single, integrated platform. It also creates a unified view of risks and controls across departments, bringing better workflows and encouraging collaboration. With centralized data and shared visibility, managing complex, interconnected risks becomes far more efficient and proactive. In fact, this shift toward integration reflects a broader trend that is seen in the India Regulatory Technology Business Report 2024–2029 findings, which highlight the growing adoption of compliance automation, AI, and machine learning in the Indian market. The report points to a future where GRC is driven by data, merging operations, technology, and control into a single, intelligent framework. ... An AI-first, connected GRC solution takes the heavy lifting out of compliance. Instead of juggling disconnected systems and endless updates, it brings everything together, from tracking regulations to automating actions to keeping teams aligned. For compliance teams, that means less manual work and more time to focus on what matters. ... A smart, integrated GRC solution brings everything into one place. It helps organizations run more smoothly by reducing errors and simplifying teamwork. It also means less time spent on admin and better use of people and resources where they are really needed.


The Importance of Information Sharing to Achieve Cybersecurity Resilience

Information sharing among different sectors predominantly revolves around threats related to phishing, vulnerabilities, ransomware, and data breaches. Each sector tailors its approach to cybersecurity information sharing based on regulatory and technological needs, carefully considering strategies that address specific risks and identify resolution requirements. However, for the mobile industry, information sharing relating to cyberattacks on the networks themselves and misuse of interconnection signalling are also the focus of significant sharing efforts. Industries learn from each other by adopting sector-specific frameworks and leveraging real-time data to enhance their cybersecurity posture. This includes real-time sharing of indicators of compromise (IoCs) and the techniques, tactics, and procedures (TTPs) associated with phishing campaigns. An example of this is the recently launched Stop Scams UK initiative, bringing together tech, telecoms and finance industry leaders, who are going to share real-time data on fraud indicators to enhance consumer protection and foster economic security. This is an important development, as without cross-industry information sharing, determining whether a cybersecurity attack campaign is sector-specific or indiscriminate becomes difficult. 

Daily Tech Digest - June 16, 2025


Quote for the day:

"A boss has the title, a leader has the people." -- Simon Sinek


How CIOs are getting data right for AI

Organizations that have taken steps to better organize their data are more likely to possess data maturity, a key attribute of companies that succeed with AI. Research firm IDC defines data maturity as the use of advanced data quality, cataloging and metadata, and data governance processes. The research firm’s Office of the CDO Survey finds firms with data maturity are far more likely than other organizations to have generative AI solutions in production. ... “We have to be mindful of what we put into public data sets,” says Yunger. With that caution in mind, Servier has built a private version of ChatGPT on Microsoft Azure to ensure that teams benefit from access to AI tools while protecting proprietary information and maintaining confidentiality. The gen AI implementation is used to speed the creation of internal documents and emails, Yunger says. In addition, personal data that might crop up in pharmaceutical trials must be treated with the utmost caution to comply with the European Union’s AI Act,  ... To achieve what he calls “sustainable AI,” AES’s Reyes counsels the need to strike a delicate balance: implementing data governance, but in a way that does not disrupt work patterns. He advises making sure everyone at your company understands that data must be treated as a valuable asset: With the high stakes of AI in play, there is a strong reason it must be accurately cataloged and managed.


Alan Turing Institute reveals digital identity and DPI risks in Cyber Threats Observatory Workshop

The trend indicates that threat actors could be targeting identity mechanisms such as authentication, session management, and role-based access systems. The policy implication for governments translates to a need for more detailed cyber incident reporting across all critical sectors, the institute recommends. An issue is the “weakest link” problem. A well-resourced sector like finance might invest in strong security, but their dependence on, say, a national ID system means they are still vulnerable if that ID system is weak. The institute believes this calls for viewing DPI security as a public good. Improvements in one sector’s security, such as “hardened” digital ID protocols, could benefit other sectors’ security. Integrating security and development teams is recommended as is promoting a culture of shared cyber responsibility. Digital ID, government, healthcare, and finance must advance together on the cybersecurity maturity curve, the report says, as a weakness in one can undermine the public’s trust in all. The report also classifies CVEs by attack vectors: Network, Local, Adjacent Network, and Physical. Remote Network threats were dominant, particularly affecting finance and digital identity platforms. But Local and Physical attack surfaces, especially in health and government, are increasingly relevant due to on-premise systems and biometric interfaces, according to the Cyber Threat Observatory.


The Advantages Of Machine Learning For Large Restaurant Chains

Machine learning can not only assist in the present activities but contribute to steering long-term planning and development. Decision-makers can use these trends to notice opportunities to explore new markets, develop new products, or redistribute resources when they discover the patterns across the different locations, customer groups, and product categories. These insights dig deeper into the superficial data and reveal trends that might not have been apparent by just manual analysis. The capability to make data-driven decisions becomes even more significant with the growth of restaurant chains. Machine learning tools provide scalable insights that can be applied in parallel with the rest of the business objectives when combined with other technologies like a drive thru system or cloud-based analytics platforms. The opening of a new venue or the optimizing of an advertisement campaign, machine learning enables the management levels to have the information needed to make a decision with assured confidence and competence. ... Machine learning is transforming how major restaurant chains run their business, providing an unbeatable mix of accuracy, speed, and flexibility over their older equivalents. 


How Staff+ Engineers Can Develop Strategic Thinking

For risk and innovation, you need to understand what your organization values the most. Everybody has a culture memo and a set of tenets they follow, but these are part of unsaid rules, something that every new hire will learn by the first week of their onboarding, which is not written out loud and clear. In my experience, there are different kinds of organizations. Some care about execution, like results above everything, top line, bottom line. Others care about data-driven decision-making, customer sentiment, and keeping adapting. There are others who care about storytelling and relationships. What does this really mean? If you fail to influence, if you fail to tell a story about what ideas you have, what you're really trying to do, to build trust and relationships, you may not succeed in that environment, because it's not enough for you to be smart and knowing it all. You also need to know how to convey your ideas and influence people. When you talk about innovation, there are companies that really pride themselves on experimentation, staying ahead of the curve. You can look at this by how many of them have an R&D department, and how much funding they put into that. Then, what's their role in the open-source community, and how much they contribute towards it.


Legal and Policy Responses to Spyware: A Primer

There have been a number of international efforts to combat at least some aspects of the harms of commercial spyware. These include the US-led Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware and the Pall Mall Process, an ongoing multistakeholder undertaking focussed on this issue. So far, principles, norms, and calls for businesses to comply with the United Nations Guiding Principles on Business and Human Rights (UNGPs) have emerged, and Costa Rica has called for a full moratorium, but no well-orchestrated international action has been fully brought to fruition. However, private companies and individuals, regulators, and national or regional governments have taken action, employing a wide range of legal and regulatory tools. Guidelines and proposals have also been articulated by governmental and non-governmental organizations, but we will focus here on measures that are existent and, at least in theory, enforceable. While some attempts at combating spyware, like WhatsApp’s, have been effective, others have not. Analyzing the strengths and weaknesses of each approach is beyond the scope of this article, and, considering the international nature of spyware, what fails in one jurisdiction may be successful in another.


Red Teaming AI: The Build Vs Buy Debate

In order to red team your AI model, you need to have a deep understanding of the system you are protecting. Today’s models are complex multimodal, multilingual systems. One model might take in text, images, code, and speech with any single input having the potential to break something. Attackers know this and can easily take advantage. For example, a QR code might contain an obfuscated prompt injection or a roleplay conversation might lead to ethical bypasses. This isn’t just about keywords, but about understanding how intent hides beneath layers of tokens, characters, and context. The attack surface isn’t just large, it’s effectively infinite. ... Building versus buying is an age-old debate. Fortunately, the AI security space is maturing rapidly, and organizations have a lot of choices to implement from. After you have some time to evaluate your own criteria against Microsoft, OWASP and NIST frameworks, you should have a good idea of what your biggest risks are and key success criteria. After considering risk mitigation strategies, and assuming you want to keep AI turned on, there are some open-source deployment options like Promptfoo and Llama Guard, which provide useful scaffolding for evaluating model safety. Paid platforms like Lakera, Knostic, Robust Intelligence, Noma, and Aim are pushing the edge on real-time, content-aware security for AI, each offering slightly different tradeoffs in how they offer protection. 


The Impact of Quantum Decryption

There are two key quantum mechanical phenomena, superposition and entanglement, that enable qubits to operate fundamentally differently than classical bits. Superposition allows a qubit to exist in a probabilistic combination of both 0 and 1 states simultaneously, significantly increasing the amount of information a small number of qubits can hold.  ... Quantum decryption of data stolen using current standards could have pervasive impacts. Government secrets, more long-term data, and intellectual property remain at significant risk even if decrypted years after a breach. Decrypted government communications, documents, or military strategies could compromise national security. An organization’s competitive advantage could be undermined by trade secrets being exposed. Meanwhile, data such as credit card information will diminish over time due to expiration dates and the issuance of new cards. ... For organizations, the ability of quantum computers to decrypt previously stolen data could result in substantial financial losses due to data breaches, corporate espionage, and potential legal liabilities. The exposure of sensitive corporate information, such as trade secrets and strategic plans, could provide competitors with an unfair advantage, leading to significant financial harm. 


Don't let a crisis of confidence derail your data strategy

In an age of AI, the options that range from on-premise facilities to colocation, or public, private and hybrid clouds, are business-critical decisions. These decisions are so important because such choices impact the compliance, cost efficiency, scalability, security, and agility that can make or break a business. In the face of such high stakes, it is hardly surprising that confidence is the battleground on which deals for digital infrastructure are fought. ... Commercially, Total Cost of Ownership (TCO) has become another key factor. Public cloud was heavily promoted on the basis of lower upfront costs. However, businesses have seen the "pay-as-you-go" model lead to escalating operational expenses. In contrast, businesses have seen the cost of colocation and private cloud become more predictable and attractive for long-term investment. Some reports suggest that at scale, colocation can offer significant savings over public cloud, while private cloud can also reduce costs by eliminating hardware procurement and management. Another shift in confidence has been that public cloud no longer guarantees the easiest path to growth. Public cloud has traditionally excelled in rapid, on-demand scalability. This agility was a key driver for adoption, as businesses sought to expand quickly.


The Anti-Metrics Era of Developer Productivity

The need to measure everything truly spiked during COVID when we started working remotely, and there wasn’t a good way to understand how work was done. Part of this also stemmed from management’s insecurities about understanding what’s going on in software engineering. However, when surveyed about the usefulness of developer productivity metrics, most leaders admit that the metrics they track are not representative of developer productivity and tend to conflate productivity with experience. And now that most of the code is written by AI, measuring productivity the same way makes even less sense. If AI improves programming effort by 30%, does that mean we get 30% more productivity?” ... Whether you call it DevEx or platform engineering, the lack of friction equals happy developers, which equals productive developers. In the same survey, 63% of developers said developer experience is important for their job satisfaction. ... Instead of building shiny dashboards, engineering leads should focus on developer experience and automated workflows across the entire software development life cycle: development, code reviews, builds, tests and deployments. This means focusing on solving real developer problems instead of just pointing at the problems.


Why banks’ tech-first approach leaves governance gaps

Integration begins with governance. When cybersecurity is properly embedded in enterprise-wide governance and risk management, security leaders are naturally included in key forums, including strategy discussions, product development, and M&A decision making. Once at the table, the cybersecurity team must engage productively. They must identify risks, communicate them in business terms AND collaborate with the business to develop solutions that enable business goals while operating within defined risk appetites. The goal is to make the business successful, in a safe and secure manner. Cyber teams that focus solely on highlighting problems risk being sidelined. Leaders must ensure their teams are structured and resourced to support business goals, with appropriate roles and encouragement of creative risk mitigation approaches. ... Start by ensuring there is a regulatory management function that actively tracks and analyzes emerging requirements. These updates should be integrated into the enterprise risk management (ERM) framework and governance processes—not handled in isolation. They should be treated no differently than any other new business initiatives. ... Ultimately, aligning cyber governance with regulatory change requires cross-functional collaboration, early engagement, and integration into strategic risk processes, not just technical or compliance checklists.