Daily Tech Digest - June 19, 2025


Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis


Introduction to Cloud Native Computing

In cloud native systems, security requires a different approach compared to traditional architectures. In a distributed system, the old “castle and moat” model of creating secure perimeter around vital systems, applications, APIs and data is not feasible. In a cloud native architecture, the “castles” are distributed across various environments — public and private cloud, on-prem — and they may pop up and disappear in seconds. ... DevSecOps integrates security practices within the DevOps process, ensuring that security is a shared responsibility and is considered at every stage of the software development life cycle. Implementing DevSecOps in a cloud native context helps organizations maintain robust security postures while capitalizing on the agility and speed of cloud native development. ... Cloud native applications often operate in dynamic environments that are subject to rapid changes. By adopting the following strategies and practices, cloud native applications can effectively scale in response to user demands and environmental changes, ensuring high performance and user satisfaction. ... By strategically adopting hybrid and multicloud approaches and effectively managing their complexities, organizations can significantly enhance their agility, resilience, and operational efficiency in the cloud native landscape. While hybrid and multicloud strategies offer benefits, they also introduce complexity in management. 


How a New CIO Can Fix the Mess Left by Their Predecessor

The new CIO should listen to IT teams, business stakeholders, and end-users to uncover pain points and achieve quick wins that will build credibility, says Antony Marceles, founder of Pumex, a software development and technology integration company in an online interview. Whether to rebuild or repair depends on the architecture's integrity. "Sometimes, patching legacy systems only delays the inevitable, but in other cases smart triage can buy time for a thoughtful transformation." ... Support can often come from unconventional corners, such as high-performing team leads, finance partners, or external advisors, all of whom may have experienced their own transitions, Marceles says. "The biggest mistake is trying to fix everything at once or imposing top-down change without context," he notes. "A new CIO needs to balance urgency with empathy, understanding that cleaning up someone else’s mess is as much about culture repair as it is about tech realignment." ... When you inherit a messy situation, it's both a technical and leadership challenge, de Silva says. "The best thing you can do is lead with transparency, make thoughtful decisions, and rebuild confidence across the organization." People want to see steady hands and clear thinking, he observes. "That goes a long way in these situations."


Every Business Is Becoming An AI Company. Here's How To Do It Right

“The extent to which we can use AI to augment the curious, driven and collaborative tendencies of our teams, the more optimistic we can be about their ability to develop new, unimagined innovations that open new streams of revenue,” Aktar writes. Otherwise, executives may expect more from employees without considering that new tech tools require training to use well, and troubleshooting to maintain. Plus, automated production routinely requires human intervention to protect quality. If executives merely expect teams to churn out more work — seeing AI tools and services as a way to reduce headcount — the result may be additional work and lower morale. “Workers report spending more time reviewing AI-generated content and learning tool complexities than the time these tools supposedly save,” writes Forbes contributor Luis Romero, the founder of GenStorm AI. ... “What draws people in now isn’t just communication. It’s the sense that someone notices effort before asking for output,” writes Forbes contributor Vibhas Ratanjee, a Gallup researcher who specializes in leadership development. “Most internal tools are built to save time. Fewer steps. Smoother clicks. But frictionless doesn’t always mean thoughtful. When we remove human pauses, we risk removing the parts that build connection.”


Four Steps for Turning Data Clutter into Competitive Power: Your Sovereign AI and Data Blueprint

The ability to act on data in real-time isn’t just beneficial—it’s a necessity in today’s fast-paced world. Accenture reports that companies able to leverage real-time data are 2.5 times more likely to outperform competitors. Consider Uber, which adjusts its pricing dynamically based on real-time factors like demand, traffic, and weather conditions. This near-instant capability drives business success by aligning offerings with evolving customer needs. Companies stand a lot to gain by giving frontline employees the ability to make informed, real-time decisions. But in order to do so, they need a near-instant understanding of customer data. This means the data needs to flow seamlessly across domains so that real-time models can provide timely information to help workers make impactful decisions. ... The success of AI initiatives depends on the ability to access, govern, and process at scale. Therefore, the success of an enterprise’s AI initiatives hinges on its ability to access its data anywhere, anytime—while maintaining compliance. These new demands require a governance framework that operates across environments—from on-premise to private and public clouds—while maintaining flexibility and compliance every step of the way. Companies like Netflix, which handles billions of daily data events, rely on sophisticated data architectures to support AI-driven recommendations.


Third-party risk management is broken — but not beyond repair

The consequences of this checkbox culture extend beyond ineffective risk management and have led to “questionnaire fatigue” among vendors. In many cases, security questionnaires are delivered as one-size-fits-all templates, an approach that floods recipients with static, repetitive questions, many of which aren’t relevant to their specific role or risk posture. Without tailoring or context, these reviews become procedural exercises rather than meaningful evaluations. The result is surface-level engagement, where companies appear to conduct due diligence but in fact miss critical insights. Risk profiles end up looking complete on paper while failing to capture the real-world complexity of the threats they’re meant to address. ... To break away from this harmful cycle, organizations must overhaul their approach to TPRM from the ground up by adopting a truly risk-based approach that moves beyond simple compliance. This requires developing targeted, substantive security questionnaires that prioritize depth over breadth and get to the heart of a vendor’s security practices. Rather than sending out blanket questionnaires, organizations should create assessments that are specific, relevant, and probing, asking questions that genuinely reveal the strengths and weaknesses of a vendor’s cybersecurity posture. This emphasis on quality over quantity in assessments allows organizations to move away from treating TPRM as a paperwork exercise and back toward its original intent: effective risk management.


The rise of agentic AI and what it means for ANZ enterprise

Agentic AI has unique benefits, but it also presents unique risks, and as more organisations adopt agentic AI, they're discovering that robust data governance— the establishment of policies, roles, and technology to manage and safeguard an organization's data assets—is essential when it comes to ensuring that these systems function securely and effectively. ... Effective governance is on the rise because it helps address critical AI-related security and productivity issues like preventing data breaches and reducing AI-related errors. Without strong data governance measures, agents may inadvertently expose sensitive information or make flawed autonomous decisions. With strong data governance measures, organisations can proactively safeguard their data by implementing comprehensive governance policies and deploying technologies to monitor AI runtime environments. This not only enhances security but also ensures that agentic AI tools operate optimally, delivering significant value with minimal risk. ... To grapple with these and other AI-related challenges, Gartner now recommends that organisations apply its AI TRiSM (trust, risk, and security management) frameworks to their data environments. Data and information governance are a key part of this framework, along with AI governance and AI runtime inspection and enforcement technology. 


Choosing a Clear Direction in the Face of Growing Cybersecurity Demands

CISO’s must balance multiple priorities with many facing overwhelming workloads, budget constraints, insufficient board-level support and unreasonable demands. From a revenue perspective they must align cybersecurity strategies with business goals, ensuring that security investments support revenue generation and protect critical assets. They’re under pressure to automate repetitive tasks, consolidating and streamlining processes while minimizing downtime and disruption. And then there is AI and the potential benefits it may bring to the security team and to the productivity of users. But all the while remembering that with AI, we have put technology in the hands of users, who have not traditionally been good with tech, because we’ve made it easier and quicker than ever before. ... They need to choose one key goal rather than trying to do everything. Do I want to “go faster” and innovate? Or do I want to become a more efficient business and “do more” with less Whichever they opt for, they also need to figure out all the different tools to use to accomplish that goal. This is where cybersecurity automation and AI comes into play. Using AI, machine learning, and automated tools to detect, prevent, and respond to cyber threats without human intervention, CISOs can streamline their security operations, reduce manual workload, and improve response times to cyberattacks and, in effect, do more with less.


Will AI replace humans at work? 4 ways it already has the edge

There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos. AI models can do the job blazingly fast, a capability with important industrial applications. ... AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally. AI models can do this for every single product, TV show, website, and internet user. ... AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. 


8 steps to ensure data privacy compliance across borders

Given the conflicting and evolving nature of global privacy laws, a one-size-fits-all approach is ineffective. Instead, companies should adopt a baseline standard that can be applied globally. “We default to the strictest applicable standard,” says Kory Fong, VP of engineering at Private AI in Toronto. “Our baseline makes sure we can flexibly adapt to regional laws without starting from scratch each time a regulation changes.” ... “It’s about creating an environment where regulatory knowledge is baked into day-to-day decision making,” he says. “We regularly monitor global policy developments and involve our privacy experts early in the planning process so we’re prepared, not just reactive.” Alex Spokoiny, CIO at Israel’s Check Point Software Technologies, says to stay ahead of emerging regulations, his company has moved away from rigid policies to a much more flexible, risk-aware approach. “The key is staying close to what data we collect, where it flows, and how it’s used so we can adjust quickly when new rules come up,” he says. ... Effective data privacy management requires a multidisciplinary approach, involving IT, legal, compliance, and product teams. “Cross-functional collaboration is built into our steering teams,” says Lexmark’s Willett. “Over the years, we’ve fundamentally transformed our approach to data governance by establishing the Enterprise Data Governance and Ethics community.”


Leading without titles: The rise of influence-driven leadership

Leadership isn’t about being in charge—it’s about showing up when it matters, listening when it's hardest, and holding space when others need it most. It’s not about corner offices or formal titles—it’s about quiet strength, humility, and the courage to uplift. The leaders who will shape the future are not defined by their job descriptions, but by how they make others feel—especially in moments of uncertainty. The associate who lifts a teammate’s spirits, the manager who creates psychological safety, the engineer who ensures quieter voices are heard—these are the ones redefining leadership through compassion, not control. As Simon Sinek reminds us, "Leadership is not about being in charge. It is about taking care of those in your charge." Real leadership leaves people better than it found them. It inspires not by authority, but by action. It earns loyalty not through power, but through presence. According to Gartner (2024), 74% of employees are more likely to stay in organisations where leadership is approachable, transparent, and grounded in shared values—not status. Let’s recognise these leaders. Let’s build cultures that reward empathy, connection, and quiet courage. Because true leadership makes people feel seen—not small.

Daily Tech Digest - June 18, 2025


Quote for the day:

"Build your own dreams, or someone else will hire you to build theirs." -- Farrah Gray



Agentic AI adoption in application security sees cautious growth

The study highlights a considerable proportion of the market preparing for broader adoption, with nearly 50% of respondents planning to integrate agentic AI tools within the next year. The incremental approach taken by organisations reflects a degree of caution, particularly around the concept of granting AI systems the autonomy to make decisions independently.  ... The survey results illustrate the impact agentic AI could have on software development pipelines. Thirty percent of respondents believe integrating agentic AI into continuous integration and continuous deployment (CI/CD) pipelines would significantly enhance the process. The increased speed and frequency of code deployment-termed "vibe coding" in industry parlance-has led to faster development cycles. This acceleration does not necessarily alter the ratio of application security personnel to developers, but it can create the impression of a widening gap, with security teams struggling to keep up. ... Key findings from the survey reveal varied perceptions on the utility of agentic AI for security teams. Forty-four percent of those surveyed believe agentic AI's greatest benefit lies in supporting the identification, prioritisation, and remediation of vulnerabilities. 


Why Conventional Disaster Recovery Won’t Save You from Ransomware

Cyber incident recovery planning means taking measures that mitigate the unique challenges of ransomware recovery, such as: Immutable, offsite backups. These backups are stored offsite to minimise the risk that threat actors will be able to destroy backup data. While clean-room recovery environments serve as a secondary environment where workloads can be spun back up following a ransomware attack. This makes it possible to keep the original environment intact for forensics purposes while still performing rapid recovery. Finally, to avoid replicating the malware that led to the ransomware breach, cyber incident recovery must include a process for finding and extricating malware from backups prior to recovery. The unpredictable nature of ransomware attacks means that cyber incident recovery operations must be flexible enough to enable a nimble reaction to unexpected circumstances, like redeploying individual applications instead of simply replicating an entire server image if the server was compromised but the apps were not. ... Maintaining these capabilities can be challenging, even for organisations with extensive IT resources. In addition to the operational complexity of having to manage a secondary, clean-room recovery site and formulate intricate ransomware recovery plans, it’s costly to acquire and maintain the infrastructure necessary to ensure successful recovery.


Cybersecurity takes a big hit in new Trump executive order

Specific orders Trump dropped or relaxed included ones mandating (1) federal agencies and contractors adopt products with quantum-safe encryption as they become available in the marketplace, (2) a stringent Secure Software Development Framework (SSDF) for software and services used by federal agencies and contractors, (3) the adoption of phishing-resistant regimens such as the WebAuthn standard for logging into networks used by contractors and agencies, (4) the implementation new tools for securing Internet routing through the Border Gateway Protocol, and (5) the encouragement of digital forms of identity. ... Critics said the change will allow government contractors to skirt directives that would require them to proactively fix the types of security vulnerabilities that enabled the SolarWinds compromise. "That will allow folks to checkbox their way through 'we copied the implementation' without actually following the spirit of the security controls in SP 800-218," Jake Williams, a former hacker for the National Security Agency who is now VP of research and development for cybersecurity firm Hunter Strategy, said in an interview. "Very few organizations actually comply with the provisions in SP 800-218 because they put some onerous security requirements on development environments, which are usually [like the] Wild West."


Mitigating AI Threats: Bridging the Gap Between AI and Legacy Security

AI systems, particularly those with adaptive or agentic capabilities, evolve dynamically, unlike static legacy tools built for deterministic environments. This inconsistency renders systems vulnerable to AI-focused attacks, such as data poisoning, prompt injection, model theft, and agentic subversion—attacks that often evade traditional defenses. Legacy tools struggle to detect these attacks because they don’t followpredictable patterns, requiring more adaptive, AI-specific security solutions. Human flaws and behavior only worsen these weaknesses; insider attacks, social engineering, and insecure interactions with AI systems leave organizations vulnerable to exploitation. ... AI security frameworks like NIST’s AI Risk Management Framework incorporate human risk management to ensure that AI security practices align with organizational policies. Also modeled on the fundamental C.I.A. triad, the “manage” phase specifically includes employee training to uphold AI security principles across teams. For effective use of these frameworks, cross-departmental coordination is required. There needs to be collaboration among security staff, data scientists, and human resource practitioners to formulate plans that ensure AI systems are protected while encouraging their responsible and ethical use.


Modernizing your approach to governance, risk and compliance

Historically, companies treated GRC as an obligation to meet–and if legacy solutions were effective enough in meeting GRC requirements, organizations struggled to make a case for modernization. A better way to think about GRC is a means of maximizing the value for your company by tying out those efforts to unlock revenue and increased customer trust, and not simply by reducing risks, passing audits, and staying compliant. GRC modernization can open the door to a host of other benefits, such as increased velocity of operations and an enhanced team member (both GRC team members and internal control / risk owners alike) experience. For instance, for businesses that need to demonstrate compliance to customers as part of third-party or vendor risk management initiatives, the ability to collect evidence and share it with clients faster isn’t just a step toward risk mitigation. These efforts also help close more deals and speed up deal cycle time and velocity. When you view GRC as an enabler of business value rather than a mere obligation, the value of GRC modernization comes into much clearer focus. This vision is what businesses should embrace as they seek to move away from legacy GRC strategies that don’t waste time and resources, but fundamentally reduce their ability to stay competitive.


What is Cyberespionage? A Detailed Overview

Cyber espionage involves the unauthorized access to confidential information, typically to gain strategic, political, or financial advantage. This form of espionage is rooted in the digital world and is often carried out by state-sponsored actors or independent hackers. These attackers infiltrate computer systems, networks, or devices to steal sensitive data. Unlike cyber attacks, which primarily target financial gain, cyber espionage is focused on intelligence gathering, often targeting government agencies, military entities, corporations, and research institutions. ... One of the primary goals of cyber espionage is to illegally access trade secrets, patents, blueprints, and proprietary technologies. Attackers—often backed by foreign companies or governments—aim to acquire innovations without investing in research and development. Such breaches can severely damage a competitor’s advantage, leading to billions in lost revenue and undermining future innovation. ... Governments and other organizations often use cyber espionage to gather intelligence on rival nations or political opponents. Cyber spies may breach government networks or intercept communications to secretly access sensitive details about diplomatic negotiations, policy plans, or internal strategies, ultimately gaining a strategic edge in political affairs.


European Commission Urged to Revoke UK Data Adequacy Decision Due to Privacy Concerns

The items in question include sweeping new exemptions that allow law enforcement and government agencies to access personal data, loosening of regulations governing automated decision-making, weakening restrictions on data transfers to “third countries” that are otherwise considered inadequate by the EU, and increasing the possible ways in which the UK government would have power to interfere with the regular work of the UK Data Protection Authority. EDRi also cites the UK Border Security, Asylum and Immigration Bill as a threat to data adequacy, which has passed the House of Commons and is currently before the House of Lords. The bill’s terms would broaden intelligence agency access to customs and border control data, and exempt law enforcement agencies from UK GDPR terms. It also cites the UK’s Public Authorities (Fraud, Error and Recovery) Bill, currently scheduled to go before the House of Lords for review, which would allow UK ministers to order that bank account information be made available without demonstrating suspicion of wrongdoing. The civil society group also indicates that the UK ICO would likely become less independent under the terms of the UK Data Bill, which would give the UK government expanded ability to hire, dismiss and adjust the compensation of all of its board members.


NIST flags rising cybersecurity challenges as IT and OT systems increasingly converge through IoT integration

Connectivity can introduce significant challenges for organizations attempting to apply cybersecurity controls to OT and certain IoT products. OT equipment may use modern networking technologies like Ethernet or Wi-Fi, but is often not designed to connect to the internet. In many cases, OT and IoT systems prioritize trustworthiness aspects such as safety, resiliency, availability, and cybersecurity differently than traditional IT equipment, which can complicate control implementation. While IoT devices can sometimes replace OT equipment, they often introduce different or significantly expanded functionality that organizations must carefully evaluate before moving forward with replacement. Organizations should consider how other aspects of trustworthiness, such as safety, privacy, and resiliency, factor into their approach to cybersecurity. It is also important to address how they will manage the differences in expected service life between IT, OT, and IoT systems and their components. The agency identified that federal agencies are actively deploying IoT technologies to enhance connectivity, security, environmental monitoring, transportation, healthcare, and industrial automation.


How Organizations Can Cross the Operational Chasm

A fundamental shift in operational capability is reshaping the competitive landscape, creating a clear distinction between market leaders and laggards. This growing divide isn’t merely about technological adoption — it represents a strategic inflection point that directly affects market position, customer retention and shareholder value. ... The message is clear: Organizations must bridge this divide to remain competitive. Crossing this chasm requires more than incremental improvements. It demands a fundamental transformation in operational approach, embracing AI and automation to build the resilience necessary for today’s digital landscape. ... Digital operations resiliency is a proactive approach to safeguarding critical business services by reducing downtime and ensuring seamless customer experiences. It focuses on minimizing operational disruptions, protecting brand reputation and mitigating business risk through standardized incident management, automation and compliance with service-level agreements (SLAs). Real-time issue resolution, efficient workflows and continuous improvement are put into place to ensure operational efficiency at scale, helping to provide uninterrupted service delivery. 


7 trends shaping digital transformation in 2025 - and AI looms large

Poor integration is the common theme behind all these challenges. If agents are unable to access the data and capabilities they need to understand user queries, find a solution, and resolve these issues for them, their impact is severely limited. As many as 95% of IT leaders claim integration issues are a key factor that impedes AI adoption. ... The surge in demand for AI capabilities will exacerbate the problem of API and agent sprawl, which occurs when different teams and departments build integrations and automations without any centralized management or coordination. Already, an estimated quarter of APIs are ungoverned. Three-fifths of IT and security practitioners said their organizations had at least one data breach due to API exploitation, according to a 2023 study from the Ponemon Institute and Traceable. ... Robotic process automation (RPA) is already helping organizations enhance efficiency, cut operational costs, and reduce manual toil by up to two hours for each employee every week in the IT department alone. These benefits have driven a growing interest in RPA. In fact, we could see near-universal adoption of the technology by 2028, according to Deloitte. In 2025, organizations will evolve their use of RPA technology to reduce the need for humans at every stage of the operational process. 

Daily Tech Digest - June 17, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley



Understanding how data fabric enhances data security and governance

“The biggest challenge is fragmentation; most enterprises operate across multiple cloud environments, each with its own security model, making unified governance incredibly complex,” Dipankar Sengupta, CEO of Digital Engineering Services at Sutherland Global told InfoWorld. ... Shadow IT is also a persistent threat and challenge. According to Sengupta, some enterprises discover nearly 40% of their data exists outside governed environments. Proactively discovering and onboarding those data sources has become non-negotiable. ... A data fabric deepens organizations’ understanding and control of their data and consumption patterns. “With this deeper understanding, organizations can easily detect sensitive data and workloads in potential violation of GDPR, CCPA, HIPAA and similar regulations,” Calvesbert commented. “With deeper control, organizations can then apply the necessary data governance and security measures in near real time to remain compliant.” ... Data security and governance inside a data fabric shouldn’t just be about controlling access to data, it should also come with some form of data validation. The cliched saying “garbage-in, garbage-out” is all too true when it comes to data. After all, what’s the point of ensuring security and governance on data that isn’t valid in the first place?


AI isn’t taking your job; the big threat is a growing skills gap

While AI can boost productivity by handling routine tasks, it can’t replace the strategic roles filled by skilled professionals, Vianello said. To avoid those kinds of issues, agencies — just like companies — need to invest in adaptable, mission-ready teams with continuously updated skills in cloud, cyber, and AI. The technology, he said, should augment – not replace — human teams, automating repetitive tasks while enhancing strategic work. Success in high-demand tech careers starts with in-demand certifications, real-world experience, and soft skills. Ultimately, high-performing teams are built through agile, continuous training that evolves with the tech, Vianello said. “We train teams to use AI platforms like Copilot, Claude and ChatGPT to accelerate productivity,” Vianello said. “But we don’t stop at tools; we build ‘human-in-the-loop’ systems where AI augments decision-making and humans maintain oversight. That’s how you scale trust, performance, and ethics in parallel.” High-performing teams aren’t born with AI expertise; they’re built through continuous, role-specific, forward-looking education, he said, adding that preparing a workforce for AI is not about “chasing” the next hottest skill. “It’s about building a training engine that adapts as fast as technology evolves,” he said.


Got a new password manager? Don't leave your old logins exposed in the cloud - do this next

Those built-in utilities might have been good enough for an earlier era, but they aren't good enough for our complex, multi-platform world. For most people, the correct option is to switch to a third-party password manager and shut down all those built-in password features in the browsers and mobile devices you use. Why? Third-party password managers are built to work everywhere, with a full set of features that are the same (or nearly so) across every device. After you make that switch, the passwords you saved previously are left behind in a cloud service you no longer use. If you regularly switch between browsers (Chrome on your Mac or Windows PC, Safari on your iPhone), you might even have multiple sets of saved passwords scattered across multiple clouds. It's time to clean up that mess. If you're no longer using a password manager, it's prudent to track down those outdated saved passwords and delete them from the cloud. I've studied each of the four leading browsers: Google Chrome, Apple's Safari, Microsoft Edge, and Mozilla Firefox. Here's how to find the password management settings for each one, export any saved passwords to a safe place, and then turn off the feature. As a final step, I explain how to purge saved passwords and stop syncing.


AI and technical debt: A Computer Weekly Downtime Upload podcast

Given that GenAI technology hit the mainstream with GPT 4 two years ago, Reed says: “It was like nothing ever before.” And while the word “transformational” tends to be generously overused in technology he describes generative AI as “transformational with a capital T.” But transformations are not instant and businesses need to understand how to apply GenAI most effectively, and figure out where it does and does not work well. “Every time you hear anything with generative AI, you hear the word journey and we're no different,” he says. “We are trying to understand it. We're trying to understand its capabilities and understand our place with generative AI,” Reed adds. Early adopters are keen to understand how to use GenAI in day-to-day work, which, he says, can range from being an AI-based work assistant or a tool that changes the way people search for information to using AI as a gateway to the heavy lifting required in many organisations. He points out that bet365 is no different. “We have a sliding scale of ambition, but obviously like anything we do in an organisation of this size, it must be measured, it must be understood and we do need to be very, very clear what we're using generative AI for.” One of the very clear use cases for GenAI is in software development. 


Cloud Exodus: When to Know It's Time to Repatriate Your Workloads

Because of the inherent scalability of cloud resources, the cloud makes a lot of sense when the compute, storage, and other resources your business needs fluctuate constantly in volume. But if you find that your resource consumption is virtually unchanged from month to month or year to year, you may not need the cloud. You may be able to spend less and enjoy more control by deploying on-prem infrastructure. ... Cloud costs will naturally fluctuate over time due to changes in resource consumption levels. It's normal if cost increases correlate with usage increases. What's concerning, however, is a spike in cloud costs that you can't tie to consumption changes. It's likely in that case that you're spending more either because your cloud service provider raised its prices or your cloud environment is not optimized from a cost perspective. ... You can reduce latency (meaning the delay between when a user requests data on the network and when it arrives) on cloud platforms by choosing cloud regions that are geographically proximate to your end users. But that only works if your users are concentrated in certain areas, and if cloud data centers are available close to them. If this is not the case, you are likely to run into latency issues, which could dampen the user experience you deliver. 


The future of data center networking and processing

The optical-to-electrical conversion that is performed by the optical transceiver is still needed in a CPO system, but it moves from a pluggable module located at the faceplate of the switching equipment to a small chip (or chiplet) that is co-packaged very closely to the target ICs inside the box. Data center chipset heavyweights Broadcom and Nvidia have both announced CPO-based data center networking products operating at 51.2 and 102.4 Tb/s. ... Early generation CPO systems, such as those announced by Broadcom and Nvidia for Ethernet switching, make use of high channel count fiber array units (FAUs) that are designed to precisely align the fiber cores to their corresponding waveguides inside the PICs. These FAUs are challenging to make as they require high fiber counts, mixed single-mode (SM) and polarization maintaining (PM) fibers, integration of micro-optic components depending on the fiber-to-chip coupling mechanism, highly precise tolerance alignments, CPO-optimized fibers and multiple connector assemblies.  ... In addition to scale and cost benefits, extreme densities can be achieved at the edge of the PIC by bringing the waveguides very close together, down to about 30µm, which is far more than what can be achieved with even the thinnest fibers. Next generation fiber-to-chip coupling will enable GPU optics – which will require unprecedented levels of density and scale.


Align AI with Data, Analytics and Governance to Drive Intelligent, Adaptive Decisions and Actions Across the Organisation

Unlocking AI’s full business potential requires building executive AI literacy. They must be educated on AI opportunities, risks and costs to make effective, future-ready decisions on AI investments that accelerate organisational outcomes. Gartner recommends D&A leaders introduce experiential upskilling programs for executives, such as developing domain-specific prototypes to make AI tangible. This will lead to greater and more appropriate investment in AI capabilities. ... Using synthetic data to train AI models is now a critical strategy for enhancing privacy and generating diverse datasets. However, complexities arise from the need to ensure synthetic data accurately represents real-world scenarios, scales effectively to meet growing data demand and integrates seamlessly with existing data pipelines and systems. “To manage these risks, organisations need effective metadata management,” said Idoine. “Metadata provides the context, lineage and governance needed to track, verify and manage synthetic data responsibly, which is essential to maintaining AI accuracy and meeting compliance standards.” ... Building GenAI models in-house offers flexibility, control and long-term value that many packaged tools cannot match. As internal capabilities grow, Gartner recommends organisations adopt a clear framework for build versus buy decisions. 


Do microServices' Benefits Supersede Their caveats? A Conversation With Sam Newman

A microservice is one of those where it is independently deployable so I can make a change to it and I can roll out new versions of it without having to change any other part of my system. So things like avoiding shared databases are really about achieving that independent deployability. And it's a really simple idea that can be quite easy to implement if you know about it from the beginning. It can be difficult to implement if you're already in a tangled mess. And that idea of independent deployability has interesting benefits because the fact that something is independently deployable is obviously useful because it's low impact releases, but there's loads of other benefits that start to flow from that. ... The vast majority of people who tell me they've scaling issues often don't have them. They could solve their scaling issues with a monolith, no problem at all, and it would be a more straightforward solution. They're typically organizational scale issues. And so, for me, what the world needs from our IT's product-focused, outcome-oriented, and more autonomous teams. That's what we need, and microservices are an enabler for that. Having things like team topologies, which of course, although the DevOps topology stuff was happening around the time of my first edition of my book, that being kind of moved into the team topology space by Matthew and Manuel around the second edition again sort of helps kind of crystallize a lot of those concepts as well.


Why Businesses Must Upgrade to an AI-First Connected GRC System

Adopting a connected GRC solution enables organizations to move beyond siloed operations by bringing risk and compliance functions onto a single, integrated platform. It also creates a unified view of risks and controls across departments, bringing better workflows and encouraging collaboration. With centralized data and shared visibility, managing complex, interconnected risks becomes far more efficient and proactive. In fact, this shift toward integration reflects a broader trend that is seen in the India Regulatory Technology Business Report 2024–2029 findings, which highlight the growing adoption of compliance automation, AI, and machine learning in the Indian market. The report points to a future where GRC is driven by data, merging operations, technology, and control into a single, intelligent framework. ... An AI-first, connected GRC solution takes the heavy lifting out of compliance. Instead of juggling disconnected systems and endless updates, it brings everything together, from tracking regulations to automating actions to keeping teams aligned. For compliance teams, that means less manual work and more time to focus on what matters. ... A smart, integrated GRC solution brings everything into one place. It helps organizations run more smoothly by reducing errors and simplifying teamwork. It also means less time spent on admin and better use of people and resources where they are really needed.


The Importance of Information Sharing to Achieve Cybersecurity Resilience

Information sharing among different sectors predominantly revolves around threats related to phishing, vulnerabilities, ransomware, and data breaches. Each sector tailors its approach to cybersecurity information sharing based on regulatory and technological needs, carefully considering strategies that address specific risks and identify resolution requirements. However, for the mobile industry, information sharing relating to cyberattacks on the networks themselves and misuse of interconnection signalling are also the focus of significant sharing efforts. Industries learn from each other by adopting sector-specific frameworks and leveraging real-time data to enhance their cybersecurity posture. This includes real-time sharing of indicators of compromise (IoCs) and the techniques, tactics, and procedures (TTPs) associated with phishing campaigns. An example of this is the recently launched Stop Scams UK initiative, bringing together tech, telecoms and finance industry leaders, who are going to share real-time data on fraud indicators to enhance consumer protection and foster economic security. This is an important development, as without cross-industry information sharing, determining whether a cybersecurity attack campaign is sector-specific or indiscriminate becomes difficult. 

Daily Tech Digest - June 16, 2025


Quote for the day:

"A boss has the title, a leader has the people." -- Simon Sinek


How CIOs are getting data right for AI

Organizations that have taken steps to better organize their data are more likely to possess data maturity, a key attribute of companies that succeed with AI. Research firm IDC defines data maturity as the use of advanced data quality, cataloging and metadata, and data governance processes. The research firm’s Office of the CDO Survey finds firms with data maturity are far more likely than other organizations to have generative AI solutions in production. ... “We have to be mindful of what we put into public data sets,” says Yunger. With that caution in mind, Servier has built a private version of ChatGPT on Microsoft Azure to ensure that teams benefit from access to AI tools while protecting proprietary information and maintaining confidentiality. The gen AI implementation is used to speed the creation of internal documents and emails, Yunger says. In addition, personal data that might crop up in pharmaceutical trials must be treated with the utmost caution to comply with the European Union’s AI Act,  ... To achieve what he calls “sustainable AI,” AES’s Reyes counsels the need to strike a delicate balance: implementing data governance, but in a way that does not disrupt work patterns. He advises making sure everyone at your company understands that data must be treated as a valuable asset: With the high stakes of AI in play, there is a strong reason it must be accurately cataloged and managed.


Alan Turing Institute reveals digital identity and DPI risks in Cyber Threats Observatory Workshop

The trend indicates that threat actors could be targeting identity mechanisms such as authentication, session management, and role-based access systems. The policy implication for governments translates to a need for more detailed cyber incident reporting across all critical sectors, the institute recommends. An issue is the “weakest link” problem. A well-resourced sector like finance might invest in strong security, but their dependence on, say, a national ID system means they are still vulnerable if that ID system is weak. The institute believes this calls for viewing DPI security as a public good. Improvements in one sector’s security, such as “hardened” digital ID protocols, could benefit other sectors’ security. Integrating security and development teams is recommended as is promoting a culture of shared cyber responsibility. Digital ID, government, healthcare, and finance must advance together on the cybersecurity maturity curve, the report says, as a weakness in one can undermine the public’s trust in all. The report also classifies CVEs by attack vectors: Network, Local, Adjacent Network, and Physical. Remote Network threats were dominant, particularly affecting finance and digital identity platforms. But Local and Physical attack surfaces, especially in health and government, are increasingly relevant due to on-premise systems and biometric interfaces, according to the Cyber Threat Observatory.


The Advantages Of Machine Learning For Large Restaurant Chains

Machine learning can not only assist in the present activities but contribute to steering long-term planning and development. Decision-makers can use these trends to notice opportunities to explore new markets, develop new products, or redistribute resources when they discover the patterns across the different locations, customer groups, and product categories. These insights dig deeper into the superficial data and reveal trends that might not have been apparent by just manual analysis. The capability to make data-driven decisions becomes even more significant with the growth of restaurant chains. Machine learning tools provide scalable insights that can be applied in parallel with the rest of the business objectives when combined with other technologies like a drive thru system or cloud-based analytics platforms. The opening of a new venue or the optimizing of an advertisement campaign, machine learning enables the management levels to have the information needed to make a decision with assured confidence and competence. ... Machine learning is transforming how major restaurant chains run their business, providing an unbeatable mix of accuracy, speed, and flexibility over their older equivalents. 


How Staff+ Engineers Can Develop Strategic Thinking

For risk and innovation, you need to understand what your organization values the most. Everybody has a culture memo and a set of tenets they follow, but these are part of unsaid rules, something that every new hire will learn by the first week of their onboarding, which is not written out loud and clear. In my experience, there are different kinds of organizations. Some care about execution, like results above everything, top line, bottom line. Others care about data-driven decision-making, customer sentiment, and keeping adapting. There are others who care about storytelling and relationships. What does this really mean? If you fail to influence, if you fail to tell a story about what ideas you have, what you're really trying to do, to build trust and relationships, you may not succeed in that environment, because it's not enough for you to be smart and knowing it all. You also need to know how to convey your ideas and influence people. When you talk about innovation, there are companies that really pride themselves on experimentation, staying ahead of the curve. You can look at this by how many of them have an R&D department, and how much funding they put into that. Then, what's their role in the open-source community, and how much they contribute towards it.


Legal and Policy Responses to Spyware: A Primer

There have been a number of international efforts to combat at least some aspects of the harms of commercial spyware. These include the US-led Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware and the Pall Mall Process, an ongoing multistakeholder undertaking focussed on this issue. So far, principles, norms, and calls for businesses to comply with the United Nations Guiding Principles on Business and Human Rights (UNGPs) have emerged, and Costa Rica has called for a full moratorium, but no well-orchestrated international action has been fully brought to fruition. However, private companies and individuals, regulators, and national or regional governments have taken action, employing a wide range of legal and regulatory tools. Guidelines and proposals have also been articulated by governmental and non-governmental organizations, but we will focus here on measures that are existent and, at least in theory, enforceable. While some attempts at combating spyware, like WhatsApp’s, have been effective, others have not. Analyzing the strengths and weaknesses of each approach is beyond the scope of this article, and, considering the international nature of spyware, what fails in one jurisdiction may be successful in another.


Red Teaming AI: The Build Vs Buy Debate

In order to red team your AI model, you need to have a deep understanding of the system you are protecting. Today’s models are complex multimodal, multilingual systems. One model might take in text, images, code, and speech with any single input having the potential to break something. Attackers know this and can easily take advantage. For example, a QR code might contain an obfuscated prompt injection or a roleplay conversation might lead to ethical bypasses. This isn’t just about keywords, but about understanding how intent hides beneath layers of tokens, characters, and context. The attack surface isn’t just large, it’s effectively infinite. ... Building versus buying is an age-old debate. Fortunately, the AI security space is maturing rapidly, and organizations have a lot of choices to implement from. After you have some time to evaluate your own criteria against Microsoft, OWASP and NIST frameworks, you should have a good idea of what your biggest risks are and key success criteria. After considering risk mitigation strategies, and assuming you want to keep AI turned on, there are some open-source deployment options like Promptfoo and Llama Guard, which provide useful scaffolding for evaluating model safety. Paid platforms like Lakera, Knostic, Robust Intelligence, Noma, and Aim are pushing the edge on real-time, content-aware security for AI, each offering slightly different tradeoffs in how they offer protection. 


The Impact of Quantum Decryption

There are two key quantum mechanical phenomena, superposition and entanglement, that enable qubits to operate fundamentally differently than classical bits. Superposition allows a qubit to exist in a probabilistic combination of both 0 and 1 states simultaneously, significantly increasing the amount of information a small number of qubits can hold.  ... Quantum decryption of data stolen using current standards could have pervasive impacts. Government secrets, more long-term data, and intellectual property remain at significant risk even if decrypted years after a breach. Decrypted government communications, documents, or military strategies could compromise national security. An organization’s competitive advantage could be undermined by trade secrets being exposed. Meanwhile, data such as credit card information will diminish over time due to expiration dates and the issuance of new cards. ... For organizations, the ability of quantum computers to decrypt previously stolen data could result in substantial financial losses due to data breaches, corporate espionage, and potential legal liabilities. The exposure of sensitive corporate information, such as trade secrets and strategic plans, could provide competitors with an unfair advantage, leading to significant financial harm. 


Don't let a crisis of confidence derail your data strategy

In an age of AI, the options that range from on-premise facilities to colocation, or public, private and hybrid clouds, are business-critical decisions. These decisions are so important because such choices impact the compliance, cost efficiency, scalability, security, and agility that can make or break a business. In the face of such high stakes, it is hardly surprising that confidence is the battleground on which deals for digital infrastructure are fought. ... Commercially, Total Cost of Ownership (TCO) has become another key factor. Public cloud was heavily promoted on the basis of lower upfront costs. However, businesses have seen the "pay-as-you-go" model lead to escalating operational expenses. In contrast, businesses have seen the cost of colocation and private cloud become more predictable and attractive for long-term investment. Some reports suggest that at scale, colocation can offer significant savings over public cloud, while private cloud can also reduce costs by eliminating hardware procurement and management. Another shift in confidence has been that public cloud no longer guarantees the easiest path to growth. Public cloud has traditionally excelled in rapid, on-demand scalability. This agility was a key driver for adoption, as businesses sought to expand quickly.


The Anti-Metrics Era of Developer Productivity

The need to measure everything truly spiked during COVID when we started working remotely, and there wasn’t a good way to understand how work was done. Part of this also stemmed from management’s insecurities about understanding what’s going on in software engineering. However, when surveyed about the usefulness of developer productivity metrics, most leaders admit that the metrics they track are not representative of developer productivity and tend to conflate productivity with experience. And now that most of the code is written by AI, measuring productivity the same way makes even less sense. If AI improves programming effort by 30%, does that mean we get 30% more productivity?” ... Whether you call it DevEx or platform engineering, the lack of friction equals happy developers, which equals productive developers. In the same survey, 63% of developers said developer experience is important for their job satisfaction. ... Instead of building shiny dashboards, engineering leads should focus on developer experience and automated workflows across the entire software development life cycle: development, code reviews, builds, tests and deployments. This means focusing on solving real developer problems instead of just pointing at the problems.


Why banks’ tech-first approach leaves governance gaps

Integration begins with governance. When cybersecurity is properly embedded in enterprise-wide governance and risk management, security leaders are naturally included in key forums, including strategy discussions, product development, and M&A decision making. Once at the table, the cybersecurity team must engage productively. They must identify risks, communicate them in business terms AND collaborate with the business to develop solutions that enable business goals while operating within defined risk appetites. The goal is to make the business successful, in a safe and secure manner. Cyber teams that focus solely on highlighting problems risk being sidelined. Leaders must ensure their teams are structured and resourced to support business goals, with appropriate roles and encouragement of creative risk mitigation approaches. ... Start by ensuring there is a regulatory management function that actively tracks and analyzes emerging requirements. These updates should be integrated into the enterprise risk management (ERM) framework and governance processes—not handled in isolation. They should be treated no differently than any other new business initiatives. ... Ultimately, aligning cyber governance with regulatory change requires cross-functional collaboration, early engagement, and integration into strategic risk processes, not just technical or compliance checklists.

Daily Tech Digest - June 15, 2025


Quote for the day:

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” -- Mark Twain



Gazing into the future of eye contact

Eye contact is a human need. But it also offers big business benefits. Brain scans show that eye contact activates parts of the brain linked to reading others’ feelings and intentions, including the fusiform gyrus, medial prefrontal cortex, and amygdala. These brain regions help people figure out what others are thinking or feeling, which we all need for trusting business and work relationships. ... If you look into the camera to simulate eye contact, you can’t see the other person’s face or reactions. This means both people always appear to be looking away, even if they are trying to pay attention. It is not just awkward — it changes how people feel and behave. ... The iContact Camera Pro is a 4K webcam that uses a retractable arm that places the camera right in your line of sight so that you can look at the person and the camera at the same time. It lets you adjust video and audio settings in real time. It’s compact and folds away when not in use. It’s also easy to set up with a USB-C connection and works with Zoom, Microsoft Teams, Google Meet, and other major platforms. ... Finally, there’s Casablanca AI, software that fixes your gaze in real time during video calls, so it looks like you’re making eye contact even when you’re not. It works by using AI and GAN technology to adjust both your eyes and head angle, keeping your facial expressions and gestures natural, according to the company.


New York passes a bill to prevent AI-fueled disasters

“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” said Senator Gounardes. “The people that know [AI] the best say that these risks are incredibly likely […] That’s alarming.” The RAISE Act is now headed for New York Governor Kathy Hochul’s desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York’s AI safety bill would require the world’s largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York’s attorney general to bring civil penalties of up to $30 million. The RAISE Act aims to narrowly regulate the world’s largest companies — whether they’re based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill’s transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today), and are being made available to New York residents.


The ZTNA Blind Spot: Why Unmanaged Devices Threaten Your Hybrid Workforce

The risks are well-documented and growing. But many of the traditional approaches to securing these endpoints fall short—adding complexity without truly mitigating the threat. It’s time to rethink how we extend Zero Trust to every user, regardless of who owns the device they use. ... The challenge of unmanaged endpoints is no longer theoretical. In the modern enterprise, consultants, contractors, and partners are integral to getting work done—and they often need immediate access to internal systems and sensitive data. BYOD scenarios are equally common. Executives check dashboards from personal tablets, marketers access cloud apps from home desktops, and employees work on personal laptops while traveling. In each case, IT has little to no visibility or control over the device’s security posture. ... To truly solve the BYOD and contractor problem, enterprises need a comprehensive ZTNA solution that applies to all users and all devices under a single policy framework. The foundation of this approach is simple: trust no one, verify everything, and enforce policies consistently. ... The shift to hybrid work is permanent. That means BYOD and third-party access are not exceptions—they’re standard operating procedures. It’s time for enterprises to stop treating unmanaged devices as an edge case and start securing them as part of a unified Zero Trust strategy.


3 reasons I'll never trust an SSD for long-term data storage

SSDs rely on NAND flash memory, which inevitably wears out after a finite number of write cycles. Every time you write data to an SSD and erase it, you use up one write cycle. Most manufacturers specify the write endurance for their SSDs, which is usually in terabytes written (TBW). ... When I first started using SSDs, I was under the impression that I could just leave them on the shelf for a few years and access all my data whenever I wanted. But unfortunately, that's not how NAND flash memory works. The data stored in each cell leaks over time; the electric charge used to represent a bit can degrade, and if you don't power on the drive periodically to refresh the NAND cells, those bits can become unreadable. This is called charge leakage, and it gets worse with SSDs using lower-end NAND flash memory. Most consumer SSDs these days use TLC and QLC NAND flash memory, which aren't as great as SLC and MLC SSDs at data retention. ... A sudden power loss during critical write operations can corrupt data blocks and make recovery impossible. That's because SSDs often utilize complex caching mechanisms and intricate wear-leveling algorithms to optimize performance. During an abrupt shutdown, these processes might fail to complete correctly, leaving your data corrupted.


Beyond the Paycheck: Where IT Operations Careers Outshine Software Development

On the whole, working in IT tends to be more dynamic than working as a software developer. As a developer, you're likely to spend the bulk of your time writing code using a specific set of programming languages and frameworks. Your day-to-day, month-to-month, and year-to-year work will center on churning out never-ending streams of application updates. The tasks that fall to IT engineers, in contrast, tend to be more varied. You might troubleshoot a server failure one day and set up a RAID array the next. You might spend part of your day interfacing with end users, then go into strategic planning meetings with executives. ... IT engineers tend to be less abstracted from end users, with whom they often interact on a daily basis. In contrast, software engineers are more likely to spend their time writing code while rarely, if ever, watching someone use the software they produce. As a result, it can be easier in a certain respect for someone working in IT as compared to software development to feel a sense of satisfaction.  ... While software engineers can move into adjacent types of roles, like site reliability engineering, IT operations engineers arguably have a more diverse set of easily pursuable options if they want to move up and out of IT operations work.


Europe is caught in a cloud dilemma

The European Union is worried about its reliance on the leading US-based cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These large-scale players hold an unrivaled influence over the cloud sector and manage vital infrastructure essential for driving economies and fostering innovation. European policymakers have raised concerns that their heavy dependence exposes the continent to vulnerabilities, constraints, and geopolitical uncertainties. ... Europe currently lacks cloud service providers that can challenge those global Goliaths. Despite efforts like Gaia-X that aim to change this, it’s not clear if Europe can catch up anytime soon. It will be a prohibitively expensive undertaking to build large-scale cloud infrastructure in Europe that is both cost-efficient and competitive. In a nutshell, Europe’s hope to adopt top-notch cloud technology without the countries that currently dominate the industry is impractical, considering current market conditions. ... Often companies view cloud integration as merely a checklist or set of choices to finalize their cloud migration. This frequently results in tangled networks and isolated silos. Instead, businesses should overhaul their existing cloud environment with a comprehensive strategy that considers both immediate needs and future goals as well as the broader geopolitical landscape.


Applying Observability to Leadership to Understand and Explain your Way of Working

Leadership observability means observing yourself as you lead. Alex Schladebeck shared at OOP conference how narrating thoughts, using mind maps, asking questions, and identifying patterns helped her as a leader to explain decisions, check bias, support others, and understand her actions and challenges. Employees and other leaders around you want to understand what leads to your decisions, Schladebeck said. ... Heuristics give us our "gut feeling". And that’s useful, but it’s better if we’re able to take a step back and get explicit about how we got to that gut feeling, Schladebeck mentioned. If we categorise and label things and explain what experiences lead us to our gut feeling, then we have the option of checking our bias and assumptions, and can help others to develop the thinking structures to make their own decisions, she explained ... Schladebeck recommends that leaders narrate their thoughts to reflect on, and describe their own work to the ones they are leading. They can do this by asking themselves questions like, "Why do I think that?", "What assumptions am I basing this on?", "What context factors am I taking into account?" Look for patterns, categories, and specific activities, she advised, and then you can try to explain these things to others around you. To visualize her thinking as a leader, Schladebeck uses mind maps.


Data Mesh: The Solution to Financial Services' Data Management Nightmare

Data mesh is not a technology or architecture, but an organizational and operational paradigm designed to scale data in complex enterprises. It promotes domain-oriented data ownership, where teams manage their data as a product, using a self-service infrastructure and following federated governance principles. In a data mesh, any team or department within an organization becomes accountable for the quality, discoverability, and accessibility of the data products they own. The concept emerged around five years ago as a response to the bottlenecks and limitations created by centralized data engineering teams acting as data gatekeepers. ... In a data mesh model, data ownership and stewardship are assigned to the business domains that generate and use the data. This means that teams such as credit risk, compliance, underwriting, or investment analysis can take responsibility for designing and maintaining the data products that meet their specific needs. ... Data mesh encourages clear definitions of data products and ownership, which helps reduce the bottlenecks often caused by fragmented data ownership or overloaded central teams. When combined with modern data technologies — such as cloud-native platforms, data virtualization layers, and orchestration tools — data mesh can help organizations connect data across legacy mainframes, on-premises databases, and cloud systems.


Accelerating Developer Velocity With Effective Platform Teams

Many platform engineering initiatives fail, not because of poor technology choices, but because they miss the most critical component: genuine collaboration. The most powerful internal developer platforms aren’t just technology stacks; they’re relationship accelerators that fundamentally transform the way teams work together. Effective platform teams have a deep understanding of what a day in the life of a developer, security engineer or operations specialist looks like. They know the pressures these teams face, their performance metrics and the challenges that frustrate them most. ... The core mission of platform teams is to enable faster software delivery by eliminating complexity and cognitive load. Put simply: Make the right way the easiest way. Developer experience extends beyond function; it’s about creating delight and demonstrating that the platform team cares about the human experience, not just technical capabilities. The best platforms craft natural, intuitive interfaces that anticipate questions and incorporate error messages that guide, rather than confuse. Platform engineering excellence comes from making complex things appear simple. It’s not about building the most sophisticated system; it’s about reducing complexity so developers can focus on creating business value.


AI agents will be ambient, but not autonomous - what that means for us

Currently, the AI assistance that users receive is deterministic; that is, humans are expected to enter a command in order to receive an intended outcome. With ambient agents, there is a shift in how humans fundamentally interact with AI to get the desired outcomes they need; the AI assistants rely instead on environmental cues. "Ambient agents we define as agents that are triggered by events, run in the background, but they are not completely autonomous," said Chase. He explains that ambient agents benefit employees by allowing them to expand their magnitude and scale themselves in ways they could not previously do. ... When talking about these types of ambient agents with advanced capabilities, it's easy to become concerned about trusting AI with your data and with executing actions of high importance. To tackle that concern, it is worth reiterating Chase's definition of ambient agents -- they're "not completely autonomous." ... "It's not deterministic," added Jokel. "It doesn't always give you the same outcome, and we can build scaffolding, but ultimately you still ant a human being sitting at the keyboard checking to make sure that this decision is the right thing to do before it gets executed, and I think we'll be in that state for a relatively long period of time."





Daily Tech Digest - June 13, 2025


Quote for the day:

"Never stop trying; Never stop believing; Never give up; Your day will come." -- Mandy Hale




Hacking the Hackers: When Bad Guys Let Their Guard Down

"For defenders, these leaks are treasure troves," says Ensar Seker, chief information security officer (CISO) at threat intelligence cybersecurity company SOCRadar. "When analyzed correctly, they offer unprecedented visibility into actor infrastructure, infection patterns, affiliate hierarchies, and even monetization tactics." The data can help threat intel teams enrich indicators of compromise (IoCs), map infrastructure faster, preempt attacks, and potentially inform law enforcement disruption efforts, he says. "Organizations should track these OpSec failures through their [cyber threat intelligence] programs," Seker advises. "When contextualized correctly, they're not just passive observations; they become active defensive levers, helping defenders move upstream in the kill chain and apply pressure directly on adversarial capabilities." External leaks — like the DanaBot leak — often ironically are rooted in the same causes that threat actors abuse to break into victim networks: misconfigurations, unpatched systems, and improper segmentation that can be exploited to gain unauthorized access. Open directories, exposed credentials, unsecured management panels, unencrypted APIs, and accidental data exposure via hosting providers are all other opportunities for external discovery and exploration, Baker says. 


Prioritising cyber resilience in a cloud-first world

The explosion of data across their multi-cloud, hybrid and on-premises environments is creating a cause for concern among global CIOs, with 86 per cent saying it is beyond the ability of humans to manage. Aware that the growing complexity of their multi-provider cloud environments exposes their critical data and puts their organisation’s business resilience at risk, these leaders need to be confident they can restore their sensitive data at speed. They also need certainty when it comes to rebuilding their cloud environment and recovering their distributed cloud applications. To achieve these goals and minimise the risk of contamination resulting from ransomware, CIOs need to ensure their organisations implement a comprehensive cyber recovery plan that prioritises the recovery of both clean data and applications and mitigates downtime. ... Data recovery is just one aspect of cyber resilience for today’s cloud-powered enterprises. Rebuilding applications is an often overlooked task that can prove a time consuming and highly complex proposition when undertaken manually. Having the capability to recover what matters the most quickly should be a tried and tested component of every cloud-first strategy. Fortunately, today’s advanced security platforms now feature automation and AI options that can facilitate this process in hours or minutes rather than days or weeks. 


Mastering Internal Influence, a Former CIO’s Perspective

Establishing and building an effective relationship with your boss is one of the most important hard skills in business. You need to consciously work with your supervisor in order to get the best results for them, your organization, and yourself. In my experience, your boss will appreciate you initiating a conversation regarding what is important to them and how you can help them be more successful. Some managers are good at communicating their expectations, but some are not. It is your job to seek to understand what your boss’s expectations are. ... You must start with the assumption that everyone reporting to you is working in good faith toward the same goals. You need to demonstrate a trusting, humble, and honest approach to doing business. As the boss, you need to be a mentor, coach, visionary, cheerleader, confidant, guide, sage, trusted partner, and perspective keeper. It also helps to have a sense of humor. It is first vital to articulate the organization’s values, set expectations, and establish mutual accountability. Then you can focus on creating a safe work ecosystem. ... You’ll begin to change the culture by establishing the values of the organization. This is an important step to ensure that everyone is on the same page and working toward the same goals. Then, you’ll need to make sure they understand what is expected of them.


The Rise of BYOC: How Data Sovereignty Is Reshaping Enterprise Cloud Strategy

BYOC allows customers to run SaaS applications using their own cloud infrastructure and resources rather than relying on a third-party vendor’s infrastructure. This framework transforms how enterprises consume cloud services by inverting the traditional vendor-customer relationship. Rather than exporting sensitive information to vendor-controlled environments, organizations maintain data custody while still receiving fully-managed services. This approach addresses a fundamental challenge in modern enterprise architecture: how to maintain operational efficiency while also ensuring complete data control and regulatory compliance. ... BYOC adoption is driven primarily by increasing regulatory complexity around data sovereignty. The article Cloud Computing Trends in 2025 notes that “data sovereignty concerns, particularly the location and legal jurisdiction of data storage, are prompting cloud providers to invest in localized data centers.” Organizations must navigate an increasingly fragmented regulatory landscape while maintaining operational consistency. And when regulations vary country by country, having data in multiple third-party networks can dramatically compound the problem of knowing which data is subject to a specific country’s regulations.


AI Will Steal Developer Jobs (But Not How You Think)

“There’s a lot of anxiety about AI and software creation in general, not necessarily just frontend or backend, but people are rightfully trying to understand what does this mean for my career,” Robinson told The New Stack. “If the current rate of improvement continues, what will that look like in 1, 2, 3, 4 years? It could be pretty significant. So it has a lot of people stepping back and evaluating what’s important to them in their career, where they want to focus.” Armando Franco also sees anxiety around AI. Franco is the director of technology modernization at TEKsystems Global Services, which employs more than 3,000 people. It’s part of TEKsystems, a large global IT services management firm that employs 80,000 IT professionals. ... This isn’t the first time in history that people have fretted about new technologies, pointed out Shreeya Deshpande, a senior analyst specializing in data and AI with the Everest Group, a global research firm. “Fears that AI will replace developers mirror historical anxieties seen during past technology shifts — and, as history shows, these fears are often misplaced,” Deshpande said in a written response to The New Stack. “AI will increasingly automate repetitive development tasks, but the developer’s role will evolve rather than disappear — shifting toward activities like AI orchestration, system-level thinking, and embedding governance and security frameworks into AI-driven workflows.”


Unpacking the security complexity of no-code development platforms

Applications generated by no-code platforms are first and foremost applications. Therefore, their exploitability is first and foremost attributed to vulnerabilities introduced by their developers. To make things worse for no-code applications, they are also jeopardized by misconfigurations of the development and deployment environments. ... Most platforms provide controls at various levels that allow white-listing / blacklisting of connectors. This makes it possible to put guardrails around the use of “standard integrations”. Keeping tabs of these lists in a dynamic environment with a large number of developers is a big challenge. Shadow APIs are even more difficult to track and manage, particularly when some of the endpoints are determined only in runtime. Most platforms do not provide granular control over the use of shadow APIs but do provide a kill switch for their use entirely. Mechanisms exist in all the platforms that allow for secure development of applications and automations if used correctly. These mechanisms that can help prevent injection vulnerabilities, traversal vulnerabilities and other types of mistakes have different levels of complexity in terms of their use by developers. Unvetted data egress is also a big problem in these environments just as it is in general enterprise environments. 


Modernizing Financial Systems: The Critical Role of Cloud-Based Microservices Optimization

Cloud-based microservices provide many benefits to financial institutions across operational efficiency, security, and technology modernization. Economically, these architectures enable faster transaction processing by reducing latency and optimizing resource allocation. They also lower infrastructure expenses by replacing monolithic legacy systems with modular, scalable services that are easier to maintain and operate. Furthermore, the shift to cloud technologies increases demand for specialized roles in cloud operations and cybersecurity. In security operations, microservices support zero-trust architectures and data encryption to reduce the risk of fraud and unauthorized access. Cloud platforms also enhance resilience by offering built-in redundancy and disaster recovery capabilities, which help ensure continuous service and maintain data integrity in the event of outages or cyber incidents. ... To build secure and scalable financial microservices, there are a few key technology stacks needed. They include Docker and Kubernetes containerization for managing multiple microservices, and cloud functions for serverless computing, which will be used to run calculations on demand. API Gateways will ensure that there is secure communication between services and Kafka for real-time data monitoring and streaming.


Ultra Ethernet Consortium publishes 1.0 specification, readies Ethernet for HPC, AI

Among the key areas of innovation in the UEC 1.0 specification is a new mechanism for network congestion control, which is critical for AI workloads. Metz explained that the UEC’s approach to congestion control does not rely on a lossless network as has traditionally been the case. It also introduces a new mode of operation where the receiver is able to limit sender transmissions as opposed to being passive. “This is critical for AI workloads as these primitives enable the construction of larger networks with better efficiency,” he said. “It’s a crucial element of reducing training and inference time.” ... Metz said that four workgroups got started after the main 1.0 work began, each with their own initiatives that solidify and simplify deploying UEC. These workgroups include: storage, management, compliance and performance. He noted that all of these workgroups have projects that are being developed to strengthen the ease-of-use, efficiency improvements in the next stages and simplified provisioning. UEC is also working on educational materials to help inform networking administrators on UEC technology and concepts. The group is also working industry ecosystem partners. “We have projects with OCP, NVM Express, SNIA, and more – with many more on the way to work on each layer – from the physical to the software,” Metz said. 


From Tools to Teammates: Are AI Agents the New Marketers?

The key difference is that traditional models were built as generic tools designed to perform a wide range of tasks. On the other hand, AI agents are designed to meet businesses’ specific needs. They can train a single agent or a group of them on their data to handle tasks unique to your business. This translates to better outcomes, improved performance, and stronger business impact. Another huge advantage of using AI agents is that they help unify marketing efforts and create a cohesive marketing ecosystem. Another major shift that comes with AI agent implementation is something called AIAO- AI Agent Optimisation. This is highly likely to become the next big alternative to traditional SEO. Now, marketers optimise content around specific keywords like “best project management software.” But with AIAO, that’s changing. AI agents are built to understand and respond to much more complex, conversational queries, like “What’s the best project management tool with timeline boards that works for marketing teams?” It’s no longer about integrating the right phrases into your content. It’s about ensuring your information is relevant, clear, and easy for AI agents to understand and process. Semantic search is going to take the lead. 


Under the Cloud: How Network Design Drives Everything

Let’s be clear, the network isn’t an accessory; it’s the key ingredient that determines how well your cloud performs, how secure your data is, how quickly you can recover from a disaster, and how easily you scale across borders or platforms. Think of it as the highway system beneath your business. Sleek, fast roads make for a smooth ride, while congested or patchy ones will leave you stuck in traffic. ... It’s tempting to get caught up in the flashier parts of cloud infrastructure, like server specs and cutting-edge tools, but none of it works well without a strong network underneath. Here’s the truth. Your network is doing the quiet, behind-the-scenes heavy lifting. It’s what keeps your games lag-free, your financial systems always on, and your hybrid workloads running smoothly across platforms even if it doesn’t get all the attention. You should think of your network as the glue that holds it all together – from your cloud services to your bare metal setup. It is what makes it all possible for AI models to work seamlessly across regions for backups to run smoothly in the background and for your users to enjoy fast, always-on experiences without ever thinking about what’s happening behind the scenes. ... A reliable, secure and performant network is nothing if it can’t be managed the right way. Having the right architecture, tools and knowledge to support it, is key for success.