Daily Tech Digest - September 02, 2025


Quote for the day:

“The art of leadership is saying no, not yes. It is very easy to say yes.” -- Tony Blair


When Browsers Become the Attack Surface: Rethinking Security for Scattered Spider

Scattered Spider, also referred to as UNC3944, Octo Tempest, or Muddled Libra, has matured over the past two years through precision targeting of human identity and browser environments. This shift differentiates it from other notorious cybergangs such as Lazarus Group, Fancy Bear, and REvil. If sensitive information such as calendars, credentials, or security tokens lives in browser tabs, Scattered Spider can acquire it. ... Once user credentials fall into the wrong hands, attackers like Scattered Spider move quickly to hijack previously authenticated sessions by stealing cookies and tokens. The integrity of browser sessions is best secured by preventing unauthorized scripts from accessing or exfiltrating these sensitive artifacts. Organizations must enforce contextual security policies based on components such as device posture, identity verification, and network trust. By linking session tokens to context, enterprises can prevent attacks like account takeovers even after credentials have been compromised. ... Although browser security is the last mile of defense against malware-less attacks, integrating it into an existing security stack fortifies the entire network. By feeding activity logs enriched with browser data into SIEM, SOAR, and ITDR platforms, CISOs can correlate browser events with endpoint activity for a much fuller picture.
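The idea of linking session tokens to context can be sketched in a few lines. This is an illustrative Python sketch, not any vendor's implementation; the function names and the context fields (device ID, network zone) are assumptions for the example:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # per-deployment server-side secret

def bind_token(session_id: str, device_id: str, network_zone: str) -> str:
    """Derive a context-bound tag: valid only for this device and network."""
    context = f"{session_id}|{device_id}|{network_zone}".encode()
    return hmac.new(SERVER_KEY, context, hashlib.sha256).hexdigest()

def validate(session_id: str, device_id: str, network_zone: str, tag: str) -> bool:
    """Recompute the tag from the presented context and compare in constant time."""
    expected = bind_token(session_id, device_id, network_zone)
    return hmac.compare_digest(expected, tag)

# Tag issued in the legitimate context...
tag = bind_token("sess-1", "laptop-42", "corp-vpn")
# ...verifies there, but fails if the cookie is exfiltrated and
# replayed from a different device or network.
assert validate("sess-1", "laptop-42", "corp-vpn", tag)
assert not validate("sess-1", "attacker-box", "unknown-net", tag)
```

A real deployment would derive the context from managed-device attestation and verified network posture rather than client-supplied strings.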


The Transformation Resilience Trifecta: Agentic AI, Synthetic Data and Executive AI Literacy

The current state of Agentic AI is, in a word, fragile. Ask anyone in the trenches. These agents can be brilliant one minute and baffling the next. Instructions get misunderstood. Tasks break in new contexts. Chaining agents into even moderately complex workflows exposes just how early we are in this game. Reliability? Still a work in progress. And yet, we’re seeing companies experiment. Some are stitching together agents using LangChain or CrewAI. Others are waiting for more robust offerings from Microsoft Copilot Studio, OpenAI’s GPT-4o Agents, or Anthropic’s Claude toolsets. It’s the classic innovator’s dilemma: Move too early, and you waste time on immature tech. Move too late, and you miss the wave. Leaders must thread that needle — testing the waters while tempering expectations. ... Here’s the scarier scenario I’m seeing more often: “Shadow AI.” Employees are already using ChatGPT, Claude, Copilot, Perplexity — all under the radar. They’re using it to write reports, generate code snippets, answer emails, or brainstorm marketing copy. They’re more AI-savvy than their leadership. But they don’t talk about it. Why? Fear. Risk. Politics. Meanwhile, some executives are content to play cheerleader, mouthing AI platitudes on LinkedIn but never rolling up their sleeves. That’s not leadership — that’s theater.


Red Hat strives for simplicity in an ever more complex IT world

One of the most innovative developments in RHEL 10 is bootc in image mode, where VMs run like containers and are part of the CI/CD pipeline. By using immutable images, all changes are controlled from the development environment. Van der Breggen illustrates this with a retail scenario: “I can have one POS system for the payment kiosk, but I can also have another POS system for my cashiers. They use the same base image. If I then upgrade that base image to later releases of RHEL, I create one new base image, tag it in the environments, and then all 500 systems can be updated at once.” Red Hat Enterprise Linux Lightspeed acts as a command-line assistant that brings AI directly into the terminal. ... For edge devices, Red Hat uses a solution called Greenboot, which does not immediately roll back but waits until certain conditions are met. After, for example, three reboots without a working system, it reverts to the previous working release. However, not everything has been worked out perfectly yet. Lightspeed currently only works online, while many customers would like to use it offline because their RHEL systems are tucked away behind firewalls. Red Hat is still exploring possibilities for an expansion here, although making the knowledge base available offline poses risks to intellectual property.
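Greenboot itself is implemented as health-check scripts in the rpm-ostree boot flow; the rollback behavior described above can be sketched conceptually (the function name and threshold are illustrative, not Red Hat's code):

```python
def check_boot(boot_ok: bool, failed_boots: int, max_failures: int = 3):
    """Greenboot-style logic (conceptual): tolerate a few failed boots,
    then fall back to the last known-good release."""
    if boot_ok:
        return "commit", 0               # mark current release good, reset counter
    failed_boots += 1
    if failed_boots >= max_failures:
        return "rollback", failed_boots  # revert to previous working image
    return "retry", failed_boots         # reboot and try again

state = 0
action, state = check_boot(False, state)  # 1st failed boot -> retry
action, state = check_boot(False, state)  # 2nd failed boot -> retry
action, state = check_boot(False, state)  # 3rd failed boot -> rollback
assert action == "rollback"
```

The point of the grace period is to distinguish a transient failure (power loss mid-boot) from a genuinely broken release.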


The state of DevOps and AI: Not just hype

The vision of AI that takes you from a list of requirements through work items, build, and test to, finally, deployment is still nothing more than a vision. In many cases, DevOps tool vendors use AI to build solutions to the problems their customers have. The result is a mixture of point solutions that can solve immediate developer problems. ... Machine learning is speeding up testing by failing faster. Build steps get reordered automatically so those that are likely to fail happen earlier, which means developers aren’t waiting for the full build to know when they need to fix something. Often, the same system is used to detect flaky tests by muting tests where failure adds no value. ... Machine learning gradually helps identify the characteristics of a working system and can raise an alert when things go wrong. Depending on the governance, it can spot where a defect was introduced and start a production rollback while also providing potential remediation code to fix the defect. ... There’s a lot of puffery around AI, and DevOps vendors are not helping. A lot of their marketing emphasizes fear: “Your competitors are using AI, and if you’re not, you’re going to lose” is their message. Yet DevOps vendors themselves are only one or two steps ahead of you in their AI adoption journey. Don’t adopt AI pell-mell due to FOMO, and don’t expect to replace everyone under the CTO with a large language model.
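The fail-fast reordering described above can be sketched with a simple historical failure rate; real systems use richer ML signals, so treat this as a minimal illustration with made-up step names:

```python
def fail_fast_order(steps, history):
    """Order build steps so those most likely to fail (per historical
    failure rate) run first, surfacing breakage sooner."""
    def failure_rate(step):
        runs = history.get(step, [])
        return sum(runs) / len(runs) if runs else 0.0
    return sorted(steps, key=failure_rate, reverse=True)

# 1 = failed run, 0 = passed run (illustrative history)
history = {
    "lint":       [0, 0, 0, 0],
    "unit_tests": [1, 0, 1, 0],  # fails often -> run first
    "package":    [0, 0, 0, 1],
}
order = fail_fast_order(["lint", "unit_tests", "package"], history)
assert order[0] == "unit_tests"
```

The same failure history can also flag flaky tests: a step that fails at random regardless of the change is a candidate for muting rather than reordering.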


5 Ways To Secure Your Industrial IoT Network

IIoT is a subcategory of the Internet of Things (IoT). It is made up of a system of interconnected smart devices that use sensors, actuators, controllers and intelligent control systems to collect, transmit, receive and analyze data. ... IIoT also has its unique architecture that begins with the device layer, where equipment, sensors, actuators and controllers collect raw operational data. That information is passed through the network layer, which transmits it to the internet via secure gateways. Next, the edge or fog computing layer processes and filters the data locally before sending it to the cloud, helping reduce latency and improving responsiveness. Once in the service and application support layer, the data is stored, analyzed, and used to generate alerts and insights. ... Many IIoT devices are not built with strong cybersecurity protections. This is especially true for legacy machines that were never designed to connect to modern networks. Without safeguards such as encryption or secure authentication, these devices can become easy targets. ... Defending against IIoT threats requires a layered approach that combines technology, processes and people. Manufacturers should segment their networks to limit the spread of attacks, apply strong encryption and authentication for connected devices, and keep software and firmware regularly updated.
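The edge or fog layer's job, filtering data locally before it reaches the cloud, can be illustrated with a minimal sketch (the sensor names and thresholds are hypothetical):

```python
def edge_filter(readings, low, high):
    """Edge-layer preprocessing (illustrative): drop in-range samples
    locally and forward only anomalies, reducing cloud traffic and latency."""
    return [r for r in readings if not (low <= r["value"] <= high)]

readings = [
    {"sensor": "temp-1", "value": 21.5},
    {"sensor": "temp-1", "value": 98.2},  # out of range -> forward to cloud
    {"sensor": "temp-1", "value": 22.0},
]
alerts = edge_filter(readings, low=0.0, high=60.0)
assert len(alerts) == 1 and alerts[0]["value"] == 98.2
```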


AI Chatbots Are Emotionally Deceptive by Design

Even without deep connection, emotional attachment can lead users to place too much trust in the content chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable and specifically personalized to a user’s tastes can also lead to social “deskilling,” as some users of AI chatbots have flagged. This dynamic is simply unrealistic in genuine human relationships. Some users may be more vulnerable than others to this kind of emotional manipulation, such as neurodiverse people or teens who have limited experience building relationships. ... With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It’s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI. All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. A companion AI, for example, can provide emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic approach is already familiar in robot design, where researchers have created robots that are purposefully designed to not be human-like. This design choice has been shown to more appropriately reflect system capabilities, and to better situate robots as useful tools, not friends or social counterparts.


How AI product teams are rethinking impact, risk, feasibility

We’re at a strange crossroads in the evolution of AI. Nearly every enterprise wants to harness it. Many are investing heavily. But most are falling flat. AI is everywhere — in strategy decks, boardroom buzzwords and headline-grabbing POCs. Yet, behind the curtain, something isn’t working. ... One of the most widely adopted prioritization models in product management is RICE — which scores initiatives based on Reach, Impact, Confidence, and Effort. It’s elegant. It’s simple. It’s also outdated. RICE was never designed for the world of foundation models, dynamic data pipelines or the unpredictability of inference-time reasoning. ... To make matters worse, there’s a growing mismatch between what enterprises want to automate and what AI can realistically handle. Stanford’s 2025 study, The Future of Work with AI Agents, provides a fascinating lens. ... ARISE adds three crucial layers that traditional frameworks miss: First, AI Desire — does solving this problem with AI add real value, or are we just forcing AI into something that doesn’t need it? Second, AI Capability — do we actually have the data, model maturity and engineering readiness to make this happen? And third, Intent — is the AI meant to act on its own or assist a human? Proactive systems have more upside, but they also come with far more risk. ARISE lets you reflect that in your prioritization.
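RICE's arithmetic is simple, which makes the contrast with an ARISE-style extension easy to show. The RICE formula below is the standard one; the ARISE weighting is a hypothetical sketch of the three layers described above, not the framework's published scoring:

```python
def rice(reach, impact, confidence, effort):
    """Classic RICE: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

def arise(reach, impact, confidence, effort,
          ai_desire, ai_capability, autonomy_risk):
    """Illustrative ARISE-style extension (weights are hypothetical):
    scale RICE by whether AI adds real value (ai_desire), whether the
    data/model maturity exists (ai_capability), and discount proactive,
    autonomous systems for their extra risk (autonomy_risk in [0, 1])."""
    return rice(reach, impact, confidence, effort) \
        * ai_desire * ai_capability * (1 - autonomy_risk)

base = rice(reach=1000, impact=2, confidence=0.8, effort=4)  # 400.0
# Same initiative, but low AI capability halves the priority:
assert arise(1000, 2, 0.8, 4, ai_desire=1.0, ai_capability=0.5,
             autonomy_risk=0.0) == base * 0.5
```

The point is not the particular weights but that a RICE score alone cannot distinguish an initiative AI can actually serve from one it cannot.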


Cloud control: The key to greener, leaner data centers

To fully unlock these cost benefits, businesses must adopt FinOps practices: the discipline of bringing engineering, finance, and operations together to optimize cloud spending. Without it, cloud costs can quickly spiral, especially in hybrid environments. But with FinOps, organizations can forecast demand more accurately, optimize usage, and ensure every pound spent delivers value. ... Cloud platforms make it easier to use computing resources efficiently. Even though the infrastructure stays online, hyperscalers can spread workloads across many customers, keeping hardware busier and more productive, and manage capacity at a scale that allows them to power down hardware when it's not in use. ... The combination of cloud computing and artificial intelligence (AI) is further reshaping data center operations. AI can analyze energy usage, detect inefficiencies, and recommend real-time adjustments. But running these models on-premises can be resource-intensive. Cloud-based AI services offer a more efficient alternative. Take Google, for instance. By applying AI to its data center cooling systems, it cut energy use by up to 40 percent. Other organizations can tap into similar tools via the cloud to monitor temperature, humidity, and workload patterns and automatically adjust cooling, load balancing, and power distribution.


You Backed Up Your Data, but Can You Bring It Back?

Many IT teams assume that the existence of backups guarantees successful restoration. This misconception can be costly. A recent report from Veeam revealed that 49% of companies failed to recover most of their servers after a significant incident. This highlights a painful reality: Most backup strategies focus too much on storage and not enough on service restoration. Having backup files is not the same as successfully restoring systems. In real-world recovery scenarios, teams face unknown dependencies, a lack of orchestration, incomplete documentation, and gaps between infrastructure and applications. When services need to be restored in a specific order and under intense pressure, any oversight can become a significant bottleneck. ... Relying on a single backup location creates a single point of failure. Local backups can be fast but are vulnerable to physical threats, hardware failures, or ransomware attacks. Cloud backups offer flexibility and off-site protection but may suffer bandwidth constraints, cost limitations, or provider outages. A hybrid backup strategy ensures multiple recovery paths by combining on-premises storage, cloud solutions, and, optionally, offline or air-gapped copies. This approach allows teams to choose the fastest or most reliable method based on the nature of the disruption.
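Restoring services "in a specific order" is a dependency-ordering problem, which a topological sort mechanizes. A minimal sketch using Python's standard library (the service names and dependencies are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each service lists what must be
# running before it can be restored.
deps = {
    "database": [],
    "auth":     ["database"],
    "api":      ["database", "auth"],
    "frontend": ["api"],
}
order = list(TopologicalSorter(deps).static_order())
# Dependencies always come before their dependents:
assert order.index("database") < order.index("auth") < order.index("api")
assert order[-1] == "frontend"
```

Capturing this map ahead of time, rather than reconstructing it under incident pressure, is exactly the orchestration gap the report points to; `TopologicalSorter` also raises an error if the dependencies contain a cycle, surfacing broken runbooks early.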


Beyond Prevention: How Cybersecurity and Cyber Insurance Are Converging to Transform Risk Management

Historically, cybersecurity and cyber insurance have operated in silos, with companies deploying technical defenses to fend off attacks while holding a cyber insurance policy as a safety net. This fragmented approach often leaves gaps in coverage and preparedness. ... The insurance sector is at a turning point. Traditional models that assess risk at the point of policy issuance are rapidly becoming outdated in the face of constantly evolving cyber threats. Insurers who fail to adapt to an integrated model risk being outpaced by agile Cyber Insurtech companies, which leverage cutting-edge cyber intelligence, machine learning, and risk analytics to offer adaptive coverage and continuous monitoring. Some insurers have already begun to reimagine their role—not only as claim processors but as active partners in risk prevention. ... A combined cybersecurity and insurance strategy goes beyond traditional risk management. It aligns the objectives of both the insurer and the insured, with insurers assuming a more proactive role in supporting risk mitigation. By reducing the probability of significant losses through continuous monitoring and risk-based incentives, insurers are building a more resilient client base, directly translating to reduced claim frequency and severity.

Daily Tech Digest - September 01, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain


The AI-powered cyberattack era is here

In the deepfake era, the crime was unprecedented and exotic. In the genAI era, it’s a banality. You just need a three-second recording of a person talking, according to McAfee experts. With that snippet, you can create convincing fake messages or phone calls. If someone sounds like a trusted person, people are ready to hand over their cash or secrets. In 2024, the company’s global study found one in four people had been hit by an AI voice scam or knew someone who had. ... One challenge in the field of AI-enabled attacks — which is to say, attacks that didn’t exist or weren’t possible before genAI — is how quickly everything changes. Take AI browsers, for example. This new category of web browser includes Perplexity Comet, Dia (by The Browser Company), Fellou, Opera Neon, Sigma AI Browser, Arc Max, Microsoft Edge Copilot, Brave Leo, Wave Browser Pro, SigmaOS, Opera Aria, Genspark AI Browser, Poly, Quetta Browser, Browserbase, Phew AI Tab, and the upcoming OpenAI browser. ... The truth is that most attacks are still the old-fashioned kind, performed without help from AI. And most still involve human error. So all the standard guidelines and best practices apply. Companies should update software regularly, require multifactor authentication for all logins, and give employees training about fake emails and malicious links. Outside experts should run penetration tests twice a year. Making regular offline backups can save thousands after AI-based ransomware attacks.


How to Make Data Work for What’s Next

Too often, companies begin by auditing the data they already have. A better question is, “What outcome are we trying to drive?” Whether it’s scaling operations, improving retention, or guiding smarter investments, the path forward starts with understanding where you want to go. ... Not everything needs to be measured. The goal is to curate the data, pulling in what’s most useful rather than everything that’s available. Focus on what’s going to help people make decisions in real time. Some metrics help you look ahead, while others explain what already happened. A good mix can be helpful, but only if it still aligns with the outcome you’re tracking. This shift can feel unfamiliar. Many teams are used to starting from their existing systems (what’s already tracked, what can be pulled from a dashboard) and working backward. But that often leads to noise or gaps. Managing too much data isn’t just overwhelming; it’s also costly. Teams spend time storing, maintaining, and cleaning data that often doesn’t lead to better decisions. ... Trust is built in small moments. When early reports reflect what people expect based on their lived experience, they begin to rely on the system. ... A stronger data culture isn’t just about systems. It’s about building skills and helping people see how their work connects to outcomes. When data reinforces what people already know and shows up in context—visually, interactively, and on time—it becomes a tool they trust, use, and want to leverage.


Cybercrime increasingly moving beyond financial gains

“We are very redundant when talking about cybercrime, because we always associate it with economic motivations,” says Hervé Lambert, global consumer operations manager at Panda Security. “But they are not the only reasons out there.” Lambert also refers to political and military cyber espionage, “states or actors linked to different governments” that seek to infiltrate to obtain strategic information. It also includes cyberwarfare, “attacks designed to do damage, disable, render important systems useless. There is no lucrative purpose, but to enhance or win a war or facilitate sabotage.” ... “These very different motivations are not mutually exclusive, as they seek different objectives,” adds Alonso García. “We can find them as the sole motivation or they complement each other, making cyberattacks more elaborate and complex to analyze.” In other words, a person or group may have political interests but ask for a ransom to cover up their actions or seek funding; or in a context of turmoil between countries, take advantage to launch attacks that seek to profit. ... But the strategy to be followed will have to be reoriented or reinforced if, for example, we are working in a critical sector from a geopolitical point of view, in which, among other things, disinformation will have to be taken into account. 
"The old software world is gone, giving way to a new set of truths being defined by AI. To navigate the changes, technical leaders should carry out rigorous validation on AI assistants. Managers should establish formal AI governance policies and invest in training for emerging technologies. Security professionals should update their threat models to include AI-specific risks and leverage SBOMs [Software Bill of Materials] as a strategic asset for risk management to achieve true scale application security." ... "Without SBOMs, we're flying blind. With them, we're finally turning the lights on in the supply chain cockpit," said Helen Oakley, Director of Secure Software Supply Chains and Secure Development at SAP. "AI coding assistants are like interns with rocket fuel. They accelerate everything, including errors, if you don't set boundaries." ... "For organizations adopting third-party AI tools, it's also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption. When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries, and applying security controls that account for external dependencies."


How Digital Twins Transform Drug Development Processes

A key technological advancement emerging from these hubs is the application of digital twins in pharmaceutical research. Initially used in engineering and manufacturing sectors, digital twins in the pharmaceutical industry are virtual models of human systems that replicate biological processes. These replicas are built using vast volumes of biological, clinical and genomic data, enabling researchers to test how different patient profiles might respond to specific drugs without exposing individuals to experimental therapies. The implications of this approach are transformative. Through digital twins, pharmaceutical scientists can simulate the progression of diseases, predict Adverse Drug Reactions (ADRs) and model patient diversity across age, gender, genetic traits and comorbidities. This ability to run in-silico trials, which are clinical trials conducted through virtual simulations, reduces the cost, duration and risk associated with traditional clinical testing. ... AI is transforming every clinical development phase worldwide, from trial design to execution and outcome analysis. According to industry estimates, AI is expected to support 60–70 per cent of clinical trials by 2030, potentially saving $20–30 billion annually. While digital twins represent just one facet of this broader AI integration, their capacity to virtually assess drug safety and efficacy could significantly accelerate the journey from discovery to patient delivery.


Break the Link: The Fastest Way to Contain a Cyber-Attack

Hardware-enforced network isolation gives operators the ability to physically disconnect servers, storage and network segments on demand, using secure, out-of-band commands that sit entirely outside the attack surface. The simplicity here is the ultimate strength: if malware can’t make contact, it can’t cause damage. If a breach does happen? You can trigger isolation in milliseconds, stopping the spread before it locks systems, exfiltrates data or drains accounts. Unlike software-only isolation, which depends on the very systems it’s defending, hardware isolation can’t be tampered with remotely. No IP address, no exploitable code, just a clean physical break. ... Hardware isolation cuts the response to milliseconds, preserving both data integrity and regulatory compliance. It stops an incident at the source, shutting it down before operations are disrupted. The power of isolation is especially effective in high-stakes environments where speed and certainty matter. In colocation facilities, automated isolation prevents cross-tenant contamination by cutting off a compromised tenant before the threat can spread. At disaster recovery sites, it enables network segments to remain fully offline until they are needed, improving security and efficiency. In AI-heavy workloads, hardware isolation prevents model tampering and data exfiltration. In backup environments, selective disconnection ensures ransomware cannot encrypt or corrupt critical archives.


Prioritize these 4 processes to balance innovation and responsibility in banking model risk management

As AI/ML capabilities often require specialized software, datasets and computational tools, many financial institutions—especially smaller ones—turn to third-party vendors. While this can accelerate adoption, it also introduces critical vulnerabilities related to oversight, accountability and systemic dependence. Third-party models often come with limited visibility into how they were developed, what data was used and how they behave under stress. Smaller institutions may lack the bargaining power or technical resources to demand transparency or perform deep due diligence. This lack of insight can delay detection of errors, increase compliance risk and even result in operational disruptions. ... AI/ML models thrive on vast datasets. In banking, where customer data is highly sensitive and tightly regulated, this presents a critical dual-risk challenge: protecting privacy and preventing or detecting hidden learning, where AI models may inadvertently infer protected or sensitive attributes. One risk is unauthorized or improper use of personal data during model training. Unintended inclusion of restricted data sets can lead to privacy breaches and violations of data protection laws such as the General Data Protection Regulation (GDPR). Another, more subtle, risk is the inadvertent encoding of sensitive attributes such as race or gender through proxy variables, even when such data is not explicitly used.


Achieving a Secure Cloud with Restructured NHIs

At its core, NHI restructuring involves redefining and aligning the various non-human identities (NHIs) and linked secrets within your organization’s cloud infrastructure. The aim is to have a more effective, efficient, and secure system capable of monitoring and governing NHIs. This restructuring process includes a comprehensive review of the existing NHIs, secrets, and their permissions. It also involves determining which secrets are associated with which NHIs, who owns them, how they are used, and which vulnerabilities they may be exposed to. By performing this activity, a strong foundation can be laid for establishing a secure cloud environment that harnesses the power of NHI management. ... Why is the restructuring of NHIs not just a requirement but a strategic move for any digital enterprise? The answer lies in the potential weaknesses and vulnerabilities that can arise from poorly managed NHIs. Restructuring NHIs is not merely about enhancing cybersecurity but about developing a strategic advantage. This requires realizing the significance of NHIs in providing a compelling line of defense against potential security breaches. By properly managing and restructuring NHIs, organizations can build comprehensive, effective, and potent cyber defenses. It enables them to anticipate potential threats, detect vulnerabilities, and implement proactive measures to mitigate risks.


Boards are being told to rethink their role in cybersecurity

The report describes how ransomware attacks have become more targeted and disruptive. Threat actors are no longer just encrypting files. They are exploiting identity systems, help desks, and cloud infrastructure. One example highlighted is the growing use of social engineering against help desk staff, where attackers impersonate employees and convince support teams to reset credentials or modify multifactor authentication settings. By doing so, they bypass technical defenses and gain control of accounts. The report emphasizes that boards should pay attention to how identity is protected inside their organizations. Security teams may face resistance when trying to roll out stronger protections such as phishing-resistant multifactor authentication. Boards, according to the report, are in a position to set the tone and ensure these measures are adopted. ... The third area of focus is how boards can support innovation while ensuring cybersecurity is not left behind. The report argues that strong cybersecurity practices can help a company stand out by building trust with customers and enabling faster adoption of new technology. Boards are urged to encourage a risk-first mindset when new products or services are developed. That means security should be considered early in the process rather than added later. 


How to Overcome Five Key GenAI Deployment Challenges

Data is the lifeblood of artificial intelligence. Fortunately, with generative AI, data does not have to be perfect and pristine compared to the requirements for traditional, transaction-based deterministic systems. The key is ensuring AI has sufficient context from your business environment to deliver meaningful outputs – not perfect data, but the right data that’s relevant to the target use case. Don’t make the mistake of making data preparation too complex. Focus on giving AI systems the key information they need to create reliable and meaningful results. Partners can help identify your most important data, build a practical data foundation that balances quality and access, and guide you in adding more data as the project grows. ... AI initiatives are often rife with technical challenges when they’re just being launched. From model updates to data inconsistencies, a reliable partner ensures smooth deployment by anticipating and addressing these hurdles. Once these projects have gotten off the ground, they actively monitor performance while troubleshooting issues like AI model drift or mitigating data security and regulatory compliance challenges to keep the project on track. ... It’s not just technical issues that make GenAI hard. There’s also a human challenge. AI adoption requires buy-in among both business and IT leaders and support from actual end users.

Daily Tech Digest - August 31, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson



A Brief History of GPT Through Papers

The first neural-network-based language translation models operated in three steps (at a high level). An encoder would embed the “source statement” into a vector space, resulting in a “source vector”. Then, the source vector would be mapped to a “target vector” through a neural network, and finally a decoder would map the resulting vector to the “target statement”. People quickly realized that the vector that was supposed to encode the source statement had too much responsibility. The source statement could be arbitrarily long. So, instead of a single vector for the entire statement, let’s convert each word into a vector and then have an intermediate element that picks out the specific words that the decoder should focus more on. ... The mechanism by which the words were converted to vectors was based on recurrent neural networks (RNNs). Details can be found in the paper itself. These recurrent neural networks relied on hidden states to encode the past information of the sequence. While it’s convenient to have all that information encoded into a single vector, it’s not good for parallelizability, since that vector becomes a bottleneck and must be computed before the rest of the sentence can be processed. ... The idea is to give the model demonstrative examples at inference time as opposed to using them to train its parameters. If no such examples are provided in-context, it is called “zero shot”. If one example is provided, “one shot”, and if a few are provided, “few shot”.
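The attention step described above, scoring each source word against a query and taking a weighted average of their vectors, can be written out directly. A minimal single-query sketch of scaled dot-product attention (toy numbers, no learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query: score each key against
    the query, softmax the scores, and return the weighted average of the
    values -- i.e., 'pick out the words to focus on'."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: the query matches the first key far more strongly,
# so the output is dominated by the first value vector.
out = attention(query=[1.0, 0.0],
                keys=[[10.0, 0.0], [0.0, 10.0]],
                values=[[1.0, 0.0], [0.0, 1.0]])
assert out[0] > 0.99
```

Because each score depends only on the query and one key, all positions can be computed in parallel, which is exactly the bottleneck the RNN's sequential hidden state imposed.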


8 Powerful Lessons from Robert Herjavec at Entrepreneur Level Up That Every Founder Needs to Hear

Entrepreneurs who remain curious — asking questions and seeking insights — often discover pathways others overlook. Instead of dismissing a "no" or a difficult response, Herjavec urged attendees to look for the opportunity behind it. Sometimes, the follow-up question or the willingness to listen more deeply is what transforms rejection into possibility. ... while breakthrough innovations capture headlines, the majority of sustainable businesses are built on incremental improvements, better execution and adapting existing ideas to new markets. For entrepreneurs, this means it's okay if your business doesn't feel revolutionary from day one. What matters is staying committed to evolving, improving and listening to the market. ... setbacks are inevitable in entrepreneurship. The real test isn't whether you'll face challenges, but how you respond to them. Entrepreneurs who can adapt — whether by shifting strategy, reinventing a product or rethinking how they serve customers — are the ones who endure. ... when leaders lose focus, passion or clarity, the organization inevitably follows. A founder's vision and energy cascade down into the culture, decision-making and execution. If leaders drift, so does the company. For entrepreneurs, this is a call to self-reflection. Protect your clarity of purpose. Revisit why you started. And remember that your team looks to you not just for direction, but for inspiration. 


The era of cheap AI coding assistants may be over

Developers have taken to social media platforms and GitHub to express their dissatisfaction over the pricing changes, especially across tools like Claude Code, Kiro, and Cursor, but vendors have not adjusted pricing or made any changes that significantly reduce credit consumption. Analysts see no near-term path to lower prices. “There’s really no alternative until someone figures out the following: how to use cheaper but dumber models than Claude Sonnet 4 to achieve the same user experience and innovate on KVCache hit rate to reduce the effective price per dollar,” said Wei Zhou, head of AI utility research at SemiAnalysis. Given the market conditions, CIOs and their enterprises need to start absorbing the cost and treating vibe coding tools as a productivity expense, according to Futurum’s Hinchcliffe. “CIOs should start allocating more budgets for vibe coding tools, just as they would do for SaaS, cloud storage, collaboration tools or any other line items,” Hinchcliffe said. “The case for ROI on these tools is still strong: faster shipping, fewer errors, and higher developer throughput. Additionally, a good developer costs six figures annually, while vibe coding tools are still priced in the low-to-mid thousands per seat,” Hinchcliffe added. ... “Configuring assistants to intervene only where value is highest and choosing smaller, faster models for common tasks and saving large-model calls for edge cases could bring down expenditure,” Hinchcliffe added.


AI agents need intent-based blockchain infrastructure

By integrating agents with intent-centric systems, however, we can ensure users fully control their data and assets. Intents are a type of building block for decentralized applications that give users complete control over the outcome of their transactions. Powered by a decentralized network of solvers, agentic nodes that compete to solve user transactions, these systems eliminate the complexity of the blockchain experience while maintaining user sovereignty and privacy throughout the process. ... Combining AI agents and intents will redefine the Web3 experience while keeping the space true to its core values. Intents bridge users and agents, ensuring the UX benefits users expect from AI while maintaining decentralization, sovereignty and verifiability. Intent-based systems will play a crucial role in the next phase of Web3’s evolution by ensuring agents act in users’ best interests. As AI adoption grows, so does the risk of replicating the problems of Web2 within Web3. Intent-centric infrastructure is the key to addressing both the challenges and opportunities that AI agents bring and is necessary to unlock their full potential. Intents will be an essential infrastructure component and a fundamental requirement for anyone integrating or considering integrating AI into DeFi. Intents are not merely a type of UX upgrade or optional enhancement. 
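The solver competition described here can be illustrated with a toy model (all names and numbers below are hypothetical): the user's intent states a desired outcome and a constraint, and the competing solvers are ranked only by the outcome they can deliver.

```python
# A user intent: the desired outcome and a constraint, not a transaction path.
intent = {"sell": ("ETH", 1.0), "want": "USDC", "min_out": 2450.0}

# What each competing solver quotes for fulfilling the intent.
solver_quotes = {
    "solver-a": 2440.0,   # below the user's minimum: rejected
    "solver-b": 2475.0,
    "solver-c": 2462.0,
}

# Only quotes that satisfy the user's constraint are valid; the best
# outcome for the user wins, which is the property intents enforce.
valid = {s: q for s, q in solver_quotes.items() if q >= intent["min_out"]}
winner = max(valid, key=valid.get)
```

The point of the sketch is the inversion of control: the user never chooses a route or a counterparty, only the outcome, and any solver that cannot meet the stated minimum is excluded before ranking.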


The future of software development: To what extent can AI replace human developers?

Rather than replacing developers, AI is transforming them into higher-level orchestrators of technology. The emerging model is one of human-AI collaboration, where machines handle the repetitive scaffolding and humans focus on design, strategy, and oversight. In this new world, developers must learn not just to write code, but to guide, prompt, and supervise AI systems. The skillset is expanding from syntax and logic to include abstraction, ethical reasoning, systems thinking, and interdisciplinary collaboration. In other words, AI is not making developers obsolete. It is making new demands on their expertise. ... This shift has significant implications for how we educate the next generation of software professionals. Beyond coding languages, students will need to understand how to evaluate AI-generated output, how to embed ethical standards into automated systems, and how to lead hybrid teams made up of both humans and machines. It also affects how organisations hire and manage talent. Companies must rethink job descriptions, career paths, and performance metrics to account for the impact of AI-enabled development. Leaders must focus on AI literacy, not just technical competence. Professionals seeking to stay ahead of the curve can explore free programs, such as The Future of Software Engineering Led by Emerging Technologies, which introduces the evolving role of AI in modern software development.


Open Data Fabric: Rethinking Data Architecture for AI at Scale

The first principle, unified data access, ensures that agents have federated real-time access across all enterprise data sources without requiring pipelines, data movement, or duplication. Unlike human users who typically work within specific business domains, agents often need to correlate information across the entire enterprise to generate accurate insights. ... The second principle, unified contextual intelligence, involves providing agents with the business and technical understanding to interpret data correctly. This goes far beyond traditional metadata management to include business definitions, domain knowledge, usage patterns, and quality indicators from across the enterprise ecosystem. Effective contextual intelligence aggregates information from metadata, data catalogs, business glossaries, business intelligence tools, and tribal knowledge into a unified layer that agents can access in real-time.  ... Perhaps the most significant principle involves establishing collaborative self-service. This is a significant shift as it means moving from static dashboards and reports to dynamic, collaborative data products and insights that agents can generate and share with each other. The results are trusted “data answers,” or conversational, on-demand data products for the age of AI that include not just query results but also the business context, methodology, lineage, and reasoning that went into generating them.


A Simple Shift in Light Control Could Revolutionize Quantum Computing

A research collaboration led by Vikas Remesh of the Photonics Group at the Department of Experimental Physics, University of Innsbruck, together with partners from the University of Cambridge, Johannes Kepler University Linz, and other institutions, has now demonstrated a way to bypass these challenges. Their method relies on a fully optical process known as stimulated two-photon excitation. This technique allows quantum dots to emit streams of photons in distinct polarization states without the need for electronic switching hardware. In tests, the researchers successfully produced high-quality two-photon states while maintaining excellent single-photon characteristics. ... “The method works by first exciting the quantum dot with precisely timed laser pulses to create a biexciton state, followed by polarization-controlled stimulation pulses that deterministically trigger photon emission in the desired polarization,” explain Yusuf Karli and Iker Avila Arenas, the study’s first authors. ... “What makes this approach particularly elegant is that we have moved the complexity from expensive, loss-inducing electronic components after the single photon emission to the optical excitation stage, and it is a significant step forward in making quantum dot sources more practical for real-world applications,” notes Vikas Remesh, the study’s lead researcher.


AI and the New Rules of Observability

The gap between "monitoring" and true observability is both cultural and technological. Enterprises haven't matured beyond monitoring because old tools weren't built for modern systems, and organizational cultures have been slow to evolve toward proactive, shared ownership of reliability. ... One blind spot is model drift, which occurs when data shifts, rendering the model's assumptions invalid. In 2016, Microsoft's Tay chatbot was a notable failure due to its exposure to shifting user data distributions. Infrastructure monitoring showed uptime was fine; only semantic observability of outputs would have flagged the model's drift into toxic behavior. Hidden technical debt or unseen complexity in code can undermine observability. In machine learning, or ML, systems, pipelines often fail silently, while retraining processes, feature pipelines and feedback loops create fragile dependencies that traditional monitoring tools may overlook. Another issue is "opacity of predictions." ... AI models often learn from human-curated priorities. If ops teams historically emphasized CPU or network metrics, the AI may overweigh those signals while downplaying emerging, equally critical patterns - for example, memory leaks or service-to-service latency. This can manifest as bias amplification, where the model becomes biased toward "legacy priorities" and blind to novel failure modes. Bias often mirrors reality.
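One concrete form the "semantic observability" above can take is a statistical drift check on a model's input distribution. The sketch below uses the Population Stability Index, a common drift metric; the feature ranges, bin count, and the conventional "above ~0.2 means drift" threshold are illustrative assumptions, not anything prescribed by the article.

```python
import math
import random

def psi(expected, actual, bins=5, lo=0.0, hi=1.0):
    """Population Stability Index between two samples of one feature:
    PSI = sum over bins of (a_i - e_i) * ln(a_i / e_i), where e_i and a_i
    are the bin fractions of the expected (training) and actual (live) data."""
    width = (hi - lo) / bins
    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # small floor avoids log(0) on empty bins
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.random() for _ in range(1000)]            # training-time feature
live_same = [random.random() for _ in range(1000)]        # live data, no drift
live_shift = [random.random() ** 3 for _ in range(1000)]  # live data, skewed low

# psi(train, live_same) stays near zero, while psi(train, live_shift)
# crosses the conventional ~0.2 alert threshold even though every
# infrastructure-level metric (uptime, latency) would look healthy.
```

A check like this runs alongside ordinary uptime monitoring, which is the point: the host can be perfectly healthy while the data feeding the model quietly stops resembling what it was trained on.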


Dynamic Integration for AI Agents – Part 1

An integration of components within AI differs from an integration between AI Agents. The former involves integration with known entities that form a deterministic model of information flow; the same applies to the inter-application, inter-system and inter-service transactions required by a business process at large. It is based on mapping business functionality and information (an architecture of the business in organisations) onto available IT systems, applications, and services. The latter shifts the integration paradigm, since the AI Agents themselves decide at runtime that they need to integrate with something, based on the overlap between the statistical LLM and available information, which contains linguistic ties unknown even in the LLM training. That is, an AI Agent does not know which counterpart — an application, another AI Agent or a data source — it will need to cooperate with to solve the overall task given to it by its consumer/user. The AI Agent does not even know whether the needed counterpart exists. ... Any AI Agent may have its individual owner and provider. These owners and providers may be unaware of each other and act independently when creating their AI Agents. No AI Agent can be self-sufficient, due to its fundamental design — it depends on prompts and real-world data at runtime. It also appears that integration approaches and solutions differ between the humanitarian and natural-science spheres.


Counteracting Cyber Complacency: 6 Security Blind Spots for Credit Unions

Organizations that conduct only basic vendor vetting lack visibility into the cybersecurity practices of their vendors’ subcontractors. This creates gaps in oversight that attackers can exploit to gain access to an institution’s data. Third-party providers often have direct access to critical systems, making them an attractive target. When they’re compromised, the consequences quickly extend to the credit unions they serve. ... Cybercriminals continue to exploit employee behavior as a primary entry point into financial institutions. Social engineering tactics — such as phishing, vishing, and impersonation — bypass technical safeguards by manipulating people. These attacks rely on trust, familiarity, or urgency to provoke an action that grants the attacker access to credentials, systems, or internal data. ... Many credit unions deliver cybersecurity training on an annual schedule or only during onboarding. These programs often lack depth, fail to differentiate between job functions, and lose effectiveness over time. When training is overly broad or infrequent, staff and leadership alike may be unprepared to recognize or respond to threats. The risk is heightened when the threats are evolving faster than the curriculum. TruStage advises tailoring cyber education to the institution’s structure and risk profile. Frontline staff who manage member accounts face different risks than board members or vendors. 

Daily Tech Digest - August 30, 2025


Quote for the day:

“Let no feeling of discouragement prey upon you, and in the end you are sure to succeed.” -- Abraham Lincoln


Ransomware has evolved – so must our defences

Traditional defences typically monitor north-south traffic (from inside to outside the network), missing the lateral movement that characterises today’s threats. By monitoring internal traffic flows, privileged account behaviour and unusual data transfers, organisations gain the ability to identify suspicious actions in real time and contain threats before they escalate to ransomware deployment or public extortion. The ransomware attack on NASCAR illustrates this breakdown. Attackers from the Medusa ransomware group infiltrated the network using stolen credentials and quietly exfiltrated sensitive user data before launching a broader extortion campaign. Because these internal activities weren’t spotted early, the attack matured to a point of public disclosure, operational disruption and reputational harm. ... The emergence of triple extortion and the increasing sophistication of threat actors indicate that ransomware has entered a new phase. It is no longer solely about file encryption; it is about leveraging every available vector to apply maximum pressure on victims. Organisations must respond accordingly. Relying exclusively on prevention is no longer viable. Detection and response must be prioritised equally. This demands a strategic investment in technologies that provide real-time visibility, contextual insight and adaptive response capabilities.


Proof-of-Concept in 15 Minutes? AI Turbocharges Exploitation

The project, which the researchers dubbed Auto Exploit, is not the first to use LLMs for automated vulnerability research and exploit development. NVIDIA, for example, created Agent Morpheus, a generative AI application that scans for vulnerabilities and creates tickets for software developers to fix the issues. Google uses an LLM dubbed Big Sleep to find software flaws in open source projects and suggest fixes. ... The Auto Exploit program shows that the ongoing development of LLM-powered software analysis and exploit generation will lead to the regular creation of proof-of-concept code in hours, not months, weeks, or even days. The median time-to-exploitation of a vulnerability in 2024 was 192 days, according to data from VulnCheck. ... Overall, the fast pace of research and quick adoption of AI tools by threat actors means that defenders do not have much time, says Khayet. In 2024, nearly 40,000 vulnerabilities were reported, but only 768 — or less than 0.2% — were exploited. If AI-augmented exploitation becomes a reality, and vulnerabilities are not only exploited faster but more widely, defenders will truly be in trouble. "We believe that exploits at machine speed demand defense at machine speed," he says. "You have to be able to create some sort of a defense as early as 10 minutes after the CVE is released, and you have to expedite, as much as you can, the fixing of the actual library or the application."


How being "culturally fit" is essential for effective hiring

The evaluation process doesn't end at hiring—it continues throughout the probation period, making it a crucial phase for assessing cultural alignment. Effectively utilising this time helps identify potential cultural mismatches early on, allowing for timely course correction. Tools like scorecards, predefined benchmarks, and culturally responsive assessment tests help minimise bias while ensuring a fair evaluation. ... First, leadership accountability must be strengthened by embedding cultural values in KPIs and performance reviews, ensuring managers are assessed on their ability to model and enforce them. Alongside this, equipping leaders with the necessary training and situational guidance can further reinforce these standards in daily interactions. Additionally, blending recognition and rewards with culture—through incentives, peer recognition programmes, and public appreciation—encourages employees to embody the company's ethos. Open communication channels like pulse surveys, town halls, and anonymous reporting help organisations address concerns effectively. Most importantly, leaders must lead by example, actively participating in cultural initiatives and making transparent decisions that reinforce company ideals. This will strengthen cultural alignment, leading to higher employee satisfaction and greater organisational success.


AI drives content surge but human creativity sets brands apart

The report underlines that skilled human input is still regarded as critical to content quality and audience trust. Survey results illustrate consumer reluctance to embrace content that is fully AI-generated: over 70% of readers, 60% of music listeners, and nearly 60% of video viewers in the US are less likely to engage with content if it is known to be produced entirely by AI. Bain suggests that media companies could use the "human created" label as a point of differentiation in the crowded market, in a manner similar to how "fair trade" has been used for consumer goods. Established franchises and intellectual property (IP) are viewed as important assets, with Bain noting that familiarity and trust in brands continue to guide audience choices, both in music and visual media. ... The report also reviews how monetisation models are being affected by these changes. While core methods, such as subscription tiers and digital advertising, remain largely stable, there is emerging potential in areas like hyper-personalisation and fan engagement - using data and AI to deliver exclusive content or branded experiences. Integrations across media and retail sectors, shoppable content, and more immersive ad formats are also identified as growth opportunities. ... Bain concludes that although the "flooded era" of AI-assisted content poses operational and strategic challenges, creative differentiation will be significant for success.


The CISO succession crisis: why companies have no plan and how to change that

Taking on the cybersecurity leader role is not just about individual skills; the way many companies are structured keeps mid-level security leaders from getting the experience they’d need to move into a CISO role. Myers points to several systemic problems that make effective succession planning tough. “For a lot of cases, the CISO role for the top job is still pretty varied within the organization, whether they’re reporting to the CIO, the CFO, or the CEO,” she explains. “That limits the strategic visibility and influence, which means that the number two doesn’t really get the executive exposure or board-level engagement needed to really step into that role.” The issue gets worse because of the way companies are set up, according to Myers. CISOs often oversee a wide range of responsibilities: risk, compliance, governance, vendors, data privacy and crisis management. But cyber teams are usually lean and split into narrow functions, so most deputies only see a piece of the picture. ... Board experience presents another significant barrier. “The CISO has to have board experience, especially depending on the industry or the type of company and their ownership structure. That’s pretty critical,” Myers says. “That’s a hard thing to just walk into on day one and have that credibility and trust without having had the opportunity to develop it throughout your tenure.”


Forget data labeling: Tencent’s R-Zero shows how LLMs can train themselves

The idea behind self-evolving LLMs is to create AI systems that can autonomously generate, refine, and learn from their own experiences. This offers a scalable path toward more intelligent and capable AI. However, a major challenge is that training these models requires large volumes of high-quality tasks and labels, which act as supervision signals for the AI to learn from. Relying on human annotators to create this data is not only costly and slow but also creates a fundamental bottleneck. It effectively limits an AI’s potential capabilities to what humans can teach it. To address this, researchers have developed label-free methods that derive reward signals directly from a model’s own outputs, for example, by measuring its confidence in an answer. While these methods eliminate the need for explicit labels, they still rely on a pre-existing set of tasks, thereby limiting their applicability in truly self-evolving scenarios. ... “What we found in a practical setting is that the biggest challenge is not generating the answers… but rather generating high-quality, novel, and progressively more difficult questions,” Huang said. “We believe that good teachers are far rarer than good students. The co-evolutionary dynamic automates the creation of this ‘teacher,’ ensuring a steady and dynamic curriculum that pushes the Solver’s capabilities far beyond what a static, pre-existing dataset could achieve.”


There's a Stunning Financial Problem With AI Data Centers

Underlying the broader, often poorly-defined AI tech are data centers, which are vast warehouses stuffed to the brim with specialized chips that transform energy into computational power, thus making all your Grok fact checks possible. The economics of data centers are fuzzy at best, as the ludicrous amount of money spent building them makes it difficult to get a full picture. In less than two years, for example, Texas revised its fiscal year 2025 cost projection on private data center projects from $130 million to $1 billion. ... In other words, new data centers have a very tiny runway in which to achieve profits that currently remain way out of reach. By Kupperman's projections, a brand new data center will quickly become a ship of Theseus made up of some of the most expensive technology money can buy. If a new data center doesn't start raking in mountains of cash ASAP, the cost to maintain its aging parts will rapidly overtake the revenue it can bring in. Given the current rate at which tech companies are spending money without much return — a long-term bet that AI will all but make human labor obsolete — Kupperman estimates that revenue would have to increase ten-fold just to break even. Anything's possible, of course, but it doesn't seem like a hot bet. "I don’t see how there can ever be any return on investment given the current math," he wrote.
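The shape of this argument can be sketched with back-of-the-envelope arithmetic. Every number below is hypothetical, chosen only to illustrate the dynamic, not taken from the article: when short-lived hardware depreciates faster than the revenue margin can repay the build-out, break-even never arrives before the chips age out.

```python
# All figures are illustrative assumptions, not reported numbers.
capex = 1_000_000_000              # $1B build-out
accelerator_life_years = 4         # assumed useful life of the chips
short_lived_share = 0.7            # fraction of capex in fast-aging hardware
annual_depreciation = capex * short_lived_share / accelerator_life_years
annual_opex = 120_000_000          # power, staff, maintenance (assumed)
annual_cost = annual_depreciation + annual_opex

def years_to_break_even(annual_revenue):
    """Years of margin needed to repay the build-out; infinite if revenue
    never even covers depreciation plus operating cost."""
    margin = annual_revenue - annual_cost
    return float('inf') if margin <= 0 else capex / margin
```

Under these assumed figures, even $500M a year in revenue leaves a payback period longer than the chips' useful life, which is the "Theseus' ship" problem in miniature: the replacement bill comes due before the original investment is repaid.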


Employee retention: 7 strategies for retaining top talent

Smith doesn’t wait for high performers on his IT team to seek out challenges or promotions; rather, department leaders reach out to discuss what the company can offer to keep them engaged, interested, and fulfilled at work. That may mean quickly promoting them to new positions or offering them new work with a more senior title, Smith says, explaining that “if we don’t give them more interesting work, they’ll find it elsewhere.” ... Ewles endorses that kind of proactive engagement. She also advises organizations to conduct stay interviews to learn what keeps workers at the organization, and she recommends doing flight risk assessments to identify which workers are likely to leave and how to make them want to stay. “Those can be key differentiators in retaining top talent,” she adds. ... CIOs who want to retain them need to give them more opportunities where they are, she adds. ... Similarly, Anthony Caiafa, who as CTO of SS&C Technologies also has CIO responsibilities, directs interesting work to the high performers on his IT team, saying that they’re “easier to keep if you’re providing them with complex problems to solve.” That, he notes, is in addition to good compensation, mentoring, training, and advancement opportunities. ... Knowing they’re contributing something of value is part of a good retention policy, says Sibyl McCarley, chief people officer at tech company Hirevue.


Challenging Corporate Cultures That Limit Strategic Innovation

A thriving innovation culture requires that companies shift away from rigid, top-down hierarchies in favor of more flexible structures with accessible leaders where communication flows freely up and down the chain of command and across functional groups. Such changes make innovation a more accessible process for employees, prevent communication breakdowns, and streamline decision-making. ... All successful companies enjoy explosive periods of growth as represented by the steep part of the S-curve. When that growth starts to level off, the company is enjoying much success and generating much cash. It is at this point that management teams get comfortable, enjoying the momentum of their success. This is precisely when they should start to become uncomfortable and alert to new innovation possibilities. ... There is a natural tendency to avoid risk, but risk is an essential component of strategic innovation. The key is attacking that risk through the use of intelligent failure—failure that happens with a purpose and provides the insights needed for success. When implementing a major innovation initiative, intelligent failure is an essential part of systematically reducing the most critical risks—the risks that can cause the entire initiative to fail. Success comes from attacking the biggest risks first, addressing fundamental uncertainties early, and taking bite-sized risks through incremental proof-of-concept steps.


Building Real Enterprise AI Agents With Apache Flink

The common approach today is to stitch together a patchwork of disconnected systems: one for data streaming (like Apache Kafka), another for workflow orchestration, one for aggregating all the possible contextual data the agent might need and a separate application runtime for the agent’s logic. This “stitching” approach creates a system that is both operationally complex and technically fragile. Engineers are left managing a brittle architecture where data is handed off between systems, introducing significant latency at each step. This process often relies on polling or micro-batching, meaning the agent is always acting on slightly stale data. ... While Flink provides the perfect engine, the community recognized the need for better native support for agent-specific workflows. This led to Streaming Agents, designed to make Flink the definitive platform for building agents. Crucially, this is not another tool to stitch into your stack. It’s a native framework that directly extends Flink’s own DataStream and Table APIs, making agent development a first-class citizen within the Flink ecosystem. This native approach unlocks the most powerful benefit: the seamless integration of data processing and AI. Before, an engineer might have one Flink job to enrich data, which then writes to a message queue for a separate Python service to apply the AI logic. 
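The contrast between stitching and a native pipeline can be shown with a toy generator pipeline in plain Python. This is an analogy, not the Streaming Agents API, and the events and rules are invented for illustration: enrichment and agent logic run in one pass over the stream, so there is no queue hand-off between them and no polling on stale data.

```python
def enrich(stream):
    """Stage 1: join each event with context (a Flink map/join would do
    this over live streams rather than an in-memory dict)."""
    context = {"acct-1": "gold", "acct-2": "new"}
    for event in stream:
        yield {**event, "tier": context.get(event["account"], "unknown")}

def agent_step(stream):
    """Stage 2: agent logic runs in the same pass, so each decision is
    made on the event as it arrives rather than on a polled snapshot."""
    for event in stream:
        action = "escalate" if event["tier"] == "new" and event["amount"] > 100 else "allow"
        yield {**event, "action": action}

events = [{"account": "acct-1", "amount": 250},
          {"account": "acct-2", "amount": 250}]
# Fusing the stages is the whole point: one pipeline, no intermediate queue.
decisions = list(agent_step(enrich(iter(events))))
# decisions[0]["action"] == "allow"; decisions[1]["action"] == "escalate"
```

In the stitched architecture the article criticizes, stage 1 and stage 2 would live in separate systems with a message queue between them, and stage 2 would act on whatever the queue held at its last poll; fusing them into one job removes that hand-off latency entirely.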

Daily Tech Digest - August 29, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


The incredibly shrinking shelf life of IT solutions

“Technology cycles are spinning faster and faster, and some solutions are evolving so fast that they’re now a year-long bet, not a three- or five-year bet for CIOs,” says Craig Kane ... “We are living in a period of high user expectations. Every day is a newly hyped technology, and CIOs are constantly being asked how can we, the company, take advantage of this new solution,” says Boston Dynamics CIO Chad Wright. “Technology providers can move quicker today with better development tools and practices, and this feeds the demand that customers are creating.” ... Not every CIO is switching out software as quickly as that, and Taffet, Irish, and others say they’re certainly not seeing the shelf life for all software and solutions in their enterprise shrink. Indeed, many vendors are updating their applications with new features and functions to keep pace with business and market demands — updates that help extend the life of their solutions. And core solutions generally aren’t turning over any more quickly today than they did five or 10 years ago, Kearney’s Kane says. ... Montgomery says CIOs and business colleagues sometimes think the solutions they have in place are falling behind market innovations and, as a result, their business will fall behind, too. That may be the case, but they may just be falling for marketing hype, she says. Montgomery also cites the fast pace of executive turnover as contributing to the increasingly short shelf life of IT solutions.


Resiliency in Fintech: Why System Design Matters More Than Ever

Cloud computing has transformed fintech. What once took months to provision can now be spun up in hours. Auto-scaling, serverless computing, and global distribution have enabled firms to grow without massive upfront infrastructure costs. Yet, cloud also changes the resilience equation. Outages at major CSPs — rare but not impossible — can cascade across entire industries. The Financial Stability Board (FSB) has repeatedly warned about “cloud concentration risk.” Regulators are exploring frameworks for oversight, including requirements for firms to maintain exit strategies or multi-cloud approaches. For fintech leaders, the lesson is clear: cloud-first doesn’t mean resilience-last. Building systems that are cloud-resilient (and in some cases cloud-agnostic) is becoming a strategic priority. ... Recent high-profile outages underline the stakes. Trading platforms freezing during volatile markets, digital banks leaving customers without access to funds, and payment networks faltering during peak shopping days all illustrate the cost of insufficient resilience. ... Innovation remains the lifeblood of fintech. But as the industry matures, resilience has become the new competitive differentiator. The firms that win will be those that treat system design as risk management, embedding high availability, regulatory compliance, and cloud resilience into their DNA. In a world where customer trust can be lost in minutes, resilience is not just good engineering.


AI cost pressures fuelling cloud repatriation

IBM thinks AI will present a bigger challenge than the cloud because it will be more pervasive with more new applications being built on it. Consequently, IT leaders are already nervous about the cost and value implications and are looking for ways to get ahead of the curve. Repeating the experience of cloud adoption, AI is being driven by business teams, not by back-office IT. AI is becoming a significant driver for shifting workloads back to private, on-premise systems. This is because data becomes the most critical asset, and Patel believes few enterprises are ready to give up their data to a third party at this stage. ... The cloud is an excellent platform for many workloads, just as there are certain workloads that run extremely well on a mainframe. The key is to understand workload placement: is my application best placed on a mainframe, on a private cloud or on a public cloud? As they start their AI journey, some of Apptio’s customers are not ready for their models, learning and intelligence – their strategic intellectual property – to sit in a public cloud. There are consequences when things go wrong with data, and those consequences can be severe for the executives concerned. So, when a third party suggests putting all of the customer, operational and financial data in one place to gain wonderful insights, some organisations are unwilling to do this if the data is outside their direct control. 


Finding connection and resilience as a CISO

To create stronger networks among CISOs, security leaders can join trusted peer groups like industry ISACs (Information Sharing and Analysis Centers) or associations within shared technology / compliance spaces like cloud, GRC, and regulatory. The protocols and procedures in these groups ensure members can have meaningful conversations without putting them or their organization at risk. ... Information sharing operates in tiers, each with specific protocols for data protection. Top tiers, involving entities like ISACs, the FBI, and DHS, have established protocols to properly share and safeguard confidential data. Other tiers may involve information and intelligence already made public, such as CVEs or other security disclosures. CISOs and their teams may seek assistance from industry groups, partnerships, or vendors to interpret current Indicators of Compromise (IOCs) and other remediation elements, even when public. Continuously improving vendor partnerships is crucial for managing platforms and programs, as strong partners will be familiar with internal operations while protecting sensitive information. ... Additionally, encouraging a culture of continuous learning and development, not just with the security team but broader technology and product teams, will empower employees, distribute expertise, and grow a more resilient and adaptable workforce.


Geopolitics is forcing the data sovereignty issue and it might just be a good thing

At London Tech Week recently, UK Prime Minister Keir Starmer said that the way that war is being fought “has changed profoundly,” adding that technology and AI are now “hard wired” into national defense. It was a stark reminder that IT infrastructure management must now be viewed through a security lens and that businesses need to re-evaluate data management technologies and practices to ensure they are not left out in the cold. ... For many, public cloud services have created a false sense of flexibility. Moving fast is not the same as moving safely. Data localization, jurisdictional control, and security policy alignment are now critical to long-term strategy, not barriers to short-term scale. So where does that leave enterprise IT? Essentially, it leaves us with a choice: design for agility with control, or face disruption when the rules change. ... Sovereignty-aware infrastructure isn’t about isolation. It’s about knowing where your data is, who can access it, how it moves, and what policies govern it at each stage. That means visibility, auditability, and the ability to adjust without rebuilding every time a new compliance rule appears. A hybrid multicloud approach gives organizations that flexibility while keeping data governance central. It’s not about locking into one cloud provider or building everything on-prem.
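"Knowing where your data is and what policies govern it" can be made operational with a default-deny residency gate checked before any cross-region replication. This is an illustrative sketch; the data classes, region names, and policy table are assumptions, not taken from any real framework.

```python
# Hypothetical residency policy: each data class lists the regions it
# may legally reside in. Unclassified data is denied by default.
RESIDENCY_POLICY = {
    "customer_pii": {"allowed_regions": {"eu-west", "eu-central"}},
    "telemetry":    {"allowed_regions": {"eu-west", "us-east", "ap-south"}},
}

def can_replicate(dataset_class: str, target_region: str) -> bool:
    """Return True only if the target region is allowed for this data class."""
    policy = RESIDENCY_POLICY.get(dataset_class)
    if policy is None:
        return False  # default-deny: data without a classification does not move
    return target_region in policy["allowed_regions"]
```

A gate like this is what lets teams "adjust without rebuilding": when a compliance rule changes, only the policy table is edited, not the replication pipeline.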


Recalibrating Hybrid Cloud Security in the Age of AI: The Need for Deep Observability

As AI further fuels digital transformation, the security landscape of hybrid cloud infrastructures is becoming more strained. As such, security leaders are confronting a paradox. Cloud environments are essential for scaling operations, but they also present new attack vectors. ... Amid these challenges, some organisations are realising that their traditional security tools are insufficient. The lack of visibility into hybrid cloud environments is identified as a core issue, with 60 percent of Australian leaders expressing a lack of confidence in their current tools to detect breaches effectively. The call for "deep observability" has never been louder. The research underscores the need for a comprehensive, real-time view into all data in motion across the enterprise to improve threat detection and response. Deep observability, combining metadata, network packets, and flow data, has become a cornerstone of hybrid cloud security strategies. It provides security teams with actionable insights into their environments, allowing them to spot potential threats in real time. In fact, 89 percent of survey respondents agree that deep observability is critical to securing AI workloads and managing complex hybrid cloud infrastructures. Being proactive with this approach is seen as a vital way to bridge the visibility gap and ensure comprehensive security coverage across hybrid cloud environments.
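The idea of combining flow data with metadata can be sketched in a few lines: enrich each network flow record with asset context, then flag combinations that look suspicious. Field names, the asset inventory, and the threshold are assumptions for illustration only.

```python
# Hypothetical asset inventory keyed by internal IP (the "metadata" layer).
ASSET_METADATA = {
    "10.0.1.5": {"role": "db", "env": "prod"},
    "10.0.2.9": {"role": "web", "env": "prod"},
}

EGRESS_BYTES_THRESHOLD = 50_000_000  # flag flows above ~50 MB (illustrative)

def flag_flows(flows):
    """Enrich flow records with asset metadata and flag suspicious egress."""
    alerts = []
    for f in flows:
        meta = ASSET_METADATA.get(f["src"], {})
        enriched = {**f, **meta}
        # A production database pushing a large volume of data to an
        # external address is the kind of signal flow data alone misses.
        is_external = not f["dst"].startswith("10.")
        if meta.get("role") == "db" and is_external \
                and f["bytes"] > EGRESS_BYTES_THRESHOLD:
            alerts.append(enriched)
    return alerts

# Simplified flow records (source, destination, bytes transferred).
flows = [
    {"src": "10.0.1.5", "dst": "203.0.113.80", "bytes": 120_000_000},
    {"src": "10.0.2.9", "dst": "10.0.1.5", "bytes": 80_000_000},
]
```

Only the first flow is flagged: the same byte count between two internal hosts is routine, which is exactly the point of enriching flows with context before alerting.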


Financial fraud is widening its clutches—Can AI stay ahead?

Today, organised crime groups are running call centres staffed with human trafficking victims. These victims execute “romance baiting” schemes that combine emotional manipulation with investment fraud. The content they use? AI-generated. The payments they request? ... Fraud attempts rose significantly in a single quarter after COVID hit, and traditional detection methods fell apart. This is why modern fraud detection systems had to evolve. Now, these systems can analyse thousands of transactions per minute, assigning risk scores that update in real time. There was no choice. Staying with the old regime of anti-fraud systems was no longer an option when static rules became obsolete almost overnight. ... The real problem isn’t the technology itself. It’s the pace of adoption by bad actors. Stop Scams UK found something telling: While banks have limited evidence of large-scale AI fraud today, technology companies are already seeing fake AI-generated content and profiles flooding their platforms. ... When AI systems learn from historical data that reflects societal inequalities, they can perpetuate discrimination under the guise of objective analysis. Banks using biased training data have inadvertently created systems that disproportionately flag certain communities for additional scrutiny. This creates moral problems alongside operational and legal risks.
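A risk score "that updates in real time" is typically driven by rolling per-account state, such as transaction velocity in a short window. Here is a minimal sketch under stated assumptions: the window length, weights, and score formula are invented for illustration, not a real bank's model.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
# Per-account rolling state: a deque of (timestamp, amount) pairs.
state = defaultdict(deque)

def risk_score(account_id, amount, ts=None):
    """Score one incoming transaction using a rolling 60-second window."""
    ts = ts if ts is not None else time.time()
    window = state[account_id]
    # Evict transactions that have fallen out of the window.
    while window and ts - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    window.append((ts, amount))

    velocity = len(window)             # transactions in the window
    total = sum(a for _, a in window)  # amount moved in the window
    # Illustrative weighting: bursts of activity dominate the score,
    # capped at 100 so downstream rules can use a fixed scale.
    return min(100, velocity * 10 + total / 1000)
```

A burst of transactions pushes the score up immediately, which is what static rule sets could not do when fraud patterns shifted overnight.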


Data security and compliance are non-negotiable in any cloud transformation journey

Enterprises today operate in a data-intensive environment that demands modern infrastructure, built for speed, intelligence, and alignment with business outcomes. Data modernisation is essential to this shift. It enables real-time processing, improves data integrity, and accelerates decision-making. When executed with purpose, it becomes a catalyst for innovation and long-term growth. ... The rise of generative AI has transformed industries by enhancing automation, streamlining processes, and fostering innovation. According to a recent NASSCOM report, around 27% of companies already have AI agents in production, while another 31% are running pilots. ... Cloud has become the foundation of digital transformation in India, driving agility, resilience, and continuous innovation across sectors. Kyndryl is expanding its capabilities in the market to support this momentum. This includes strengthening our cloud delivery centres and expanding local expertise across hyperscaler platforms. ... Strategic partnerships are central to how we co-innovate and deliver differentiated outcomes for our clients. We collaborate closely with a broad ecosystem of technology leaders to co-create solutions that are rooted in real business needs. ... Enterprises in India are accelerating their cloud journeys, demanding solutions that combine hyperscaler innovation with deep enterprise expertise. 


Digital Transformation Strategies for Enterprise Architects

Customer experience must be deliberately architected to deliver relevance, consistency, and responsiveness across all digital channels. Enterprise architects enable this by building composable service layers that allow marketing, commerce, and support platforms to act on a unified view of the customer. Event-driven architectures detect behavior signals and trigger automated, context-aware experiences. APIs must be designed to support edge responsiveness while enforcing standards for security and governance. ... Handling large datasets at the enterprise level requires infrastructure that treats metadata, lineage, and ownership as first-class citizens. Enterprise architects design data platforms that surface reliable, actionable insights, built on contracts that define how data is created, consumed, and governed across domains. Domain-oriented ownership via data mesh ensures accountability, while catalogs and contracts maintain enterprise-wide discoverability. ... Architectural resilience starts at the design level. Modular systems that use container orchestration, distributed tracing, and standardized service contracts allow for elasticity under pressure and graceful degradation during failure. Architects embed durability into operations through chaos engineering, auto-remediation policies, and blue-green or canary deployments. 
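The "contracts that define how data is created, consumed, and governed" can be as simple as a schema every producer must satisfy before an event crosses a domain boundary. The sketch below is illustrative; the event fields and contract shape are assumptions, and real deployments would use a schema registry rather than an inline dict.

```python
# Hypothetical data contract for an order event shared across domains.
ORDER_EVENT_CONTRACT = {
    "order_id": str,
    "customer_id": str,
    "amount_cents": int,
    "currency": str,
}

def validate(event: dict, contract: dict) -> list:
    """Return a list of contract violations (an empty list means valid)."""
    errors = []
    for field, expected_type in contract.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors
```

Rejecting malformed events at the boundary is what keeps domain-oriented ownership honest: the producing team is accountable for the contract, and consumers can rely on it.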


Unchecked and unbound: How Australian security teams can mitigate Agentic AI chaos

Agentic AI systems are collections of agents working together to accomplish a given task with relative autonomy. Their design enables them to discover solutions and optimise for efficiency. The result is that AI agents are non-deterministic and may behave in unexpected ways when accomplishing tasks, especially when systems interoperate and become more complex. As AI agents seek to perform their tasks efficiently, they will invent workflows and solutions that no human ever considered. This will produce remarkable new ways of solving problems, and will inevitably test the limits of what's allowable. The emergent behaviours of AI agents, by definition, exceed the scope of any rules-based governance because we base those rules on what we expect humans to do. By creating agents capable of discovering their own ways of working, we're opening the door to agents doing things humans have never anticipated. ... When AI agents perform actions, they act on behalf of human users or use an identity assigned to them based on a human-centric AuthN and AuthZ system. That complicates the process of answering formerly simple questions, like: Who authored this code? Who initiated this merge request? Who created this Git commit? It also prompts new questions, such as: Who told the AI agent to generate this code? What context did the agent need to build it? What resources did the AI have access to?
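One practical answer to those new provenance questions is a structured audit record that ties each agent action back to the initiating human, the agent's own identity, and the resources it could reach. This is a hedged sketch: the field names are assumptions, not an established standard.

```python
import json
from datetime import datetime, timezone

def agent_audit_record(human_user, agent_id, action, resources, prompt_ref):
    """Build a provenance record for one AI-agent action."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "initiated_by": human_user,   # answers: who told the agent to act?
        "agent_identity": agent_id,   # the non-human identity that acted
        "action": action,             # e.g. "git_commit", "merge_request"
        "resources": resources,       # answers: what did the agent have access to?
        "prompt_ref": prompt_ref,     # pointer to the instruction/context used
    }

# Hypothetical example: an agent committing generated code on Alice's behalf.
record = agent_audit_record(
    "alice@example.com", "agent:codegen-7", "git_commit",
    ["repo:payments"], "ticket/PAY-123",
)
log_line = json.dumps(record)  # ship to the audit log as structured JSON
```

With records like this, "who authored this code?" splits cleanly into the agent identity that produced it and the human instruction that initiated it, which is the distinction human-centric AuthN/AuthZ systems blur.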