Daily Tech Digest - September 06, 2025


Quote for the day:

"Average leaders raise the bar on themselves; good leaders raise the bar for others; great leaders inspire others to raise their own bar." -- Orrin Woodward


Why Most AI Pilots Never Take Flight

The barrier is not infrastructure, regulation or talent but what the authors call a "learning gap." Most enterprise AI systems cannot retain memory, adapt to feedback or integrate into workflows. Tools work in isolation, generating content or analysis in a static way, but fail to evolve alongside the organizations that use them. For executives, the result is a sea of proofs of concept with little business impact. "Chatbots succeed because they're easy to try and flexible, but fail in critical workflows due to lack of memory and customization," the report said. Many pilots never survive this transition, Mina Narayanan, research analyst at the Center for Security and Emerging Technology, told Information Security Media Group. ... The implications of this shadow economy are complex. On one hand, it shows clear employee demand, as workers gravitate toward flexible, responsive and familiar tools. On the other, it exposes enterprises to compliance and security risks. Corporate lawyers and procurement officers interviewed in the report admitted they rely on ChatGPT for drafting or analysis, even when their firms purchased specialized tools costing tens of thousands of dollars. When asked why they preferred consumer tools, their answers were consistent: ChatGPT produced better outputs, was easier to iterate with and required less training. "Our purchased AI tool provided rigid summaries with limited customization options," one attorney told the researchers.


Breaking into cybersecurity without a technical degree: A practical guide

Think of cybersecurity as a house. While penetration testers and security engineers focus on building stronger locks and alarm systems, GRC professionals ensure the house has strong foundations and insurance policies and meets all building regulations. ... Governance involves creating and maintaining the policies, procedures and frameworks that guide an organisation’s security decisions. Risk management focuses on identifying potential threats, assessing their likelihood and impact, then developing strategies to mitigate or accept those risks. ... Certifications alone will not land you a role, a point most people wanting to take this path fail to appreciate. Understanding key frameworks provides the practical knowledge that makes certifications meaningful. ISO 27001, the international standard for information security management systems, appears in most GRC job descriptions. I spent considerable time learning not only what ISO 27001 requires, but how organizations implement its controls in practice. The NIST Cybersecurity Framework (CSF) deserves equal attention. NIST CSF’s six core functions — govern, identify, protect, detect, respond and recover — provide a logical structure for organising security programs that business stakeholders can understand. Personal networks proved more valuable than any job board or recruitment agency.


To Survive Server Crashes, IT Needs a 'Black Box'

Security teams utilize Security Information and Event Management (SIEM) systems, and DevOps teams have tracing tools. However, infrastructure teams still lack an equivalent tool: a continuously recorded, objective account of system interdependencies before, during, and after incidents. This is where Application Dependency Mapping (ADM) solutions come into play. ADM continuously maps the relationships between servers, applications, services, and external dependencies. Instead of relying on periodic scans or manual documentation, ADM offers real-time, time-stamped visibility. This allows IT teams to rewind their environment to any specific point in time, clearly identifying the connections that existed, which systems interacted, and how traffic flowed during an incident. ... Retrospective visibility is emerging as a key focus in IT infrastructure management. As hybrid and multi-cloud environments become increasingly complex, accurately diagnosing failures after they occur is essential for maintaining uptime, security, and business continuity. IT professionals must monitor systems in real time and learn how to reconstruct the complete story when failures happen. Similar to the aviation industry, which acknowledges that failures can occur and prepares accordingly, the IT sector must shift from reactive troubleshooting to a forensic-level approach to visibility.
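
To make the rewind concrete, here is a minimal sketch in Python of the recording pattern ADM tools implement at much larger scale: every observed dependency is stored with a time window, so the dependency graph at any instant during an incident can be reconstructed. The hosts and API shown are invented for illustration.

from dataclasses import dataclass

@dataclass
class Observation:
    start: float        # when the dependency was first observed (epoch seconds)
    end: float | None   # when it disappeared, or None if still active
    source: str         # e.g. "web-01"
    target: str         # e.g. "db-01:5432"

class DependencyRecorder:
    def __init__(self):
        self.observations: list[Observation] = []

    def record(self, source, target, start, end=None):
        self.observations.append(Observation(start, end, source, target))

    def snapshot(self, at: float) -> set:
        """Rewind: every source->target edge that was active at time `at`."""
        return {
            (o.source, o.target)
            for o in self.observations
            if o.start <= at and (o.end is None or at <= o.end)
        }

recorder = DependencyRecorder()
recorder.record("web-01", "db-01:5432", start=100.0)
recorder.record("web-01", "cache-01:6379", start=150.0, end=300.0)

# What did the environment look like mid-incident, at t=200?
print(recorder.snapshot(at=200.0))
# both edges: {("web-01", "db-01:5432"), ("web-01", "cache-01:6379")}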


Vibe coding with GitHub Spark

The GitHub Spark development space is a web application with three panes. The middle one is for code, the right one shows the running app (and animations as code is being generated), and the left one contains a set of tools. These tools offer a range of functions, first letting you see your prompts and skip back to older ones if you don’t like the current iteration of your application. An input box allows you to add new prompts that iterate on your current generated code, with the ability to choose a screenshot or change the current large language model (LLM) being used by the underlying GitHub Copilot service. I used the default choice, Anthropic’s Claude 3.5 Sonnet. As part of this feature, GitHub Spark displays a small selection of possible refinements that take concepts related to your prompts and suggest enhancements to your code. Other controls provide ways to change low-level application design options, including the current theme, font, or the style used for application icons. Additional design tools allow you to tweak the borders of graphical elements and the scaling factors used, and to pick an application icon for an install of your code based on Progressive Web Apps (PWAs). GitHub Spark has a built-in key/value store for application data that persists between builds and sessions. The toolbar lists the current keys and the data structures used for their values.
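
Spark surfaces that store to the generated app through its own JavaScript hooks, which are not reproduced here; purely as an illustration of the persistence semantics described, this is a minimal write-through key/value store in Python whose contents survive restarts.

import json
from pathlib import Path

class PersistentKV:
    def __init__(self, path="spark_store.json"):
        self.path = Path(path)
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def set(self, key, value):
        self.data[key] = value
        self.path.write_text(json.dumps(self.data, indent=2))  # write-through to disk

    def get(self, key, default=None):
        return self.data.get(key, default)

store = PersistentKV()
store.set("todos", [{"title": "ship demo", "done": False}])
print(store.get("todos"))  # still there after the process restarts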


Legacy IT Infrastructure: Not the Villain We Make It Out to Be

In the realm of IT infrastructure, legacy can often feel like a bad word. No one wants to be told their organization is stuck with legacy IT infrastructure because it implies that it's old or outdated. Yet, when you actually delve into the details of what legacy means in the context of servers, networking, and other infrastructure, a more complex picture emerges. Legacy isn't always bad. ... it's not necessarily the case that a system is bad, or in dire need of replacement, just because it fits the classic definition of legacy IT. There's an argument to be made that, in many cases, legacy systems are worth keeping around. For starters, most legacy infrastructure consists of tried-and-true solutions. If a business has been using a legacy system for years, that system has already proven itself a reliable investment. It may not be as optimal from a cost, scalability, or security perspective as a more modern alternative. But in some cases, this drawback is outweighed by the fact that — unlike a new, as-yet-unproven solution — legacy systems can be trusted to do what they claim to do because they've already been doing it for years. The fact that legacy systems have been around for a while also means that it's often easy to find engineers who know how to work with them. Hiring experts in the latest, greatest technology can be challenging, especially given the widespread IT talent shortage.



How to Close the AI Governance Gap in Software Development

Despite the advantages, only 42 percent of developers trust the accuracy of AI output in their workflows. In our observations, this should not come as a surprise – we’ve seen even the most proficient developers copying and pasting insecure code from large language models (LLMs) directly into production environments. These teams are under immense pressure to produce more lines of code faster than ever. Because security teams are also overworked, they aren’t able to provide the same level of scrutiny as before, causing overlooked and possibly harmful flaws to proliferate. The situation brings the potential for widespread disruption: BaxBench, a coding benchmark that evaluates LLMs for accuracy and security, has reported that LLMs are not yet capable of generating deployment-ready code. ... What’s more, they often lack the expertise – or don’t even know where to begin – to review and validate AI-generated code. This disconnect only further elevates their organization’s risk profile, exposing governance gaps. To keep everything from spinning out of control, chief information security officers (CISOs) must work with other organizational leaders to implement a comprehensive and automated governance plan that enforces policies and guardrails, especially within the repository workflow.
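
What one such guardrail can look like inside the repository workflow: a sketch of a pre-merge check that scans changed files against a security policy and fails the build on any match. The patterns are illustrative stand-ins for a real policy, not a complete rule set.

import re, sys

POLICY = [
    (re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"), "hardcoded credential"),
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
    (re.compile(r"\beval\s*\("), "use of eval()"),
]

def check(paths):
    violations = 0
    for path in paths:
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for pattern, reason in POLICY:
                if pattern.search(line):
                    print(f"{path}:{lineno}: {reason}")
                    violations += 1
    return violations

if __name__ == "__main__":
    # Run against the files changed in a pull request; a nonzero exit blocks the merge.
    sys.exit(1 if check(sys.argv[1:]) else 0)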


The Complexity Crisis: Why Observability Is the Foundation of Digital Resilience

End-to-end observability is evolving beyond its current role in IT and DevOps to become a foundational element of modern business strategy. In doing so, observability plays a critical role in managing risk, maintaining uptime, and safeguarding digital trust. Observability also enables organizations to proactively detect anomalies before they escalate into outages, quickly pinpoint root causes across complex, distributed systems, and automate response actions to reduce mean time to resolution (MTTR). The result is faster, smarter and more resilient operations, giving teams the confidence to innovate without compromising system stability, a critical advantage in a world where digital resilience and speed must go hand in hand. ... As organizations increasingly adopt generative and agentic AI to accelerate innovation, they also expose themselves to new kinds of risks. Agentic AI can be configured to act independently, making changes, triggering workflows, or even deploying code without direct human involvement. This level of autonomy can boost productivity, but it also introduces serious challenges. ... Tomorrow’s industry leaders will be distinguished by their ability to adopt and adapt to new technologies, embracing agentic AI but recognizing the heightened risk exposure and compliance burdens. Leaders will need to shift from reactive operations to proactive and preventative operations.
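
As a toy illustration of the anomaly-detection piece, the sketch below flags a metric sample that drifts several standard deviations from its rolling baseline. Production observability stacks use far richer models, but the mechanism is the same in spirit.

from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous versus the rolling baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

latency = AnomalyDetector()
for ms in [52, 49, 51, 50, 48, 53, 51, 50, 49, 52, 50, 51, 320]:
    if latency.observe(ms):
        print(f"alert: latency {ms}ms deviates from baseline")  # fires on 320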


AI and the end of proof

Fake AI images can lie. But people lie, too, saying real images are fake. Call it the ‘liar’s dividend.’ Call it a crisis of confidence. ... In 2019, when deepfake audio and video became a serious problem, legal experts Bobby Chesney and Danielle Citron came up with the term “liar’s dividend” to describe the advantage a dishonest public figure gets by calling real evidence “fake” in a time when AI-generated content makes people question what they see and hear. False claims of deepfakes can be just as harmful as real deepfakes during elections. ... The ability to make fakes will be everywhere, along with the growing awareness that visual information can be easily and convincingly faked. That awareness makes false claims that something is AI-made more believable. The good news is that Gemini 2.5 Flash Image stamps every image it makes or edits with a hidden SynthID watermark that still identifies the image as AI-made even after common changes like resizing, rotation, compression, or screenshot copies. Google says this ID system covers all outputs and ships with the new model across the Gemini API, Google AI Studio, and Vertex AI. SynthID for images changes pixels without being seen, but a paired detector can recognize it later, using one neural network to embed the pattern and another to spot it. The detector reports levels like “present,” “suspected,” or “not detected,” which is more helpful than a fragile yes/no that fails after small changes.
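
The graded report is the detail worth noticing, since it degrades gracefully where a binary verdict fails. As a sketch (the cutoffs are invented, not Google's):

def report(confidence: float) -> str:
    """Map a watermark detector's confidence in [0, 1] to a graded verdict."""
    if confidence >= 0.9:
        return "present"
    if confidence >= 0.5:
        return "suspected"
    return "not detected"

# A recompressed screenshot might land in the middle band instead of
# flipping a brittle yes/no answer to "no".
for score in (0.97, 0.62, 0.11):
    print(f"confidence={score:.2f} -> watermark {report(score)}")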


Beyond the benchmarks: Understanding the coding personalities of different LLMs

Though the models did have these distinct personalities, they also shared similar strengths and weaknesses. The common strengths were that they quickly produced syntactically correct code, had solid algorithmic and data structure fundamentals, and efficiently translated code to different languages. The common weaknesses were that they all produced a high percentage of high-severity vulnerabilities, introduced severe bugs like resource leaks or API contract violations, and had an inherent bias towards messy code. “Like humans, they become susceptible to subtle issues in the code they generate, and so there’s this correlation between capability and risk introduction, which I think is amazingly human,” said Fischer. Another interesting finding of the report is that newer models may be more technically capable, but are also more likely to generate risky code. ... In terms of security, high and low reasoning modes eliminate common attacks like path-traversal and injection, but replace them with harder-to-detect flaws, like inadequate I/O error-handling. ... “We have seen the path-traversal and injection become zero percent,” said Sarkar. “We can see that they are trying to solve one sector, and what is happening is that while they are trying to solve code quality, they are somewhere doing this trade-off. Inadequate I/O error-handling is another problem that has skyrocketed. ...”
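
The trade-off is easy to picture in code. In the hypothetical upload reader below, the path-traversal check is solid, but the first version converts every I/O failure into an empty string, which is exactly the inadequate error-handling class the report says has skyrocketed; the second version handles failures explicitly.

from pathlib import Path

BASE = Path("/srv/app/uploads")

def read_upload_bad(name: str) -> str:
    path = (BASE / name).resolve()
    if not path.is_relative_to(BASE):   # traversal check: good
        raise ValueError("path traversal attempt")
    try:
        return path.read_text()
    except OSError:
        return ""                       # inadequate: every I/O error becomes an "empty file"

def read_upload_better(name: str) -> str:
    path = (BASE / name).resolve()
    if not path.is_relative_to(BASE):
        raise ValueError("path traversal attempt")
    try:
        return path.read_text()
    except FileNotFoundError:
        raise KeyError(f"no such upload: {name}")
    except OSError as exc:
        raise RuntimeError(f"upload store unavailable: {exc}") from exc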


Agentic AI Isn’t a Product – It’s an Integrated Business Strategy

Any leader considering agentic AI should have a clear understanding of what it is (and what it’s not!), which can be difficult considering many organizations are using the term in different ways. To understand what makes the technology so transformative, I think it’s helpful to contrast it with the tools many manufacturers are already familiar with. ... Agentic AI doesn’t just help someone do a task. It owns that task, end-to-end, like a trusted digital teammate. If a traditional AI solution is like a dashboard, agentic AI is more like a co-worker who has deep operational knowledge, learns fast, doesn’t need a break and knows exactly when to ask for help. This is also where misconceptions tend to creep in. Agentic AI isn’t a chatbot with a nicer interface that happens to use large language models, nor is it a one-size-fits-all product that slots in after implementation. It’s a purpose-built, action-oriented intelligence that lives inside your operations and evolves with them. ... Agentic AI isn’t a futuristic technology, either. It’s here and gaining momentum fast. According to Capgemini, the number of organizations using AI agents has doubled in the past year, with production-scale deployments expected to reach 48% by 2025. The technology’s adoption trajectory is a sharp departure from traditional AI technologies.

Daily Tech Digest - September 05, 2025


Quote for the day:

"Little minds are tamed and subdued by misfortune; but great minds rise above it." -- Washington Irving


Understanding Context Engineering: Principles, Practices, and Its Distinction from Prompt Engineering

Context engineering is the strategic design, management, and delivery of relevant information—or “context”—to AI systems in order to guide, constrain, or enhance their behavior. Unlike prompt engineering, which primarily focuses on crafting effective input prompts to direct model outputs, context engineering involves curating, structuring, and governing the broader pool of information that surrounds and informs the AI’s decision-making process. In practice, context engineering requires an understanding of not only what the AI should know at a given moment but also how information should be prioritized, retrieved, and presented. It encompasses everything from assembling relevant documents and dialogue history to establishing policies for data inclusion and exclusion. ...  While there is some overlap between the two domains, context engineering and prompt engineering serve distinct purposes and employ different methodologies. Prompt engineering is concerned with the formulation of the specific text—the “prompt”—that is provided to the model as an immediate input. It is about phrasing questions, instructions, or commands in a way that elicits the desired behavior or output from the AI. Successful prompt engineering involves experimenting with wording, structure, and sometimes even formatting to maximize the performance of the language model on a given task.
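
A minimal sketch of that curation layer in practice: score candidate context items, apply an inclusion policy, and pack the best items into a fixed token budget before any prompt is composed. The fields and scores are illustrative assumptions.

def assemble_context(candidates, budget_tokens, exclude=lambda item: False):
    """candidates: list of dicts with 'text', 'relevance', and 'tokens' keys."""
    kept, used = [], 0
    ranked = sorted(candidates, key=lambda c: c["relevance"], reverse=True)
    for item in ranked:
        if exclude(item):                          # governance: policy-based exclusion
            continue
        if used + item["tokens"] > budget_tokens:  # prioritization under a hard budget
            continue
        kept.append(item["text"])
        used += item["tokens"]
    return "\n---\n".join(kept)

docs = [
    {"text": "Refund policy v3 ...", "relevance": 0.92, "tokens": 300, "pii": False},
    {"text": "Customer email thread ...", "relevance": 0.88, "tokens": 500, "pii": True},
    {"text": "Unrelated press release ...", "relevance": 0.15, "tokens": 400, "pii": False},
]
context = assemble_context(docs, budget_tokens=600, exclude=lambda d: d["pii"])

# Prompt engineering then operates on whatever the context layer admitted.
prompt = f"Using only the context below, answer the question.\n{context}\n\nQ: ..."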


How AI and Blockchain Are Transforming Tenant Verification in India

While artificial intelligence provides both intelligence and speed, Blockchain technology provides the essential foundation of trust and security. Blockchain functions as a permanent digital record – meaning that once information is set, it can’t be changed or deleted by third parties. This feature is particularly groundbreaking for ensuring a safe and clear rental history. Picture this: the rental payments and lease contracts of your tenants could all be documented as ‘smart contracts’ using Blockchain technology. ... The combination of AI and Blockchain signifies a groundbreaking transformation, enabling tenants to create ‘self-sovereign identities’ on the Blockchain — digital wallets that hold their verified credentials, which they fully control. When searching for rental properties, tenants can conveniently provide prospective landlords with access to certain details about themselves, such as their history of timely payments and police records. AI leverages secure and authentic Blockchain data to produce an immediate risk score for landlords to assess, ensuring a quick and reliable evaluation. This cohesive approach guarantees that AI outcomes are both rapid and trustworthy, while the decentralized nature of Blockchain safeguards tenant privacy by removing the necessity for central databases that may become susceptible over time.
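
Stripped to essentials, the trust property described here is a hash chain: each record commits to the previous one, so no earlier entry can be silently altered. The sketch below pairs such a ledger with a toy on-time-payment rate of the kind a scoring model might consume; the fields and scoring are invented for illustration.

import hashlib, json

def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "payload": payload}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain) -> bool:
    """Any edit to an earlier record breaks every later hash."""
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "payload": block["payload"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

ledger = []
for month, on_time in [("2025-01", True), ("2025-02", True), ("2025-03", False)]:
    add_record(ledger, {"month": month, "rent_paid_on_time": on_time})

assert verify(ledger)
on_time_rate = sum(b["payload"]["rent_paid_on_time"] for b in ledger) / len(ledger)
print(f"verified history, on-time rate {on_time_rate:.0%}")  # input to a risk score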


Adversarial AI is coming for your applications

New research from Cato Networks’ threat intelligence report revealed how threat actors can use a large language model jailbreak technique, known as an immersive world attack, to get AI to create infostealer malware for them: a threat intelligence researcher with absolutely no malware coding experience managed to jailbreak multiple large language models and get the AI to create a fully functional, highly dangerous password infostealer to compromise sensitive information from the Google Chrome web browser. The end result was malicious code that successfully extracted credentials from the Google Chrome password manager. Companies that create LLMs are trying to put up guardrails, but clearly GenAI can make malware creation that much easier. AI-generated malware, including polymorphic malware, essentially makes signature-based detections nearly obsolete. Enterprises must be prepared to protect against hundreds, if not thousands, of malware variants. ... Enterprises can increase their protection by embedding security directly into applications at the build stage: this involves investing in embedded security that is mapped to OWASP controls, such as RASP, advanced white-box cryptography, and granular threat intelligence. IDC research shows that organizations protecting mobile apps often lack a solution to test them efficiently and effectively.


Top Pitfalls to Avoid When Responding to Cyber Disaster

Moving too quickly following an attack can also prompt staff to respond to an intrusion without first fully understanding the type of ransomware that was used. Not all ransomware is created equal, and knowing if you were a victim of locker ransomware, double extortion, ransomware-as-a-service, or another kind of attack can make all the difference in how to respond because the goal of the attacker is different for each. ... The first couple of hours after a ransomware incident is identified are critical. In those immediate hours, work quickly to identify and isolate affected systems and disconnect compromised devices from the network to prevent the ransomware from spreading further. Don’t forget to also preserve forensic evidence as you go, such as screenshots and relevant logs, to inform future law enforcement investigations or legal action. Once that has been done, notify the key stakeholders and the cyber insurance provider. ... After the dust settles, analyze how the attack was able to occur and put in place fixes to keep it from happening again. Identify the initial access point and method, and map how the threat actor moved through the network. What barriers were they able to move past, and which held them back? Are there areas where more segmentation is needed to reduce the attack surface? Do any security workflows or policies need to be modified?


How to reclaim control over your online shopping data

“While companies often admit to sharing user data with third parties, it’s nearly impossible to track every recipient. That lack of control creates real vulnerabilities in data privacy management. Very few organizations thoroughly vet their third-party data-sharing practices, which raises accountability concerns and increases the risk of breaches,” said Ian Cohen, CEO of LOKKER. The criminal marketplace for stolen data has exploded in recent years. In 2024, over 6.8 million accounts were listed for sale, and at one point in early 2025, nearly 2.5 million stolen accounts were available. ... Even limited purchase information can prove valuable to criminals. A breach exposing high-value transactions, for example, may suggest a buyer’s financial status or lifestyle. When combined with leaked addresses, that data can help criminals identify and target individuals more precisely, whether for fraud, identity theft, or even physical theft. ... One key mechanism is the right to be forgotten, a legal principle allowing individuals to request the removal of their personal data from online platforms. The European Union’s GDPR is the strongest example of this principle in action. While not as comprehensive as the GDPR, the US has some privacy protections, such as the California Consumer Privacy Act (CCPA), which allow residents to access or delete their personal data.


Mind the Gap: Agentic AI and the Risks of Autonomy

The ink is barely dry on generative AI and AI agents, and now we have a new next big thing: agentic AI. Sounds impressive. By the time this article comes out, there’s a good chance that agentic AI will be in the rear-view mirror and we’ll all be chasing after the next new big thing. Anyone for autonomous generative agentic AI agent bots? ... Some things seem obviously irresponsible on the surface, but for some companies, handing autonomy to agentic AI apparently is not one of them. Debugging large language models, AI agents, and agentic AI, as well as implementing guardrails are topics for another time, but it’s important to recognize that companies are handing over those car keys. Willingly. Enthusiastically. Would you put that eighth grader in charge of your marketing department? Of autonomously creating collateral that goes out to your customers without checking it first? Of course not. ... We want AI agents and agentic AI to make decisions, but we must be intentional about the decisions they are allowed to make. What are the stakes personally, professionally, or for the organization? What is the potential liability when something goes wrong? And something will go wrong. Something that you never considered going wrong will go wrong. And maybe think about the importance of the training data. Isn’t that what we say when an actual person does something wrong? “They weren’t adequately trained.” Same thing here.


How software engineers and team leaders can excel with artificial intelligence

As long as software development and AI designers continue to fall prey to the substitution myth, we’ll continue to develop systems and tools that, instead of supposedly making humans' lives easier/better, will require unexpected new skills and interventions from humans that weren’t factored into the system/tool design ... Software development covers a lot of ground: understanding requirements, architecture, design, coding, writing tests, code review, debugging, building new skills and knowledge, and more. AI has now reached a point where it can automate or speed up almost every part of the process. This is an exciting time to be a builder. A lot of the routine, repetitive, and frankly boring parts of the job, the "cognitive grunt work", can now be handled by AI. Developers especially appreciate the help in areas like generating test cases, reviewing code, and writing documentation. When those tasks are off our plate, we can spend more time on the things that really add value: solving complex problems, designing great systems, thinking strategically, and growing our skills. ... The elephant in the room is "whether AI will take over my job one day?". Until this year, I always thought no, but the recent technological advancements and new product offerings in this space are beginning to change my mind. The reality is that we should be prepared for AI to change the software development role as we know it.


6 browser-based attacks all security teams should be ready for in 2025

Phishing tooling and infrastructure have evolved a lot in the past decade, while changes to business IT mean there are both many more vectors for phishing attack delivery, and more apps and identities to target. Attackers can deliver links over instant messenger apps, social media, SMS, malicious ads, and using in-app messenger functionality, as well as sending emails directly from SaaS services to bypass email-based checks. Likewise, there are now hundreds of apps per enterprise to target, with varying levels of account security configuration. ... Like modern credential and session phishing, links to malicious pages are distributed over various delivery channels and using a variety of lures, including impersonating CAPTCHA, Cloudflare Turnstile, simulating an error loading a webpage, and many more. The variance in lure, and differences between different versions of the same lure, can make it difficult to fingerprint and detect based on visual elements alone. ... Preventing malicious OAuth grants being authorized requires tight in-app management of user permissions and tenant security settings. This is no mean feat when considering the hundreds of apps in use across the modern enterprise, many of which are not centrally managed by IT and security teams.


JSON Config File Leaks Azure Active Directory Credentials

"The critical risk lies in the fact that this file was publicly accessible over the Internet," according to the post. "This means anyone — from opportunistic bots to advanced threat actors — could harvest the credentials and immediately leverage them for cloud account compromise, data theft, or further intrusion." ... To exploit the flaw, an attacker can first use the leaked ClientId and ClientSecret to authenticate against Azure AD using the OAuth2 Client Credentials flow to acquire an access token. Once this is acquired, the attacker then can send a GET request to the Microsoft Graph API to enumerate users within the tenant. This allows them to collect usernames and emails; build a list for password spraying or phishing; and/or identify naming conventions and internal accounts, according to the post. The attacker also can query the Microsoft Graph API to enumerate OAuth2 permission grants within the tenant, revealing which applications have been authorized and what scopes, or permissions, they hold. Finally, the acquired token allows an attacker to use group information to identify privilege clusters and business-critical teams, thus exposing organizational structure and identifying key targets for compromise, according to the post. ... "What appears to be a harmless JSON configuration file can in reality act as a master key to an organization’s cloud kingdom," according to the post.


Data centers are key to decarbonizing tech’s AI-fuelled supply chain

Data center owners and operators are uniquely positioned to step up and play a larger, more proactive role in this by pushing back on tech manufacturers in terms of the patchy emissions data they provide, while also facilitating sustainable circular IT product lifecycle management/disposal solutions for their users and customers. ... The hard truth, however, is that any data center striving to meet its own decarbonization goals and obligations cannot do so singlehandedly. It’s largely beholden to the supply chain stakeholders upstream. At the same time, their customers/users tend to accept ever shortening usage periods as the norm. Often, they overlook the benefits of achieving greater product longevity and optimal cost of ownership through the implementation of product maintenance, refurbishment, and reuse programmes. ... As a focal point for the enablement of the digital economy, data centers are ideally placed to take a much more active role: by lobbying manufacturers, educating users and customers about the necessity and benefits of changing conventional linear practices in favour of circular IT lifecycle management and recycling solutions. Such an approach will not only help decarbonize data centers themselves but the entire tech industry supply chain – by reducing emissions.

Daily Tech Digest - September 03, 2025


Quote for the day:

“The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things.” -- Ronald Reagan



Understanding Problems in the Data Supply Chain: A Q&A with R Systems’ AI Director Samiksha Mishra

Think of data as moving through a supply chain: it’s sourced, labeled, cleaned, transformed, and then fed into models. If bias enters early – through underrepresentation in data collection, skewed labeling, or feature engineering – it doesn’t just persist but multiplies as the data moves downstream. By the time the model is trained, bias is deeply entrenched, and fixes can only patch symptoms, not address the root cause. Just like supply chains for physical goods need quality checks at every stage, AI systems need fairness validation points throughout the pipeline to prevent bias from becoming systemic. ... The key issue is that a small representational bias can be significantly amplified across the AI data supply chain due to reusability and interdependencies. When a biased dataset is reused, its initial flaw is propagated to multiple models and contexts. This is further magnified during preprocessing, as methods like feature scaling and augmentation can encode a biased feature into multiple new variables, effectively multiplying its weight. ... One effective way to integrate validation layers and bias filters into AI systems without sacrificing speed is to design them as lightweight checkpoints throughout the pipeline rather than heavy post-hoc add-ons. At the data stage, simple distributional checks such as χ² tests or KL-divergence can flag demographic imbalances at low computational cost. 
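
Such a checkpoint really is lightweight. A sketch with invented counts and thresholds, using SciPy for both tests:

import numpy as np
from scipy.stats import chisquare, entropy

reference = np.array([0.50, 0.30, 0.20])     # expected group shares in the population
observed_counts = np.array([720, 180, 100])  # what this pipeline stage actually emitted

# Chi-square goodness-of-fit against the expected counts.
expected_counts = reference * observed_counts.sum()
chi2, p_value = chisquare(observed_counts, f_exp=expected_counts)

# KL divergence of the observed distribution from the reference, in nats.
observed_dist = observed_counts / observed_counts.sum()
kl = entropy(observed_dist, reference)

print(f"chi2={chi2:.1f} p={p_value:.2e} KL={kl:.3f}")
if p_value < 0.01 or kl > 0.05:
    print("flag: demographic imbalance at this stage of the data supply chain")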



Hackers Manipulate Claude AI Chatbot as Part of at Least 17 Cyber Attacks

While AI’s use in hacking has largely been a case of hype over actual threat to present, this new development is a concrete indicator that it is at minimum now substantially lowering the threshold for non-technical actors to execute viable cyber attacks. It is also clearly capable of speeding up and automating certain common aspects of attacks for the more polished professional hackers, increasing their output capability during windows in which they have the element of surprise and novelty. While the GTG-2002 activity is the most complex thus far, the threat report notes the Claude AI chatbot has also been successfully used for more individualized components of various cyber attacks. This includes use by suspected North Korean state-sponsored hackers as part of their remote IT worker scams, to include not just crafting detailed personas but also taking employment tests and doing day-to-day work once hired. Another highly active party in the UK has been using Claude to develop individual ransomware tools with sophisticated capabilities and sell them on underground forums, at a price of $400 to $1,200 each. ... Anthropic says that it has responded to the cyber attacks by adding a tailored classifier specifically for the observed activity and a new detection method to ensure similar activity is captured by the standard security pipeline. 


Agentic AI: Storage and ‘the biggest tech refresh in IT history’

The interesting thing about agentic infrastructure is that agents can ultimately work across a number of different datasets, and even in different domains. You have kind of two types of agents – workers, and other agents, which are supervisors or supervisory agents. So, maybe I want to do something simple like develop a sales forecast for my product while reviewing all the customer conversations and the different databases or datasets that could inform my forecast. Well, that would take me to having agents that work on and process a number of different independent datasets that may not even be in my datacentre. ... So, anything that requires analytics requires a data warehouse. Anything that requires an understanding of unstructured data not only requires a file system or an object storage system, but it also requires a vector database to help AI agents understand what’s in those file systems through a process called retrieval-augmented generation (RAG). The first thing that needs to be wrestled down is a reconciliation of this idea that there’s all sorts of different data sources, and all of them need to be modernised or ready for the AI computing that is about to hit these data sources. ... The first thing I would say is that there are best practices out in the market that should definitely be adhered to.
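
As a toy version of that vector-database step: embed file contents as vectors, then rank files against an agent's query by cosine similarity. A hashing trick stands in for a real embedding model so the sketch stays self-contained.

import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic stand-in for a learned embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

corpus = {
    "q3_sales_calls.txt": "customer renewal pricing objections pipeline forecast",
    "hr_handbook.txt": "vacation policy onboarding benefits",
    "crm_export.txt": "customer accounts pipeline stage close dates forecast",
}
index = {name: embed(text) for name, text in corpus.items()}

# An agent's retrieval query: cosine similarity ranks the sales files first.
query = embed("build a sales forecast from customer pipeline data")
for name, vec in sorted(index.items(), key=lambda kv: float(query @ kv[1]), reverse=True):
    print(f"{float(query @ vec):.2f}  {name}")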


Tech leaders: Are you balancing AI transformation with employee needs?

On the surface, it might seem naïve for companies to talk about AI building people up and improving jobs when there’s so much negative news about its potential impact on employment. For example, Ford CEO Jim Farley recently predicted that AI will replace half of all white-collar workers in the US. Also, Fiverr CEO Micha Kaufman sent a memo to his team in which he said, “AI is coming for your job. Heck, it’s coming for my job, too. This is a wake-up call. It doesn’t matter if you’re a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a finance person. AI is coming for you.” Several tech companies like Google, Microsoft, Amazon, and Salesforce have also been talking about how much of their work is already being done by AI. Of course, tech executives could just be hyping the technology they sell. But not all AI-related layoffs may actually be due to AI. ... AI, especially agentic AI, is changing the nature of work, and how companies will need to be organized, says Mary Alice Vuicic, chief people officer at Thomson Reuters. “Many companies ripped up their AI plans as agentic AI came to the forefront,” she says, as it’s moved on from being an assistant to being a team that works together to accomplish delegated tasks. This has the potential for unprecedented productivity improvements, but also unprecedented opportunities for augmentation, expansion, and growth. 


When rivals come fishing: What keeps talent from taking the bait

Organisations can and do protect themselves with contracts—non-compete agreements, non-solicitation rules, confidentiality policies. They matter because they protect sensitive knowledge and prevent rivals from taking shortcuts. But they are not the same as retention. An employee with ambition, if disengaged, will eventually walk. ... If money were the sole reason employees left, the problem would be simpler. Counter-offers would solve it, at least temporarily. But every HR leader knows the story: a high performer accepts a lucrative counter-offer, only to resign again six months later. The issue lies elsewhere—career stagnation, lack of recognition, weak culture, or a disconnect with leadership. ... What works instead is open dialogue, competitive but fair rewards, and most importantly, visible career pathways. Employees, she stresses, need to feel that their organisation is invested in their long-term development, not just scrambling to keep them for another year. Tiwari also highlights something companies often neglect: succession planning. By identifying and nurturing future leaders early, organisations create continuity and reduce the shock when someone does leave. Alongside this, clear policies and awareness about confidentiality ensure that intellectual property remains protected even in times of churn. The recent frenzy of AI talent raids among global tech giants is an extreme example of this battle. 



Agentic AI: A CISO’s security nightmare in the making?

CISOs don’t like operating in the dark, and this is one of the risks agentic AI brings. It can be deployed autonomously by teams or even individual users through a variety of applications without proper oversight from security and IT departments. This creates “shadow AI agents” that can operate without controls such as authentication, which makes it difficult to track their actions and behavior. This in turn can pose significant security risks, because unseen agents can introduce vulnerabilities. ... Agentic AI introduces the ability to make independent decisions and act without human oversight. This capability presents its own cybersecurity risk by potentially leaving organizations vulnerable. “Agentic AI systems are goal-driven and capable of making decisions without direct human approval,” Joyce says. “When objectives are poorly scoped or ambiguous, agents may act in ways that are misaligned with enterprise security or ethical standards.” ... Agents often collaborate with other agents to complete tasks, resulting in complex chains of communication and decision-making, PwC’s Joyce says. “These interactions can propagate sensitive data in unintended ways, creating compliance and security risks,” he says. ... Many early stage agents rely on brittle or undocumented APIs or browser automation, Mayham says. “We’ve seen cases where agents leak tokens via poorly scoped integrations, or exfiltrate data through unexpected plugin chains. The more fragmented the vendor stack, the bigger the surface area for something like this to happen,” he says. 


How To Get The Best Out Of People Without Causing Burnout At Work

Comfort zones feel safe, but they also limit growth. Employees who stick with what they know may appear steady, but eventually they stagnate. Leaders who let people stay in their comfort zones for too long risk creating teams that lack adaptability. At the same time, pushing too aggressively can backfire. People who are stretched too far too quickly often feel stress and that drains motivation. This is when burnout at work begins. The real challenge is knowing how to respect comfort zones while creating enough stretch to build confidence. ... Gallup’s research shows that employees who use their strengths daily are six times more likely to be engaged. Tom Rath, co-author of StrengthsFinder, told me that leaning into natural talents is often the fastest path to confidence and performance gains. At the same time, he cautioned me against the idea that we should only focus on strengths. He said it is just as reckless to ignore weaknesses as it is to ignore strengths. His point was that leaders need balance. Too much time spent on weaknesses drains confidence, but avoiding them altogether prevents people from growing. ... It is not always easy to tell if resistance is fear or indifference. Fear usually comes with visible anxiety. The employee avoids the task but also worries about it. Laziness looks more like indifference with no visible discomfort. Leaders can uncover the difference by asking questions. If it is fear, support and small steps can help. If it is indifference, accountability and clear expectations may be the solution. 


IT Leadership Takes on AGI

“We think about AGI in terms of stepwise progress toward machines that can go beyond visual perception and question answering to goal-based decision-making,” says Brian Weiss, chief technology officer at hyperautomation and enterprise AI infrastructure provider Hyperscience, in an email interview. “The real shift comes when systems don’t just read, classify and summarize human-generated document content, but when we entrust them with the ultimate business decisions.” ... OpenAI’s newly released GPT-5 isn’t AGI, though it can purportedly deliver more useful responses across different domains. Tal Lev-Ami, CTO and co-founder of media optimization and visual experience platform provider Cloudinary, says “reliable” is the operative word when it comes to AGI. ... “We may see impressive demonstrations sooner, but building systems that people can depend on for critical decisions requires extensive testing, safety measures, and regulatory frameworks that don't exist yet,” says Bosquez in an email interview. ... Artificial narrow intelligence or ANI (what we’ve been using) still isn’t perfect. Data is often to blame, which is why there’s a huge push toward AI-ready data. Yet, despite the plethora of tools available to manage data and data quality, some enterprises are still struggling. Without AI-ready data, enterprises invite reliability issues with any form of AI. “Today’s systems can hallucinate or take rogue actions, and we’ve all seen the examples. 


How Causal Reasoning Addresses the Limitations of LLMs in Observability

A new class of AI-based observability solutions built on LLMs is gaining traction as they promise to simplify incident management, identify root causes, and automate remediation. These systems sift through high-volume telemetry, generate natural-language summaries based on their findings, and propose configuration or code-level changes. Additionally, with the advent of agentic AI, remediation workflows can be automated to advance the goal of self-healing environments. However, such tools remain fundamentally limited in their ability to perform root-cause analysis for modern applications. ... In observability contexts, LLMs can interpret complex logs and trace messages, summarize high-volume telemetry, translate natural-language queries into structured filters, and synthesize scripts or configuration changes to support remediation. Most LLM solutions rely on proprietary providers such as OpenAI and Anthropic, whose training data is opaque and often poorly aligned with specific codebases or deployment environments. More fundamentally, LLMs can only produce text.  ... Agentic AI shifts observability workflows from passive diagnostics to active response by predicting failure paths, initiating remediations, and executing tasks such as service restarts, configuration rollbacks, and state validation.


The Future of Work Is Human: Insights From Workday and Deloitte Leader

While AI can do many things, Chalwin acknowledges, “it can’t replace, especially as a leader, that collaboration with your team, ethical decision making, creativity and strategic thinking.” But what it can do is free up time from more manual tasks, allowing people to focus on more impactful work. When asked about shifting focus from traditional training to creating opportunities for adaptation and innovation, Zucker emphasized the value of balancing empowering people with giving them time and access to new capabilities to develop new skills. She noted, “People need to feel comfortable with trying things.” This requires helping the workforce understand how to make decisions, be creative, and trust the integrity of the tools and data. ... “We’re all on a path of continuous learning.” She remembers a leadership development class where participants were encouraged to “try it, and try it again” with AI tools. This environment fosters understanding and challenges individuals to apply AI in their daily work, enabling the workforce to evolve and continually bolster skills. Chalwin points out that workforce dynamics are constantly changing, with a mix of human and machine collaboration altering each leader’s role. Leaders must ensure that they have the right people focusing on the right things and leveraging the power of technology to do some, but not all, of the work.

Daily Tech Digest - September 02, 2025


Quote for the day:

“The art of leadership is saying no, not yes. It is very easy to say yes.” -- Tony Blair


When Browsers Become the Attack Surface: Rethinking Security for Scattered Spider

Scattered Spider, also referred to as UNC3944, Octo Tempest, or Muddled Libra, has matured over the past two years through precision targeting of human identity and browser environments. This shift differentiates them from other notorious cybergangs like Lazarus Group, Fancy Bear, and REvil. If sensitive information such as your calendar, credentials, or security tokens is alive and well in browser tabs, Scattered Spider is able to acquire it. ... Once user credentials get into the wrong hands, attackers like Scattered Spider will move quickly to hijack previously authenticated sessions by stealing cookies and tokens. Securing the integrity of browser sessions can best be achieved by restricting unauthorized scripts from gaining access to or exfiltrating these sensitive artifacts. Organizations must enforce contextual security policies based on components such as device posture, identity verification, and network trust. By linking session tokens to context, enterprises can prevent attacks like account takeovers, even after credentials have become compromised. ... Although browser security is the last mile of defense for malware-less attacks, integrating it into an existing security stack will fortify the entire network. By implementing activity logs enriched with browser data into SIEM, SOAR, and ITDR platforms, CISOs can correlate browser events with endpoint activity for a much fuller picture.
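
A minimal sketch of what linking session tokens to context can mean in code: fingerprint the device and network context at login, bind it to the session record, and reject a replayed cookie presented from a different context. The fingerprint inputs are illustrative.

import hashlib, hmac, os, secrets

SERVER_KEY = os.urandom(32)
SESSIONS: dict[str, str] = {}   # token -> context fingerprint bound at issuance

def fingerprint(device_id: str, user_agent: str, network: str) -> str:
    material = f"{device_id}|{user_agent}|{network}".encode()
    return hmac.new(SERVER_KEY, material, hashlib.sha256).hexdigest()

def issue_session(device_id: str, user_agent: str, network: str) -> str:
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = fingerprint(device_id, user_agent, network)
    return token

def validate(token: str, device_id: str, user_agent: str, network: str) -> bool:
    expected = SESSIONS.get(token)
    return expected is not None and hmac.compare_digest(
        expected, fingerprint(device_id, user_agent, network)
    )

token = issue_session("laptop-123", "Firefox/129", "10.0.0.0/8")
print(validate(token, "laptop-123", "Firefox/129", "10.0.0.0/8"))   # True
print(validate(token, "attacker-vm", "Chrome/126", "203.0.113.9"))  # False: stolen cookie, wrong context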


The Transformation Resilience Trifecta: Agentic AI, Synthetic Data and Executive AI Literacy

The current state of Agentic AI is, in a word, fragile. Ask anyone in the trenches. These agents can be brilliant one minute and baffling the next. Instructions get misunderstood. Tasks break in new contexts. Chaining agents into even moderately complex workflows exposes just how early we are in this game. Reliability? Still a work in progress. And yet, we’re seeing companies experiment. Some are stitching together agents using LangChain or CrewAI. Others are waiting for more robust offerings from Microsoft Copilot Studio, OpenAI’s GPT-4o Agents, or Anthropic’s Claude toolsets. It’s the classic innovator’s dilemma: Move too early, and you waste time on immature tech. Move too late, and you miss the wave. Leaders must thread that needle — testing the waters while tempering expectations. ... Here’s the scarier scenario I’m seeing more often: “Shadow AI.” Employees are already using ChatGPT, Claude, Copilot, Perplexity — all under the radar. They’re using it to write reports, generate code snippets, answer emails, or brainstorm marketing copy. They’re more AI-savvy than their leadership. But they don’t talk about it. Why? Fear. Risk. Politics. Meanwhile, some executives are content to play cheerleader, mouthing AI platitudes on LinkedIn but never rolling up their sleeves. That’s not leadership — that’s theater.


Red Hat strives for simplicity in an ever more complex IT world

One of the most innovative developments in RHEL 10 is bootc in image mode, where VMs run like a container and are part of the CI/CD pipeline. By using immutable images, all changes are controlled from the development environment. Van der Breggen illustrates this with a retail scenario: “I can have one POS system for the payment kiosk, but I can also have another POS system for my cashiers. They use the same base image. If I then upgrade that base image to later releases of RHEL, I create one new base image, tag it in the environments, and then all 500 systems can be updated at once.” Red Hat Enterprise Linux Lightspeed acts as a command-line assistant that brings AI directly into the terminal. ... For edge devices, Red Hat uses a solution called Greenboot, which does not immediately proceed to a rollback but can wait for one if certain conditions are met. After, for example, three reboots without a working system, it reverts to the previous working release. However, not everything has been worked out perfectly yet. Lightspeed currently only works online, while many customers would like to use it offline because their RHEL systems are tucked away behind firewalls. Red Hat is still looking into possibilities for an expansion here, although making the knowledge base available offline poses risks to intellectual property.


The state of DevOps and AI: Not just hype

The vision of AI that takes you from a list of requirements through work items to build to test to, finally, deployment is still nothing more than a vision. In many cases, DevOps tool vendors use AI to build solutions to the problems their customers have. The result is a mixture of point solutions that can solve immediate developer problems. ... Machine learning is speeding up testing by failing faster. Build steps get reordered automatically so those that are likely to fail happen earlier, which means developers aren’t waiting for the full build to know when they need to fix something. Often, the same system is used to detect flaky tests by muting tests where failure adds no value. ... Machine learning gradually helps identify the characteristics of a working system and can raise an alert when things go wrong. Depending on the governance, it can spot where a defect was introduced and start a production rollback while also providing potential remediation code to fix the defect. ... There’s a lot of puffery around AI, and DevOps vendors are not helping. A lot of their marketing emphasizes fear: “Your competitors are using AI, and if you’re not, you’re going to lose” is their message. Yet DevOps vendors themselves are only one or two steps ahead of you in their AI adoption journey. Don’t adopt AI pell-mell due to FOMO, and don’t expect to replace everyone under the CTO with a large language model.
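
The fail-faster idea fits in a few lines: order suites by expected failures per second of runtime so a doomed build dies early, and mute suites whose failures carry no signal. The historical rates below are invented.

history = [
    # (suite, historical failure rate, avg duration in seconds, flaky?)
    ("integration_payments", 0.18, 240, False),
    ("unit_core",            0.05,  30, False),
    ("ui_smoke",             0.30,  90, True),   # fails randomly: no signal
    ("lint",                 0.12,  10, False),
]

runnable = [t for t in history if not t[3]]  # mute flaky suites

# Highest expected-failures-per-second first, so a doomed build dies early.
ordered = sorted(runnable, key=lambda t: t[1] / t[2], reverse=True)

for suite, fail_rate, duration, _ in ordered:
    print(f"{suite}: p(fail)={fail_rate:.2f}, {duration}s")
# lint runs first, then unit_core, then integration_payments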


5 Ways To Secure Your Industrial IoT Network

IIoT is a subcategory of the Internet of Things (IoT). It is made up of a system of interconnected smart devices that uses sensors, actuators, controllers and intelligent control systems to collect, transmit, receive and analyze data. ... IIoT also has its unique architecture that begins with the device layer, where equipment, sensors, actuators and controllers collect raw operational data. That information is passed through the network layer, which transmits it to the internet via secure gateways. Next, the edge or fog computing layer processes and filters the data locally before sending it to the cloud, helping reduce latency and improving responsiveness. Once in the service and application support layer, the data is stored, analyzed, and used to generate alerts and insights. ... Many IIoT devices are not built with strong cybersecurity protections. This is especially true for legacy machines that were never designed to connect to modern networks. Without safeguards such as encryption or secure authentication, these devices can become easy targets. ... Defending against IIoT threats requires a layered approach that combines technology, processes and people. Manufacturers should segment their networks to limit the spread of attacks, apply strong encryption and authentication for connected devices, and keep software and firmware regularly updated.


AI Chatbots Are Emotionally Deceptive by Design

Even without deep connection, emotional attachment can lead users to place too much trust in the content chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable and specifically personalized to a user’s tastes can also lead to social “deskilling,” as some users of AI chatbots have flagged. This dynamic is simply unrealistic in genuine human relationships. Some users may be more vulnerable than others to this kind of emotional manipulation, like neurodiverse people or teens who have limited experience building relationships. ... With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It’s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI. All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. A companion AI, for example, can provide emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic approach is already familiar in robot design, where researchers have created robots that are purposefully designed to not be human-like. This design choice is proven to more appropriately reflect system capabilities, and to better situate robots as useful tools, not friends or social counterparts.


How AI product teams are rethinking impact, risk, feasibility

We’re at a strange crossroads in the evolution of AI. Nearly every enterprise wants to harness it. Many are investing heavily. But most are falling flat. AI is everywhere — in strategy decks, boardroom buzzwords and headline-grabbing POCs. Yet, behind the curtain, something isn’t working. ... One of the most widely adopted prioritization models in product management is RICE — which scores initiatives based on Reach, Impact, Confidence, and Effort. It’s elegant. It’s simple. It’s also outdated. RICE was never designed for the world of foundation models, dynamic data pipelines or the unpredictability of inference-time reasoning. ... To make matters worse, there’s a growing mismatch between what enterprises want to automate and what AI can realistically handle. Stanford’s 2025 study, The Future of Work with AI Agents, provides a fascinating lens. ... ARISE adds three crucial layers that traditional frameworks miss: First, AI Desire — does solving this problem with AI add real value, or are we just forcing AI into something that doesn’t need it? Second, AI Capability — do we actually have the data, model maturity and engineering readiness to make this happen? And third, Intent — is the AI meant to act on its own or assist a human? Proactive systems have more upside, but they also come with far more risk. ARISE lets you reflect that in your prioritization.
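
For readers who have not used it, RICE is simple arithmetic: score = (Reach × Impact × Confidence) / Effort. The sketch below works two invented examples, then shows the kind of gate ARISE layers on top, with the article's three questions paraphrased as checks.

features = [
    # name, reach (users/quarter), impact (0.25-3), confidence (0-1), effort (person-months)
    ("AI ticket triage",   4000, 2.0, 0.8, 3),
    ("LLM search rewrite", 9000, 1.0, 0.5, 6),
]

for name, reach, impact, confidence, effort in features:
    rice = (reach * impact * confidence) / effort
    print(f"{name}: RICE={rice:,.0f}")
# AI ticket triage: RICE=2,133 ; LLM search rewrite: RICE=750

def arise_gate(ai_adds_value: bool, data_and_model_ready: bool, autonomous: bool) -> str:
    """Toy paraphrase of ARISE's three added questions, not the published framework."""
    if not ai_adds_value:
        return "drop: forcing AI where it adds nothing"
    if not data_and_model_ready:
        return "defer: capability gap"
    return "prioritize (weigh the added risk)" if autonomous else "prioritize"

print(arise_gate(ai_adds_value=True, data_and_model_ready=True, autonomous=True))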


Cloud control: The key to greener, leaner data centers

To fully unlock these cost benefits, businesses must adopt FinOps practices: the discipline of bringing engineering, finance, and operations together to optimize cloud spending. Without it, cloud costs can quickly spiral, especially in hybrid environments. But, with FinOps, organizations can forecast demand more accurately, optimise usage, and ensure every pound spent delivers value. ... Cloud platforms make it easier to use computing resources more efficiently. Even though the infrastructure stays online, hyperscalers can spread workloads across many customers, keeping their hardware busier and more productive. The advantage is that hyperscalers can distribute workloads across multiple customers and manage capacity at a large scale, allowing them to power down hardware when it's not in use. ... The combination of cloud computing and artificial intelligence (AI) is further reshaping data center operations. AI can analyse energy usage, detect inefficiencies, and recommend real-time adjustments. But running these models on-premises can be resource-intensive. Cloud-based AI services offer a more efficient alternative. Take Google, for instance. By applying AI to its data center cooling systems, it cut energy use by up to 40 percent. Other organizations can tap into similar tools via the cloud to monitor temperature, humidity, and workload patterns and automatically adjust cooling, load balancing, and power distribution.


You Backed Up Your Data, but Can You Bring It Back?

Many IT teams assume that the existence of backups guarantees successful restoration. This misconception can be costly. A recent report from Veeam revealed that 49% of companies failed to recover most of their servers after a significant incident. This highlights a painful reality: Most backup strategies focus too much on storage and not enough on service restoration. Having backup files is not the same as successfully restoring systems. In real-world recovery scenarios, teams face unknown dependencies, a lack of orchestration, incomplete documentation, and gaps between infrastructure and applications. When services need to be restored in a specific order and under intense pressure, any oversight can become a significant bottleneck. ... Relying on a single backup location creates a single point of failure. Local backups can be fast but are vulnerable to physical threats, hardware failures, or ransomware attacks. Cloud backups offer flexibility and off-site protection but may suffer bandwidth constraints, cost limitations, or provider outages. A hybrid backup strategy ensures multiple recovery paths by combining on-premises storage, cloud solutions, and optionally offline or air-gapped options. This approach allows teams to choose the fastest or most reliable method based on the nature of the disruption.
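
Restoring services "in a specific order" is a dependency-graph problem, which is why orchestration matters. A sketch using a topological sort over an invented service graph:

from graphlib import TopologicalSorter

# service -> the services it depends on (which must be restored first)
dependencies = {
    "database":   set(),
    "auth":       {"database"},
    "api":        {"database", "auth"},
    "web":        {"api"},
    "monitoring": set(),
}

# static_order() guarantees every service appears after its dependencies.
restore_order = list(TopologicalSorter(dependencies).static_order())
print("restore order:", restore_order)
# e.g. ['database', 'monitoring', 'auth', 'api', 'web']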


Beyond Prevention: How Cybersecurity and Cyber Insurance Are Converging to Transform Risk Management

Historically, cybersecurity and cyber insurance have operated in silos, with companies deploying technical defenses to fend off attacks while holding a cyber insurance policy as a safety net. This fragmented approach often leaves gaps in coverage and preparedness. ... The insurance sector is at a turning point. Traditional models that assess risk at the point of policy issuance are rapidly becoming outdated in the face of constantly evolving cyber threats. Insurers who fail to adapt to an integrated model risk being outpaced by agile Cyber Insurtech companies, which leverage cutting-edge cyber intelligence, machine learning, and risk analytics to offer adaptive coverage and continuous monitoring. Some insurers have already begun to reimagine their role—not only as claim processors but as active partners in risk prevention. ... A combined cybersecurity and insurance strategy goes beyond traditional risk management. It aligns the objectives of both the insurer and the insured, with insurers assuming a more proactive role in supporting risk mitigation. By reducing the probability of significant losses through continuous monitoring and risk-based incentives, insurers are building a more resilient client base, directly translating to reduced claim frequency and severity.

Daily Tech Digest - September 01, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain


The AI-powered cyberattack era is here

In the deepfake era, the crime was unprecedented and exotic. In the genAI era, it’s a banality. You just need a three-second recording of a person talking, according to McAfee experts. With that snippet, you can create convincing fake messages or phone calls. If someone sounds like a trusted person, people are ready to hand over their cash or secrets. In 2024, the company’s global study found one in four people had suffered or knew someone hit by an AI voice scam. ... One challenge in the field of AI-enabled attacks — which is to say, attacks that didn’t exist or weren’t possible before genAI — is how quickly everything changes. Take AI browsers, for example. This new category of web browser includes Perplexity Comet, Dia (by The Browser Company), Fellou, Opera Neon, Sigma AI Browser, Arc Max, Microsoft Edge Copilot, Brave Leo, Wave Browser Pro, SigmaOS, Opera Aria, Genspark AI Browser, Poly, Quetta Browser, Browserbase, Phew AI Tab, and the upcoming OpenAI browser. ... The truth is that most attacks are still the old-fashioned kind, performed without help from AI. And most still involve human error. So all the standard guidelines and best practices apply. Companies should update software regularly, require multifactor authentication for all logins, and give employees training about fake emails and malicious links. Outside experts should run penetration tests twice a year. Making regular offline backups can save thousands after AI-based ransomware attacks.


How to Make Data Work for What’s Next

Too often, companies begin by auditing the data they already have. A better question is, “What outcome are we trying to drive?” Whether it’s scaling operations, improving retention, or guiding smarter investments, the path forward starts with understanding where you want to go. ... Not everything needs to be measured. The goal is to curate the data, pulling in what’s most useful rather than everything that’s available. Focus on what’s going to help people make decisions in real time. Some metrics help you look ahead, while others explain what already happened. A good mix can be helpful, but only if it still aligns with the outcome you’re tracking. This shift can feel unfamiliar. Many teams are used to starting from their existing systems — what’s already tracked, what can be pulled from a dashboard — and working backward. But that often leads to noise or gaps. Managing too much data isn’t just overwhelming; it’s also costly. Teams spend time storing, maintaining, and cleaning data that often doesn’t lead to better decisions. ... Trust is built in small moments. When early reports reflect what people expect based on their lived experience, they begin to rely on the system. ... A stronger data culture isn’t just about systems. It’s about building skills and helping people see how their work connects to outcomes. When data reinforces what people already know and shows up in context—visually, interactively, and on time—it becomes a tool they trust, use, and want to leverage.


Cybercrime increasingly moving beyond financial gains

“We are very redundant when talking about cybercrime, because we always associate it with economic motivations,” says Hervé Lambert, global consumer operations manager at Panda Security. “But they are not the only reasons out there.” Lambert also refers to political and military cyber espionage, “states or actors linked to different governments” that seek to infiltrate to obtain strategic information. It also includes cyberwarfare, “attacks designed to do damage, disable, render important systems useless. There is no lucrative purpose, but to enhance or win a war or facilitate sabotage.” ... “These very different motivations are not mutually exclusive, as they seek different objectives,” adds Alonso García. “We can find them as the sole motivation or they complement each other, making cyberattacks more elaborate and complex to analyze.” In other words, a person or group may have political interests but ask for a ransom to cover up their actions or seek funding; or, in a context of turmoil between countries, take advantage to launch attacks that seek to profit. ... But the strategy will need to be reoriented or reinforced if, for example, the organization operates in a geopolitically critical sector, where, among other things, disinformation will also have to be taken into account.
"The old software world is gone, giving way to a new set of truths being defined by AI. To navigate the changes, technical leaders should carry out rigorous validation on AI assistants. Managers should establish formal AI governance policies and invest in training for emerging technologies. Security professionals should update their threat models to include AI-specific risks and leverage SBOMs [Software Bill of Materials] as a strategic asset for risk management to achieve true scale application security." ... "Without SBOMs, we're flying blind. With them, we're finally turning the lights on in the supply chain cockpit," said Helen Oakley, Director of Secure Software Supply Chains and Secure Development at SAP. "AI coding assistants are like interns with rocket fuel. They accelerate everything, including errors, if you don't set boundaries." ... "For organizations adopting third-party AI tools, it's also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption. When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries, and applying security controls that account for external dependencies."


How Digital Twins Transform Drug Development Processes

A key technological advancement emerging from these hubs is the application of digital twins in pharmaceutical research. Initially used in engineering and manufacturing sectors, digital twins in the pharmaceutical industry are virtual models of human systems that replicate biological processes. These replicas are built using vast volumes of biological, clinical and genomic data, enabling researchers to test how different patient profiles might respond to specific drugs without exposing individuals to experimental therapies. The implications of this approach are transformative. Through digital twins, pharmaceutical scientists can simulate the progression of diseases, predict Adverse Drug Reactions (ADRs) and model patient diversity across age, gender, genetic traits and comorbidities. This ability to run in-silico trials, which are clinical trials conducted through virtual simulations, reduces the cost, duration and risk associated with traditional clinical testing. ... AI is transforming every clinical development phase worldwide, from trial design to execution and outcome analysis. According to industry estimates, AI is expected to support 60–70 per cent of clinical trials by 2030, potentially saving $20–30 billion annually. While digital twins represent just one facet of this broader AI integration, their capacity to virtually assess drug safety and efficacy could significantly accelerate the journey from discovery to patient delivery.
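A heavily simplified flavor of what a "virtual patient" computes: the toy one-compartment pharmacokinetic model below, C(t) = (dose/V) · e^(−k·t) with k = ln(2)/t½, shows how the same dose produces different drug exposure across patient profiles. All parameters are illustrative and not drawn from any real trial; genuine digital twins couple many such models with clinical and genomic data.

```python
import math

def concentration(t_hours: float, dose_mg: float,
                  volume_l: float, half_life_h: float) -> float:
    """Toy one-compartment model with first-order elimination:
    C(t) = (dose / V) * exp(-k * t), k = ln(2) / half-life."""
    k = math.log(2) / half_life_h  # elimination rate constant
    return (dose_mg / volume_l) * math.exp(-k * t_hours)

# Two virtual patients differing in volume of distribution (e.g., body mass).
for label, volume in [("patient A", 40.0), ("patient B", 70.0)]:
    peak = concentration(0, dose_mg=500, volume_l=volume, half_life_h=6)
    at_12h = concentration(12, dose_mg=500, volume_l=volume, half_life_h=6)
    print(f"{label}: C0 = {peak:.1f} mg/L, C(12h) = {at_12h:.1f} mg/L")
```

Running thousands of such simulated profiles is what lets an in-silico trial probe dosing and ADR risk across age, weight, and genetic variation before any human is dosed.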


Break the Link: The Fastest Way to Contain a Cyber-Attack

Hardware-enforced network isolation gives operators the ability to physically disconnect servers, storage and network segments on demand, using secure, out-of-band commands that sit entirely outside the attack surface. The simplicity here is the ultimate strength: if malware can’t make contact, it can’t cause damage. If a breach does happen? You can trigger isolation in milliseconds, stopping the spread before it locks systems, exfiltrates data or drains accounts. Unlike software-only isolation, which depends on the very systems it’s defending, hardware isolation can’t be tampered with remotely. No IP address, no exploitable code, just a clean physical break. ... Hardware isolation cuts the response to milliseconds, preserving both data integrity and regulatory compliance. It stops an incident at the source, shutting it down before operations are disrupted. The power of isolation is especially effective in high-stakes environments where speed and certainty matter. In colocation facilities, automated isolation prevents cross-tenant contamination by cutting off a compromised tenant before the threat can spread. At disaster recovery sites, it enables network segments to remain fully offline until they are needed, improving security and efficiency. In AI-heavy workloads, hardware isolation prevents model tampering and data exfiltration. In backup environments, selective disconnection ensures ransomware cannot encrypt or corrupt critical archives.
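What the software side of such a trigger might look like is simple by design. The sketch below assumes a hypothetical isolation controller reachable over an out-of-band serial channel; the device path and command format are illustrative stand-ins, not any vendor's API, since real products define their own secure interfaces.

```python
import time

class OutOfBandIsolator:
    """Hypothetical client for a hardware isolation controller. Real products
    expose their own out-of-band interfaces (serial links, dry-contact relays,
    dedicated management buses); everything below is an illustrative stand-in."""

    def __init__(self, device_path: str = "/dev/ttyS1"):
        self.device_path = device_path  # out-of-band channel: no IP address

    def isolate(self, segment: str) -> float:
        """Send the isolation command and return elapsed milliseconds."""
        start = time.monotonic()
        # In a real deployment this writes a signed command to the controller,
        # which physically opens the link for the named segment.
        with open(self.device_path, "wb") as oob:
            oob.write(f"ISOLATE {segment}\n".encode())
        return (time.monotonic() - start) * 1000.0

def on_ransomware_alert(isolator: OutOfBandIsolator, segment: str) -> None:
    elapsed_ms = isolator.isolate(segment)
    print(f"Segment {segment} physically isolated in {elapsed_ms:.1f} ms")
```

The essential property is that the trigger path never touches the production network: malware on the isolated segment has no packet it can send to countermand a physical break.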


Prioritize these 4 processes to balance innovation and responsibility in banking model risk management

As AI/ML capabilities often require specialized software, datasets and computational tools, many financial institutions—especially smaller ones—turn to third-party vendors. While this can accelerate adoption, it also introduces critical vulnerabilities related to oversight, accountability and systemic dependence. Third-party models often come with limited visibility into how they were developed, what data was used and how they behave under stress. Smaller institutions may lack the bargaining power or technical resources to demand transparency or perform deep due diligence. This lack of insight can delay detection of errors, increase compliance risk and even result in operational disruptions. ... AI/ML models thrive on vast datasets. In banking, where customer data is highly sensitive and tightly regulated, this presents a critical dual-risk challenge: Protecting privacy and preventing/detecting hidden learning where AI models may inadvertently infer protected/sensitive attributes. One risk is unauthorized or improper use of personal data during model training. Unintended inclusion of restricted data sets can lead to privacy breaches and violations of data protection laws such as the General Data Protection Regulation (GDPR). Another, more subtle, risk is the inadvertent encoding of sensitive attributes such as race or gender through proxy variables, even when such data is not explicitly used.
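One common way to screen for that kind of proxy encoding is a probe: if a simple classifier can recover the protected attribute from the features a model uses, those features are leaking it. A minimal sketch on synthetic data (a real audit would use production features and a formal fairness methodology):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)                   # protected attribute (not a model input)
zip_income = protected * 1.5 + rng.normal(0, 1, n)  # feature correlated with it: a proxy
tenure = rng.normal(0, 1, n)                        # unrelated feature
X = np.column_stack([zip_income, tenure])           # the model's actual inputs

# Train a probe to predict the protected attribute from the model's features.
X_tr, X_te, y_tr, y_te = train_test_split(X, protected, random_state=0)
probe = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
print(f"Probe AUC = {auc:.2f}")  # ~0.5 means no leakage; well above 0.5
                                 # means the features act as proxies
```

A high probe AUC does not prove the credit model discriminates, but it tells the validation team exactly which features deserve scrutiny before regulators ask the same question.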


Achieving a Secure Cloud with Restructured NHIs

At its core, NHI restructuring is the process of redefining and aligning the various identities and linked secrets within your organization’s cloud infrastructure. The aim is to have a more effective, efficient, and secure system capable of monitoring and governing NHIs. This restructuring process includes a comprehensive review of the existing NHIs, secrets, and their permissions. It also involves determining which secrets are associated with which NHIs, who owns them, how they are used, and which vulnerabilities they may be exposed to. By performing this activity, a strong foundation can be laid for establishing a secure cloud environment that harnesses the power of NHI management. ... Why is the restructuring of NHIs not just a requirement but a strategic move for any digital enterprise? The answer lies in the potential weaknesses and vulnerabilities that can arise from poorly managed NHIs. Restructuring NHIs is not merely about enhancing cybersecurity but about developing a strategic advantage. This requires recognizing the significance of NHIs as a compelling line of defense against potential security breaches. By properly managing and restructuring NHIs, organizations can build comprehensive, effective, and potent cyber defenses. It enables them to anticipate potential threats, detect vulnerabilities, and implement proactive measures to mitigate risks.
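A review like this is easier to reason about with a concrete data model. The sketch below uses assumed field names (not any vendor's schema) to flag the three findings such a review typically surfaces: stale secrets, idle identities, and over-broad permissions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class NonHumanIdentity:
    """Illustrative record: which secret belongs to which identity,
    who owns it, what it can do, and when it was last used."""
    name: str
    owner: str
    permissions: list[str]
    secret_last_rotated: datetime
    last_used: datetime

def audit(nhis: list[NonHumanIdentity],
          max_secret_age_days: int = 90, max_idle_days: int = 30) -> None:
    now = datetime.now()
    for nhi in nhis:
        if now - nhi.secret_last_rotated > timedelta(days=max_secret_age_days):
            print(f"{nhi.name}: secret overdue for rotation (owner: {nhi.owner})")
        if now - nhi.last_used > timedelta(days=max_idle_days):
            print(f"{nhi.name}: idle identity, candidate for decommissioning")
        if "*" in nhi.permissions:
            print(f"{nhi.name}: wildcard permissions, needs least-privilege review")

audit([NonHumanIdentity("ci-deploy-bot", "platform-team", ["*"],
                        secret_last_rotated=datetime(2025, 1, 10),
                        last_used=datetime(2025, 9, 1))])
```

Even this toy version makes the governance point: every NHI needs a named owner, a rotation clock, and an explicit permission set, or the audit has nothing to check against.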


Boards are being told to rethink their role in cybersecurity

The report describes how ransomware attacks have become more targeted and disruptive. Threat actors are no longer just encrypting files. They are exploiting identity systems, help desks, and cloud infrastructure. One example highlighted is the growing use of social engineering against help desk staff, where attackers impersonate employees and convince support teams to reset credentials or modify multifactor authentication settings. By doing so, they bypass technical defenses and gain control of accounts. The report emphasizes that boards should pay attention to how identity is protected inside their organizations. Security teams may face resistance when trying to roll out stronger protections such as phishing-resistant multifactor authentication. Boards, according to the report, are in a position to set the tone and ensure these measures are adopted. ... The third area of focus is how boards can support innovation while ensuring cybersecurity is not left behind. The report argues that strong cybersecurity practices can help a company stand out by building trust with customers and enabling faster adoption of new technology. Boards are urged to encourage a risk-first mindset when new products or services are developed. That means security should be considered early in the process rather than added later. 


How to Overcome Five Key GenAI Deployment Challenges

Data is the lifeblood of artificial intelligence. Fortunately, generative AI does not demand the perfect, pristine data that traditional, transaction-based deterministic systems require. The key is ensuring AI has sufficient context from your business environment to deliver meaningful outputs – not perfect data, but the right data that’s relevant to the target use case. Don’t make the mistake of overcomplicating data preparation. Focus on giving AI systems the key information they need to create reliable and meaningful results. Partners can help identify your most important data, build a practical data foundation that balances quality and access, and guide you in adding more data as the project grows. ... AI initiatives often face their toughest technical challenges at launch. From model updates to data inconsistencies, a reliable partner ensures smooth deployment by anticipating and addressing these hurdles. Once a project is off the ground, they actively monitor performance while troubleshooting issues like model drift and mitigating data security and regulatory compliance challenges to keep the project on track. ... It’s not just technical issues that make GenAI hard. There’s also a human challenge. AI adoption requires buy-in from both business and IT leaders and support from actual end users.
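Drift monitoring, mentioned above, is one of the few pieces here that reduces to a formula. A common screen is the Population Stability Index, PSI = Σ (aᵢ − eᵢ) · ln(aᵢ / eᵢ) over bins i of expected (baseline) versus actual (recent) proportions. The sketch below compares a baseline sample against shifted production data; both samples are synthetic for illustration.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample (e.g., data the
    model launched with) and a recent production sample."""
    # Shared bin edges so both samples are histogrammed identically.
    edges = np.histogram_bin_edges(np.concatenate([expected, actual]), bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid division by, or log of, zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 10_000)  # distribution at launch
recent = rng.normal(0.4, 1.1, 10_000)    # shifted production inputs
print(f"PSI = {psi(baseline, recent):.3f}")  # rule of thumb: > 0.2 signals drift
```

Running a check like this on key input features and model scores each week is the kind of unglamorous monitoring that keeps a GenAI project from quietly degrading after the launch excitement fades.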