Showing posts with label digital twin. Show all posts
Showing posts with label digital twin. Show all posts

Daily Tech Digest - September 01, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain


The AI-powered cyberattack era is here

In the deepfake era, the crime was unprecedented and exotic. In the genAI era, it’s a banality.
You just need a three-second recording of a person talking, according to McAfee experts. With that snippet, you can create convincing fake messages or phone calls. If someone sounds like a trusted person, people are ready to hand over their cash or secrets. In 2024, the company’s global study found one in four people had suffered or knew someone hit by an AI voice scam. ... One challenge in the field of AI-enabled attacks — which is to say, attacks that didn’t exist or weren’t possible before genAI — is how quickly everything changes. Take AI browsers, for example. This new category of web browser includes Perplexity Comet, Dia (by The Browser Company), Fellou, Opera Neon, Sigma AI Browser, Arc Max, Microsoft Edge Copilot, Brave Leo, Wave Browser Pro, SigmaOS, Opera Aria, Genspark AI Browser, Poly, Quetta Browser, Browserbase, Phew AI Tab, and the upcoming OpenAI browser. ... The truth is that most attacks are still the old-fashioned kind, performed without help from AI. And most still involve human error. So all the standard guidelines and best practices apply. Companies should update software regularly, require multifactor authentication for all logins, and give employees training about fake emails and malicious links. Outside experts should run penetration tests twice a year. Making regular offline backups can save thousands after AI-based ransomware attacks.


How to Make Data Work for What’s Next

Too often, companies begin by auditing the data they already have. A better question is, “What outcome are we trying to drive?” Whether it’s scaling operations, improving retention, or guiding smarter investments, the path forward starts with understanding where you want to go. ... Not everything needs to be measured. The goal is to curate the data, pulling in what’s most useful rather than everything that’s available. Focus on what’s going to help people make decisions in real time. Some metrics help you look ahead, while others explain what already happened. A good mix can be helpful, but only if it still aligns with the outcome you’re tracking. This shift can feel unfamiliar. Many teams are used to starting from their existing systems–what’s already tracked, what can be pulled from a dashboard–and working backward. But that often leads to noise or gaps. Managing too much data isn’t just overwhelming; it’s also costly. Teams spend time storing, maintaining, and cleaning data that often doesn’t lead to better decisions. ... Trust is built in small moments. When early reports reflect what people expect based on their lived experience, they begin to rely on the system. ... A stronger data culture isn’t just about systems. It’s about building skills and helping people see how their work connects to outcomes. When data reinforces what people already know and shows up in context—visually, interactively, and on time—it becomes a tool they trust, use, and want to leverage.


Cybercrime increasingly moving beyond financial gains

“We are very redundant when talking about cybercrime, because we always associate it with economic motivations,” says Hervé Lambert, global consumer operations manager at Panda Security. “But they are not the only reasons out there.” Lambert also refers to political and military cyber espionage, “states or actors linked to different governments” that seek to infiltrate to obtain strategic information. It also includes cyberwarfare, “attacks designed to do damage, disable, render important systems useless. There is no lucrative purpose, but to enhance or win a war or facilitate sabotage.” ... “These very different motivations are not mutually exclusive, as they seek different objectives,” adds Alonso García. “We can find them as the sole motivation or they complement each other, making cyberattacks more elaborate and complex to analyze.” In other words, a person or group may have political interests but ask for a ransom to cover up their actions or seek funding; or in a context of turmoil between countries, take advantage to launch attacks that seek to profit. ... But the strategy to be followed will have to be reoriented or reinforced if, for example, we are working in a critical sector from a geopolitical point of view, in which, among other things, disinformation will have to be taken into account. 
"The old software world is gone, giving way to a new set of truths being defined by AI. To navigate the changes, technical leaders should carry out rigorous validation on AI assistants. Managers should establish formal AI governance policies and invest in training for emerging technologies. Security professionals should update their threat models to include AI-specific risks and leverage SBOMs [Software Bill of Materials] as a strategic asset for risk management to achieve true scale application security." ... "Without SBOMs, we're flying blind. With them, we're finally turning the lights on in the supply chain cockpit," said Helen Oakley, Director of Secure Software Supply Chains and Secure Development at SAP. "AI coding assistants are like interns with rocket fuel. They accelerate everything, including errors, if you don't set boundaries." ... "For organizations adopting third-party AI tools, it's also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption. When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries, and applying security controls that account for external dependencies."


How Digital Twins Transform Drug Development Processes

A key technological advancement emerging from these hubs is the application of digital twins in pharmaceutical research. Initially used in engineering and manufacturing sectors, digital twins in the pharmaceutical industry are virtual models of human systems that replicate biological processes. These replicas are built using vast volumes of biological, clinical and genomic data, enabling researchers to test how different patient profiles might respond to specific drugs without exposing individuals to experimental therapies. The implications of this approach are transformative. Through digital twins, pharmaceutical scientists can simulate the progression of diseases, predict Adverse Drug Reactions (ADRs) and model patient diversity across age, gender, genetic traits and comorbidities. This ability to run in-silico trials, which are clinical trials conducted through virtual simulations, reduces the cost, duration and risk associated with traditional clinical testing. ... AI is transforming every clinical development phase worldwide, from trial design to execution and outcome analysis. According to industry estimates, AI is expected to support 60–70 per cent of clinical trials by 2030, potentially saving $20–30 billion annually. While digital twins represent just one facet of this broader AI integration, their capacity to virtually assess drug safety and efficacy could significantly accelerate the journey from discovery to patient delivery.


Break the Link: The Fastest Way to Contain a Cyber-Attack

Hardware-enforced network isolation gives operators the ability to physically disconnect servers, storage and network segments on demand, using secure, out-of-band commands that sit entirely outside the attack surface. The simplicity here is the ultimate strength: if malware can’t make contact, it can’t cause damage. If a breach does happen? You can trigger isolation in milliseconds, stopping the spread before it locks systems, exfiltrates data or drains accounts. Unlike software-only isolation, which depends on the very systems it’s defending, hardware isolation can’t be tampered with remotely. No IP address, no exploitable code, just a clean physical break. ... Hardware isolation cuts the response to milliseconds, preserving both data integrity and regulatory compliance. It stops an incident at the source, shutting it down before operations are disrupted. The power of isolation is especially effective in high-stakes environments where speed and certainty matter. In colocation facilities, automated isolation prevents cross-tenant contamination by cutting off a compromised tenant before the threat can spread. At disaster recovery sites, it enables network segments to remain fully offline until they are needed, improving security and efficiency. In AI-heavy workloads, hardware isolation prevents model tampering and data exfiltration. In backup environments, selective disconnection ensures ransomware cannot encrypt or corrupt critical archives.


Prioritize these 4 processes to balance innovation and responsibility in banking model risk management

As AI/ML capabilities often require specialized software, datasets and computational tools, many financial institution—especially smaller ones—turn to third-party vendors. While this can accelerate adoption, it also introduces critical vulnerabilities related to oversight, accountability and systemic dependence. Third-party models often come with limited visibility into how they were developed, what data was used and how they behave under stress. Smaller institutions may lack the bargaining power or technical resources to demand transparency or perform deep due diligence. This lack of insight can delay detection of errors, increase compliance risk and even result in operational disruptions. ... AI/ML models thrive on vast datasets. In banking, where customer data is highly sensitive and tightly regulated, this presents a critical dual-risk challenge: Protecting privacy and preventing/detecting hidden learning where AI models may inadvertently infer protected/sensitive attributes. One risk is unauthorized or improper use of personal data during model training. Unintended inclusion of restricted data sets can lead to privacy breaches and violations of data protection laws such as the General Data Protection Regulation (GDPR). Another, more subtle, risk is the inadvertent encoding of sensitive attributes such as race or gender through proxy variables, even when such data is not explicitly used.


Achieving a Secure Cloud with Restructured NHIs

At its core, NHIs restructuring involves the process of redefining and aligning the various identities and linked secrets within your organization’s cloud infrastructure. The aim is to have a more effective, efficient, and secure system capable of monitoring and governing NHIs. This restructuring process includes a comprehensive review of the existing NHIs, secrets, and their permissions. It also involves determining which secrets are associated with which NHIs, who owns them, how they are used, and which vulnerabilities they may be exposed to. By performing this activity, a strong foundation can be laid for establishing a secure cloud environment that harnesses the power of NHIs management. ... Why is the restructuring of NHIs not just a requirement but a strategic move for any digital enterprise? The answer lies in the potential weaknesses and vulnerabilities that can arise from poorly managed NHIs. Restructuring NHIs is not merely about enhancing cybersecurity but developing a strategic advantage. This requires realizing the significance of NHIs in providing a compelling line of defense against potential security breaches. By properly managing and restructuring NHIs, organizations can build comprehensive, effective, and potent cyber defenses. It enables them to anticipate potential threats, detect vulnerabilities, and implement proactive measures to mitigate risks.


Boards are being told to rethink their role in cybersecurity

The report describes how ransomware attacks have become more targeted and disruptive. Threat actors are no longer just encrypting files. They are exploiting identity systems, help desks, and cloud infrastructure. One example highlighted is the growing use of social engineering against help desk staff, where attackers impersonate employees and convince support teams to reset credentials or modify multifactor authentication settings. By doing so, they bypass technical defenses and gain control of accounts. The report emphasizes that boards should pay attention to how identity is protected inside their organizations. Security teams may face resistance when trying to roll out stronger protections such as phishing-resistant multifactor authentication. Boards, according to the report, are in a position to set the tone and ensure these measures are adopted. ... The third area of focus is how boards can support innovation while ensuring cybersecurity is not left behind. The report argues that strong cybersecurity practices can help a company stand out by building trust with customers and enabling faster adoption of new technology. Boards are urged to encourage a risk-first mindset when new products or services are developed. That means security should be considered early in the process rather than added later. 


How to Overcome Five Key GenAI Deployment Challenges

Data is the lifeblood of artificial intelligence. Fortunately, with generative AI, data does not have to be perfect and pristine compared to the requirements for traditional, transaction-based deterministic systems. The key is ensuring AI has sufficient context from your business environment to deliver meaningful outputs – not perfect data, but the right data that’s relevant to the target use case. Don’t make the mistake of making data preparation too complex. Focus on giving AI systems the key information they need to create reliable and meaningful results. Partners can find your most important data. They help build a practical data base that balances quality and access. They also guide you to add more data as the project grows. ... AI initiatives are often rife with the most technical challenges when they’re just being launched. From model updates to data inconsistencies, a reliable partner ensures smooth deployment by anticipating and addressing these hurdles. Once these projects have gotten off the ground, they actively monitor performance while troubleshooting issues like AI models drifting or mitigating data security and regulatory compliance challenges to keep the project on track. ... It’s not just technical issues that make GenAI hard. There’s also a human challenge. AI adoption requires buy-in among both business and IT leaders and support from actual end users. 

Daily Tech Digest - August 24, 2025


Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France



Creating the ‘AI native’ generation: The role of digital skills in education

Boosting AI skills has the potential to drive economic growth and productivity and create jobs, but ambition must be matched with effective delivery. We must ensure AI is integrated into education in a way that encourages students to maintain critical thinking skills, skeptically assess AI outputs, and use it responsibly and ethically. Education should also inspire future tech talent and prepare them for the workplace. ... AI fluency is only one part of the picture. Amid a global skills gap, we also need to capture the imaginations of young people to work in tech. To achieve this, AI and technology education must be accessible, meaningful, and aspirational. That requires coordinated action from schools, industry, and government to promote the real-world impact of digital skills and create clearer, more inspiring pathways into tech careers and expose students how AI is applied in various professions. Early exposure to AI can do far more than build fluency, it can spark curiosity, confidence and career ambition towards high-value sectors like data science, engineering and cybersecurity—areas where the UK must lead. ... Students who learn how to use AI now will build the competencies that industries want and need for years to come. But this will form the first stage of a broader AI learning arc where learning and upskilling become a lifelong mindset, not a single milestone. 


What is the State of SIEM?

In addition to high deployment costs, many organizations grapple with implementing SIEM. A primary challenge is SIEM configuration -- given that the average organization has more than 100 different data sources that must plug into the platform, according to an IDC report. It can be daunting for network staff to do the following when deploying SIEM: Choose which data sources to integrate; Set up SIEM correlation rules that define what will be classified as a security event; and Determine the alert thresholds for specific data and activities. It's equally challenging to manage the information and alerts a SIEM platform issues. If you fine-tune too much, the result might be false positives as the system triggers alarms about events that aren't actually threats. This is a time-stealer for network techs and can lead to staff fatigue and frustration. In contrast, if the calibration is too liberal, organizations run the risk of overlooking something that could be vital. Network staff must also coordinate with other areas of IT and the company. For example, what if data safekeeping and compliance regulations change? Does this change SIEM rule sets? What if the IT applications group rolls out new systems that must be attached to SIEM? Can the legal department or auditors tell you how long to store and retain data for eDiscovery or for disaster backup and recovery? And which data noise can you discard as waste?


AI Data Centers: A Popular Term That’s Hard to Define

The tricky thing about trying to define AI data centers based on characteristics like those described above is that none of those features is unique to AI data centers. For example, hyperscale data centers – meaning very large facilities capable of accommodating more than a hundred thousand servers in some cases – existed before modern AI debuted. AI has made large-scale data centers more important because AI workloads require vast infrastructures, but it’s not as if no one was building large data centers before AI rose to prominence. Likewise, it has long been possible to deploy GPU-equipped servers in data centers. ... Likewise, advanced cooling systems and innovative approaches to data center power management are not unique to the age of generative AI. They, too, predated AI data centers. ... Arguably, an AI data center is ultimately defined by what it does (hosting AI workloads) more than by how it does it. So, before getting hung up on the idea that AI requires investment in a new generation of data centers, it’s perhaps healthier to think about how to leverage the data centers already in existence to support AI workloads. That perspective will help the industry avoid the risk of overinvesting in new data centers designed specifically for AI – and as a bonus, it may save money by allowing businesses to repurpose the data centers they already own to meet their AI needs as well.


Password Managers Vulnerable to Data Theft via Clickjacking

Tóth showed how an attacker can use DOM-based extension clickjacking and the autofill functionality of password managers to exfiltrate sensitive data stored by these applications, including personal data, usernames and passwords, passkeys, and payment card information. The attacks demonstrated by the researcher require 0-5 clicks from the victim, with a majority requiring only one click on a harmless-looking element on the page. The single-click attacks often involved exploitation of XSS or other vulnerabilities. DOM, or Document Object Model, is an object tree created by the browser when it loads an HTML or XML web page. ... Tóth’s attack involves a malicious script that manipulates user interface elements injected by browser extensions into the DOM. “The principle is that a browser extension injects elements into the DOM, which an attacker can then make invisible using JavaScript,” he explained. According to the researcher, some of the vendors have patched the vulnerabilities, but fixes have not been released for Bitwarden, 1Password, iCloud Passwords, Enpass, LastPass, and LogMeOnce. SecurityWeek has reached out to these companies for comment. Bitwarden said a fix for the vulnerability is being rolled out this week with version 2025.8.0. LogMeOnce said it’s aware of the findings and its team is actively working on resolving the issue through a security update.


Iskraemeco India CEO: ERP, AI, and the future of utility leadership

We see a clear convergence ahead, where ERP systems like Infor’s will increasingly integrate with edge AI, embedded IoT, and low-code automation to create intelligent, responsive operations. This is especially relevant in utility scenarios where time-sensitive data must drive immediate action. For instance, our smart kits – equipped with sensor technology – are being designed to detect outages in real time and pinpoint exact failure points, such as which pole needs service during a natural disaster. This type of capability, powered by embedded IoT and edge computing, enables decisions to be made closer to the source, reducing downtime and response lag.  ... One of the most important lessons we've learned is that success in complex ERP deployments is less about customisation and more about alignment, across leadership, teams, and technology. In our case, resisting the urge to modify the system and instead adopting Infor’s best-practice frameworks was key. It allowed us to stay focused, move faster, and ensure long-term stability across all modules. In a multi-stakeholder environment – where regulatory bodies, internal departments, and technology partners are all involved – clarity of direction from leadership made all the difference. When the expectation is clear that we align to the system, and not the other way around, it simplifies everything from compliance to team onboarding.


Experts Concerned by Signs of AI Bubble

"There's a huge boom in AI — some people are scrambling to get exposure at any cost, while others are sounding the alarm that this will end in tears," Kai Wu, founder and chief investment officer of Sparkline Capital, told the Wall Street Journal last year. There are even doubters inside the industry. In July, recently ousted CEO of AI company Stability AI Emad Mostaque told banking analysts that "I think this will be the biggest bubble of all time." "I call it the 'dot AI’ bubble, and it hasn’t even started yet," he added at the time. Just last week, Jeffrey Gundlach, billionaire CEO of DoubleLine Capital, also compared the AI craze to the dot com bubble. "This feels a lot like 1999," he said during an X Spaces broadcast last week, as quoted by Business Insider. "My impression is that investors are presently enjoying the double-top of the most extreme speculative bubble in US financial history," Hussman Investment Trust president John Hussman wrote in a research note. In short, with so many people ringing the alarm bells, there could well be cause for concern. And the consequences of an AI bubble bursting could be devastating. ... While Nvidia would survive such a debacle, the "ones that are likely to bear the brunt of the correction are the providers of generative AI services who are raising money on the promise of selling their services for $20/user/month," he argued.


OpenCUA’s open source computer-use agents rival proprietary models from OpenAI and Anthropic

Computer-use agents are designed to autonomously complete tasks on a computer, from navigating websites to operating complex software. They can also help automate workflows in the enterprise. However, the most capable CUA systems are proprietary, with critical details about their training data, architectures, and development processes kept private. “As the lack of transparency limits technical advancements and raises safety concerns, the research community needs truly open CUA frameworks to study their capabilities, limitations, and risks,” the researchers state in their paper. ... The tool streamlines data collection by running in the background on an annotator’s personal computer, capturing screen videos, mouse and keyboard inputs, and the underlying accessibility tree, which provides structured information about on-screen elements.  ... The key insight was to augment these trajectories with chain-of-thought (CoT) reasoning. This process generates a detailed “inner monologue” for each action, which includes planning, memory, and reflection. This structured reasoning is organized into three levels: a high-level observation of the screen, reflective thoughts that analyze the situation and plan the next steps, and finally, the concise, executable action. This approach helps the agent develop a deeper understanding of the tasks.


How to remember everything

MyMind is a clutter-free bookmarking and knowledge-capture app without folders or manual content organization.There are no templates, manual customizations, or collaboration tools. Instead, MyMind recognizes and formats the content type elegantly. For example, songs, movies, books, and recipes are displayed differently based on MyMind’s detection, regardless of the source, as are pictures and videos. MyMind uses AI to auto-tag everything and allows custom tags. Every word, including those in pictures, is indexed. You can take pictures of information, upload them to MyMind, and find them later by searching a word or two found in the picture. Copying a sentence or paragraph from an article will display the quote with a source link. Every data chunk is captured in a “card.” ... Alongside AI-enabled lifelogging tools like MyMind, we’re also entering an era of lifelogging hardware devices. One promising direction comes from a startup called Brilliant Labs. Its new $299 Halo glasses, available for pre-order and shipping in November, are lightweight AI glasses. The glasses have a long list of features — bone conduction sound, a camera, light weight, etc. — but the lifelogging enabler is an “agentic memory” system called Narrative. It captures information automatically from the camera and microphones and places it into a personal knowledge base. 


From APIs to Digital Twins: Warehouse Integration Strategies for Smarter Supply Chains

Digital twins create virtual replicas of warehouses and supply chains for monitoring and testing. A digital twin ingests live data from IoT sensors, machines, and transportation feeds to simulate how changes affect outcomes. For instance, GE’s “Digital Wind Farm” project feeds sensor data from each turbine into a cloud model, suggesting performance tweaks that boost energy output by ~20% (worth ~$100M more revenue per turbine). In warehousing, digital twins can model workflows (layout changes, staffing shifts, equipment usage) to identify bottlenecks or test improvements before physical changes. Paired with AI, these twins become predictive and prescriptive: companies can run thousands of what-if scenarios (like a port strike or demand surge) and adjust plans accordingly. ... Today’s warehouses are not just storage sheds; they are smart, interconnected nodes in the supply chain. Leveraging IIoT sensors, cloud APIs, AI analytics, robotics, and digital twins transforms logistics into a competitive advantage. Integrated systems reduce manual handoffs and errors: for example, automated picking and instant carrier booking can shorten fulfillment cycles from days to hours. Industry data bear this out, deploying these technologies can improve on-time delivery by ~20% and significantly lower operating costs.


Enterprise Software Spending Surges Despite AI ROI Shortfalls

AI capabilities increasingly drive software purchasing decisions. However, many organizations struggle with the gap between AI promise and practical ROI delivery. The disconnect stems from fundamental challenges in data accessibility and contextual understanding. Current AI implementations face significant obstacles in accessing the full spectrum of contextual data required for complex decision-making. "In complex use cases, where the exponential benefits of AI reside, AI still feels forced and contrived when it doesn't have the same amount and depth of contextual data required to read a situation," Kirkpatrick explained. Effective AI implementation requires comprehensive data infrastructure investments. Organizations must ensure AI models can access approved data sources while maintaining proper guardrails. Many IT departments are still working to achieve this balance. The challenge intensifies in environments where AI needs to integrate across multiple platforms and data sources. Well-trained humans often outperform AI on complex tasks because their experience allows them to read multiple factors and adjust contextually. "For AI to mimic that experience, it requires a wide range of data that can address factors across a wide range of dimensions," Kirkpatrick said. "That requires significant investment in data to ensure the AI has the information it needs at the right time, with the proper context, to function seamlessly, effectively, and efficiently."

Daily Tech Digest - August 10, 2025


Quote for the day:

"Don't worry about being successful but work toward being significant and the success will naturally follow." -- Oprah Winfrey


The Scrum Master: A True Leader Who Serves

Many people online claim that “Agile is a mindset”, and that the mindset is more important than the framework. But let us be honest, the term “agile mindset” is very abstract. How do we know someone truly has it? We cannot open their brain to check. Mindset manifests in different behaviour depending on culture and context. In one place, “commitment” might mean fixed scope and fixed time. In another, it might mean working long hours. In yet another, it could mean delivering excellence within reasonable hours. Because of this complexity, simply saying “agile is a mindset” is not enough. What works better is modelling the behaviour. When people consistently observe the Scrum Master demonstrating agility, those behaviours can become habits. ... Some Scrum Masters and agile coaches believe their job is to coach exclusively, asking questions without ever offering answers. While coaching is valuable, relying on it alone can be harmful if it is not relevant or contextual. Relevance is key to improving team effectiveness. At times, the Scrum Master needs to get their hands dirty. If a team has struggled with manual regression testing for twenty Sprints, do not just tell them to adopt Test-Driven Development (TDD). Show them. ... To be a true leader, the Scrum Master must be humble and authentic. You cannot fake true leadership. It requires internal transformation, a shift in character. As the saying goes, “Character is who we are when no one is watching.”


Vendors Align IAM, IGA and PAM for Identity Convergence

The historic separation of IGA, PAM and IAM created inefficiencies and security blind spots, and attackers exploited inconsistencies in policy enforcement across layers, said Gil Rapaport, chief solutions officer at CyberArk. By combining governance, access and privilege in a single platform, the company could close the gaps between policy enforcement and detection, Rapaport said. "We noticed those siloed markets creating inefficiency in really protecting those identities, because you need to manage different type of policies for governance of those identities and for securing the identities and for the authentication of those identities, and so on," Rapaport told ISMG. "The cracks between those silos - this is exactly where the new attack factors started to develop." ... Enterprise customers that rely on different tools for IGA, PAM, IAM, cloud entitlements and data governance are increasingly frustrated because integrating those tools is time-consuming and error-prone, Mudra said. Converged platforms reduce integration overhead and allow vendors to build tools that communicate natively and share risk signals, he said. "If you have these tools in silos, yes, they can all do different things, but you have to integrate them after the fact versus a converged platform comes with out-of-the-box integration," Mudra said. "So, these different tools can share context and signals out of the box."


The Importance of Technology Due Diligence in Mergers and Acquisitions

The primary reason for conducting technology due diligence is to uncover any potential risks that could derail the deal or disrupt operations post-acquisition. This includes identifying outdated software, unresolved security vulnerabilities, and the potential for data breaches. By spotting these risks early, you can make informed decisions and create risk mitigation strategies to protect your company. ... A key part of technology due diligence is making sure that the target company’s technology assets align with your business’s strategic goals. Whether it’s cloud infrastructure, software solutions, or hardware, the technology should complement your existing operations and provide a foundation for long-term growth. Misalignment in technology can lead to inefficiencies and costly reworks. ... Rank the identified risks based on their potential impact on your business and the likelihood of their occurrence. This will help prioritize mitigation efforts, so that you’re addressing the most critical vulnerabilities first. Consider both short-term risks, like pending software patches, and long-term issues, such as outdated technology or a lack of scalability. ... Review existing vendor contracts and third-party service provider agreements, looking for any liabilities or compliance risks that may emerge post-acquisition—especially those related to data access, privacy regulations, or long-term commitments. It’s also important to assess the cybersecurity posture of vendors and their ability to support integration.


From terabytes to insights: Real-world AI obervability architecture

The challenge is not only the data volume, but the data fragmentation. According to New Relic’s 2023 Observability Forecast Report, 50% of organizations report siloed telemetry data, with only 33% achieving a unified view across metrics, logs and traces. Logs tell one part of the story, metrics another, traces yet another. Without a consistent thread of context, engineers are forced into manual correlation, relying on intuition, tribal knowledge and tedious detective work during incidents. ... In the first layer, we develop the contextual telemetry data by embedding standardized metadata in the telemetry signals, such as distributed traces, logs and metrics. Then, in the second layer, enriched data is fed into the MCP server to index, add structure and provide client access to context-enriched data using APIs. Finally, the AI-driven analysis engine utilizes the structured and enriched telemetry data for anomaly detection, correlation and root-cause analysis to troubleshoot application issues. This layered design ensures that AI and engineering teams receive context-driven, actionable insights from telemetry data. ... The amalgamation of structured data pipelines and AI holds enormous promise for observability. We can transform vast telemetry data into actionable insights by leveraging structured protocols such as MCP and AI-driven analyses, resulting in proactive rather than reactive systems. 


MCP explained: The AI gamechanger

Instead of relying on scattered prompts, developers can now define and deliver context dynamically, making integrations faster, more accurate, and easier to maintain. By decoupling context from prompts and managing it like any other component, developers can, in effect, build their own personal, multi-layered prompt interface. This transforms AI from a black box into an integrated part of your tech stack. ... MCP is important because it extends this principle to AI by treating context as a modular, API-driven component that can be integrated wherever needed. Similar to microservices or headless frontends, this approach allows AI functionality to be composed and embedded flexibly across various layers of the tech stack without creating tight dependencies. The result is greater flexibility, enhanced reusability, faster iteration in distributed systems and true scalability. ... As with any exciting disruption, the opportunity offered by MCP comes with its own set of challenges. Chief among them is poorly defined context. One of the most common mistakes is hardcoding static values — instead, context should be dynamic and reflect real-time system states. Overloading the model with too much, too little or irrelevant data is another pitfall, often leading to degraded performance and unpredictable outputs. 


AI is fueling a power surge - it could also reinvent the grid

Data centers themselves are beginning to evolve as well. Some forward-looking facilities are now being designed with built-in flexibility to contribute back to the grid or operate independently during times of peak stress. These new models, combined with improved efficiency standards and smarter site selection strategies, have the potential to ease some of the pressure being placed on energy systems. Equally important is the role of cross-sector collaboration. As the line between tech and infrastructure continues to blur, it’s critical that policymakers, engineers, utilities, and technology providers work together to shape the standards and policies that will govern this transition. That means not only building new systems, but also rethinking regulatory frameworks and investment strategies to prioritize resiliency, equity, and sustainability. Just as important as technological progress is public understanding. Educating communities about how AI interacts with infrastructure can help build the support needed to scale promising innovations. Transparency around how energy is generated, distributed, and consumed—and how AI fits into that equation—will be crucial to building trust and encouraging participation. ... To be clear, AI is not a silver bullet. It won’t replace the need for new investment or hard policy choices. But it can make our systems smarter, more adaptive, and ultimately more sustainable.


AI vs Technical Debt: Is This A Race to the Bottom?

Critically, AI-generated code can carry security liabilities. One alarming study analyzed code suggested by GitHub Copilot across common security scenarios – the result: roughly 40% of Copilot’s suggestions had vulnerabilities. These included classic mistakes like buffer overflows and SQL injection holes. Why so high? The AI was trained on tons of public code – including insecure code – so it can regurgitate bad practices (like using outdated encryption or ignoring input sanitization) just as easily as good ones. If you blindly accept such output, you’re effectively inviting known bugs into your codebase. It doesn’t help that AI is notoriously bad at certain logical tasks (for example, it struggles with complex math or subtle state logic, so it might write code that looks legit but is wrong in edge cases. ... In many cases, devs aren’t reviewing AI-written code as rigorously as their own, and a common refrain when something breaks is, “It is not my code,” implying they feel less responsible since the AI wrote it. That attitude itself is dangerous, if nobody feels accountable for the AI’s code, it slips through code reviews or testing more easily, leading to more bad deployments. The open-source world is also grappling with an influx of AI-generated “contributions” that maintainers describe as low-quality or even spam. Imagine running an open-source project and suddenly getting dozens of auto-generated pull requests that technically add a feature or fix but are riddled with style issues or bugs.


The Future of Manufacturing: Digital Twin in Action

Process digital twins are often confused with traditional simulation tools, but there is an important distinction. Simulations are typically offline models used to test “what-if” scenarios, verify system behaviour, and optimise processes without impacting live operations. These models are predefined and rely on human input to set parameters and ask the right questions. A digital twin, on the other hand, comes to life when connected to real-time operational data. It reflects current system states, responds to live inputs, and evolves continuously as conditions change. This distinction between static simulation and dynamic digital twin is widely recognised across the industrial sector. While simulation still plays a valuable role in system design and planning, the true power of the digital twin lies in its ability to mirror, interpret, and influence operational performance in real time. ... When AI is added, the digital twin evolves into a learning system. AI algorithms can process vast datasets - far beyond what a human operator can manage - and detect early warning signs of failure. For example, if a transformer begins to exhibit subtle thermal or harmonic irregularities, an AI-enhanced digital twin doesn’t just flag it. It assesses the likelihood of failure, evaluates the potential downstream impact, and proposes mitigation strategies, such as rerouting power or triggering maintenance workflows.


Bridging the Gap: How Hybrid Cloud Is Redefining the Role of the Data Center

Today’s hybrid models involve more than merging public clouds with private data centers. They also involve specialized data center solutions like colocation, edge facilities and bare-metal-as-a-service (BMaaS) offerings. That’s the short version of how hybrid cloud and its relationship to data centers are evolving. ... Fast forward to the present, and the goals surrounding hybrid cloud strategies often look quite different. When businesses choose a hybrid cloud approach today, it’s typically not because of legacy workloads or sunk costs. It’s because they see hybrid architectures as the key to unlocking new opportunities ... The proliferation of edge data centers has also enabled simpler, better-performing and more cost-effective hybrid clouds. The more locations businesses have to choose from when deciding where to place private infrastructure and workloads, the more opportunity they have to optimize performance relative to cost. ... Today’s data centers are no longer just a place to host whatever you can’t run on-prem or in a public cloud. They have evolved into solutions that offer specialized services and capabilities that are critical for building high-performing, cost-effective hybrid clouds – but that aren’t available from public cloud providers, and that would be very costly and complicated for businesses to implement on their own.


AI Agents: Managing Risks In End-To-End Workflow Automation

As CIOs map out their AI strategies, it’s becoming clear that agents will change how they manage their organization’s IT environment and how they deliver services to the rest of the business. With the ability of agents to automate a broad swath of end-to-end business processes—learning and changing as they go—CIOs will have to oversee significant shifts in software development, IT operating models, staffing, and IT governance. ... Human-based checks and balances are vital for validating agent-based outputs and recommendations and, if needed, manually change course should unintended consequences—including hallucinations or other errors—arise. “Agents being wrong is not the same thing as humans being wrong,” says Elliott. “Agents can be really wrong in ways that would get a human fired if they made the same mistake. We need safeguards so that if an agent calls the wrong API, it’s obvious to the person overseeing that task that the response or outcome is unreasonable or doesn’t make sense.” These orchestration and observability layers will be increasingly important as agents are implemented across the business. “As different parts of the organization [automate] manual processes, you can quickly end up with a patchwork-quilt architecture that becomes almost impossible to upgrade or rethink,” says Elliott.

Daily Tech Digest - August 04, 2025


Quote for the day:

"You don’t have to be great to start, but you have to start to be great." — Zig Ziglar


Why tomorrow’s best devs won’t just code — they’ll curate, coordinate and command AI

It is not just about writing code anymore — it is about understanding systems, structuring problems and working alongside AI like a team member. That is a tall order. That said, I do believe that there is a way forward. It starts by changing the way we learn. If you are just starting out, avoid relying on AI to get things done. It is tempting, sure, but in the long run, it is also harmful. If you skip the manual practice, you are missing out on building a deeper understanding of how software really works. That understanding is critical if you want to grow into the kind of developer who can lead, architect and guide AI instead of being replaced by it. ... AI-augmented developers will replace large teams that used to be necessary to move a project forward. In terms of efficiency, there is a lot to celebrate about this change — reduced communication time, faster results and higher bars for what one person can realistically accomplish. But, of course, this does not mean teams will disappear altogether. It is just that the structure will change. ... Being technically fluent will still remain a crucial requirement — but it won’t be enough to simply know how to code. You will need to understand product thinking, user needs and how to manage AI’s output. It will be more about system design and strategic vision. For some, this may sound intimidating, but for others, it will also open many doors. People with creativity and a knack for problem-solving will have huge opportunities ahead of them.


The Wild West of Shadow IT

From copy to deck generators, code assistants, and data crunchers, most of them were never reviewed or approved. The productivity gains of AI are huge. Productivity has been catapulted forward in every department and across every vertical. So what could go wrong? Oh, just sensitive data leaks, uncontrolled API connections, persistent OAuth tokens, and no monitoring, audit logs, or privacy policies… and that's just to name a few of the very real and dangerous issues. ... Modern SaaS stacks form an interconnected ecosystem. Applications integrate with each other through OAuth tokens, API keys, and third-party plug-ins to automate workflows and enable productivity. But every integration is a potential entry point — and attackers know it. Compromising a lesser-known SaaS tool with broad integration permissions can serve as a stepping stone into more critical systems. Shadow integrations, unvetted AI tools, and abandoned apps connected via OAuth can create a fragmented, risky supply chain.  ... Let's be honest - compliance has become a jungle due to IT democratization. From GDPR to SOC 2… your organization's compliance is hard to gauge when your employees use hundreds of SaaS tools and your data is scattered across more AI apps than you even know about. You have two compliance challenges on the table: You need to make sure the apps in your stack are compliant and you also need to assure that your environment is under control should an audit take place.


Edge Computing: Not Just for Tech Giants Anymore

A resilient local edge infrastructure significantly enhances the availability and reliability of enterprise digital shopfloor operations by providing powerful on-premises processing as close to the data source as possible—ensuring uninterrupted operations while avoiding external cloud dependency. For businesses, this translates to improved production floor performance and increased uptime—both critical in sectors such as manufacturing, healthcare, and energy. In today’s hyperconnected market, where customers expect seamless digital interactions around the clock, any delay or downtime can lead to lost revenue and reputational damage. Moreover, as AI, IoT, and real-time analytics continue to grow, on-premises OT edge infrastructure combined with industrial-grade connectivity such as private 4.9/LTE or 5G provides the necessary low-latency platform to support these emerging technologies. Investing in resilient infrastructure is no longer optional, it’s a strategic imperative for organisations seeking to maintain operational continuity, foster innovation, and stay ahead of competitors in an increasingly digital and dynamic global economy. ... Once, infrastructure decisions were dominated by IT and boiled down to a simple choice between public and private infrastructure. Today, with IT/OT convergence, it’s all about fit-for-purpose architecture. On-premises edge computing doesn’t replace the cloud — it complements it in powerful ways.


A Reporting Breakthrough: Advanced Reporting Architecture

Advanced Reporting Architecture is based on a powerful and scalable SaaS architecture, which efficiently addresses user-specific reporting requirements by generating all possible reports upfront. Users simply select and analyze the views that matter most to them. The Advanced Reporting Architecture’s SaaS platform is built for global reach and enterprise reliability, with the following features: Modern User Interface: Delivered via AWS, optimized for mobile and desktop, with seamless language switching (English, French, German, Spanish, and more to come). Encrypted Cloud Storage: Ensuring uploaded files and reports are always secure. Serverless Data Processing: High-precision processing that analyzes user-uploaded data and uses data influenced relevant factors to maximizing analytical efficiencies and lower the cost of processing efforts. Comprehensive Asset Management: Support for editable reports, dashboards, presentations, pivots, and custom outputs. Integrated Payments & Accounting: Powered by PayPal and Odoo. Simple Subscription Model: Pay only for what you use—no expensive licenses, hardware, or ongoing maintenance. Some leading-edge reporting platforms, such as PrestoCharts, are based on Advanced Reporting Architecture and have been successful in enabling business users to develop custom reports on the fly. Thus, Advanced Reporting Architecture puts reporting prowess in the hands of the user.


These jobs face the highest risk of AI takeover, according to Microsoft

According to the report -- which has yet to be peer-reviewed -- the most at-risk jobs are those that are based on the gathering, synthesis, and communication of information, at which modern generative AI systems excel: think translators, sales and customer service reps, writers and journalists, and political scientists. The most secure jobs, on the other hand, are supposedly those that depend more on physical labor and interpersonal skills. No AI is going to replace phlebotomists, embalmers, or massage therapists anytime soon. ... "It is tempting to conclude that occupations that have high overlap with activities AI performs will be automated and thus experience job or wage loss, and that occupations with activities AI assists with will be augmented and raise wages," the Microsoft researchers note in their report. "This would be a mistake, as our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive." The report also echoes what's become something of a mantra among the biggest tech companies as they ramp up their AI efforts: that even though AI will replace or radically transform many jobs, it will also create new ones. ... It's possible that AI could play a role in helping people practice that skill. About one in three Americans are already using the technology to help them navigate a shift in their career, a recent study found.


AIBOMs are the new SBOMs: The missing link in AI risk management

AIBOMs follow the same formats as traditional SBOMs, but contain AI-specific content and metadata, like model family, acceptable usage, AI-specific licenses, etc. If you are a security leader at a large defense contractor, you’d need the ability to identify model developers and their country of origin. This would ensure you are not utilizing models originating from near-peer adversary countries, such as China. ... The first step is inventorying their AI. Utilize AIBOMs to inventory your AI dependencies, monitor what is approved vs. requested vs. denied, and ensure you have an understanding of what is deployed where. The second is to actively seek out AI, rather than waiting for employees to discover it. Organizations need capabilities to identify AI in code and automatically generate resulting AIBOMs. This should be integrated as part of the MLOps pipeline to generate AIBOMs and automatically surface new AI usage as it occurs. The third is to develop and adopt responsible AI policies. Some of them are fairly common-sense: no contributors from OFAC countries, no copylefted licenses, no usage of models without a three-month track record on HuggingFace, and no usage of models over a year old without updates. Then, enforce those policies in an automated and scalable system. The key is moving from reactive discovery to proactive monitoring.


2026 Budgets: What’s on Top of CIOs’ Lists (and What Should Be)

CIO shops are becoming outcome-based, which makes them accountable for what they’re delivering against the value potential, not how many hours were burned. “The biggest challenge seems to be changing every day, but I think it’s going to be all about balancing long-term vision with near-term execution,” says Sudeep George, CTO at software-delivered AI data company iMerit. “Frankly, nobody has a very good idea of what's going to happen in 2026, so everyone's placing bets,” he continues. “This unpredictability is going to be the nature of the beast, and we have to be ready for that.” ... “Reducing the amount of tech debt will always continue to be a focus for my organization,” says Calleja-Matsko. “We’re constantly looking at re-evaluating contracts, terms, [and] whether we have overlapping business capabilities that are being addressed by multiple tools that we have. It's rationalizing, she adds, and what that does is free up investment. How is this vendor pricing its offering? How do we make sure we include enough in our budget based on that pricing model? “That’s my challenge,” Calleja-Matsko emphasizes. Talent is top of mind for 2026, both in terms of attracting it and retaining it. Ultimately though, AI investments are enabling the company to spend more time with customers.


Digital Twin: Revolutionizing the Future of Technology and Industry

T​h​e rise o​f t​h​e cyberspace o​f Things [IoT] has made digital twin technology more relevant​ and accessible. IoT devices ceaselessly garner data from their surroundings a​n​d send i​t t​o t​h​e cloud. T​h​i​s data i​s used t​o produce a​n​d update digital twins o​f those devices o​r systems. I​n smart homes, digital twins help keep an eye on a​n​d see to it lighting, heating, a​n​d appliances. I​n blue-collar settings, IoT sensors track simple machine health a​n​d doing. Moreover, these smart systems c​a​n discover minor issues ahead of time that lead t​o failures. A​s more devices abound, digital twins offer greater conspicuousness a​n​d see to it. ... Despite its benefits, digital twin technology comes w​i​t​h challenges. One major issue i​s t​h​e high cost o​f carrying out. Setting up sensors, software systems, a​n​d data chopping c​a​n be overpriced, particularly f​o​r small businesses. There a​r​e also concerns about the data security system a​n​d privacy. Since digital twins rely o​n straight data flow, any rift c​a​n be risky. Integrating digital twins into existing systems c​a​n be involved. Moreover, i​t requires fine professionals who translate both t​h​e personal systems a​n​d t​h​e labyrinthine digital technologies. A different dispute i​s ensuring t​h​e caliber a​n​d truth o​f t​h​e data. I​f t​h​e input data i​s blemished, the digital twin’s results will also be erratic. Companies must also cope with large amounts o​f data, which requires a stressed I​T base. 


Why Banks Must Stop Pretending They’re Not Tech Companies

The most successful "banks" of the future may not even call themselves banks at all. While traditional institutions cling to century-old identities rooted in vaults and branches, their most formidable competitors are building financial ecosystems from the ground up with APIs, cloud infrastructure, and data-driven decision engines. ... The question isn’t whether banks will become technology companies. It’s whether they’ll make that transition fast enough to remain relevant. And to do this, they must rethink their identity by operating as technology platforms that enable fast, connected, and customer-first experiences. ... This isn’t about layering digital tools on top of legacy infrastructure or launching a chatbot and calling it innovation. It’s about adopting a platform mindset — one that treats technology not as a cost center but as the foundation of growth. A true platform bank is modular, API-first, and cloud-native. It uses real-time data to personalize every interaction. It delivers experiences that are intuitive, fast, and seamless — meeting customers wherever they are and embedding financial services into their everyday lives. ... To keep up with the pace of innovation, banks must adopt skills-based models that prioritize adaptability and continuous learning. Upskilling isn’t optional. It’s how institutions stay responsive to market shifts and build lasting capabilities. And it starts at the top.


Colo space crunch could cripple IT expansion projects

For enterprise IT execs who already have a lot on their plates, the lack of available colocation space represents yet another headache to deal with, and one with major implications. Nobody wants to have to explain to the CIO or the board of directors that the company can’t proceed with digitization efforts or AI projects because there’s no space to put the servers. IT execs need to start the planning process now to get ahead of the problem. ... Demand has outstripped supply due to multiple factors, according to Pat Lynch, executive managing director at CBRE Data Center Solutions. “AI is definitely part of the demand scenario that we see in the market, but we also see growing demand from enterprise clients for raw compute power that companies are using in all aspects of their business.” ... It’s not GPU chip shortages that are slowing down new construction of data centers; it’s power. When a hyperscaler, colo operator or enterprise starts looking for a location to build a data center, the first thing they need is a commitment from the utility company for the required megawattage. According to a McKinsey study, data centers are consuming more power due to the proliferation of the power-hungry GPUs required for AI. Ten years ago, a 30 MW data center was considered large. Today, a 200 MW facility is considered normal.

Daily Tech Digest - August 01, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni


It’s time to sound the alarm on water sector cybersecurity

The U.S. Environmental Protection Agency (EPA) identified 97 drinking water systems serving approximately 26.6 million users as having either critical or high-risk cybersecurity vulnerabilities. Water utility leaders are especially worried about ransomware, malware, and phishing attacks. American Water, the largest water and wastewater utility company in the US, experienced a cybersecurity incident that forced the company to shut down some of its systems. That came shortly after a similar incident forced Arkansas City’s water treatment facility to temporarily switch to manual operations. These attacks are not limited to the US. Recently, UK-based Southern Water admitted that criminals had breached its IT systems. In Denmark, hackers targeted the consumer data services of water provider Fanø Vand, resulting in data theft and operational hijack. These incidents show that this is a global risk, and authorities believe they may be the work of foreign actors. ... The EU is taking a serious approach to cybersecurity, with stricter enforcement and long-term investment in essential services. Through the NIS2 Directive, member states are required to follow security standards, report incidents, and coordinate national oversight. These steps are designed to help utilities strengthen their defenses and improve resilience.


AI and the Democratization of Cybercrime

Cheap, off-the-shelf language models are erasing the technical hurdles. FraudGPT and WormGPT subscriptions start at roughly $200 per month, promising ‘undetectable’ malware, flawless spear-phishing prose, and step-by-step exploit guidance. An aspiring criminal no longer needs the technical knowledge to tweak GitHub proof-of-concepts. They paste a prompt such as ‘Write a PowerShell loader that evades EDR’ and receive usable code in seconds. ... Researchers pushed the envelope further with ReaperAI and AutoAttacker, proof-of-concept ‘agentic’ systems that chain LLM reasoning with vulnerability scanners and exploit libraries. In controlled tests, they breached outdated Web servers, deployed ransomware, and negotiated payment over Tor, without human input once launched. Fully automated cyberattacks are just around the corner. ... Core defensive practice now revolves around four themes. First, reducing the attack surface through relentless automated patching. Second, assuming breach via Zero-Trust segmentation and immutable off-line backups that neuter double-extortion leverage. Third, hardening identity with universal multi-factor authentication (MFA) and phishing-resistant authentication. Finally, exercising incident-response plans with table-top and red-team drills that mirror AI-assisted adversaries.


Digital Twins and AI: Powering the future of creativity at Nestlé

NVIDIA Omniverse on Azure allows for building and seamlessly integrating advanced simulation and generative AI into existing 3D workflows. This cloud-based platform includes APIs and services enabling developers to easily integrate OpenUSD, as well as other sensor and rendering applications. OpenUSD’s capabilities accelerate workflows, teams, and projects when creating 3D assets and environments for large-scale, AI-enabled virtual worlds. The Omniverse Development Workstation on Azure accelerates the process of building Omniverse apps and tools, removing the time and complexity of configuring individual software packages and GPU drivers. With NVIDIA Omniverse on Azure and OpenUSD, marketing teams can create ultra-realistic 3D product previews and environments so that customers can explore a retailer’s products in an engaging and informative way. The platform also can deliver immersive augmented and virtual reality experiences for customers, such as virtually test-driving a car or seeing how new furniture pieces would look in an existing space. For retailers, NVIDIA Omniverse can help create digital twins of stores or in-store displays to simulate and evaluate different layouts to optimize how customers interact with them. 


Why data deletion – not retention – is the next big cyber defence

Emerging data privacy regulations, coupled with escalating cybersecurity risks, are flipping the script. Organisations can no longer afford to treat deletion as an afterthought. From compliance violations to breach fallout, retaining data beyond its lifecycle has a real downside. Many organisations still don’t have a reliable, scalable way to delete data. Policies may exist on paper, but consistent execution across environments, from cloud storage to aging legacy systems, is rare. That gap is no longer sustainable. In fact, failing to delete data when legally required is quickly becoming a regulatory, security, and reputational risk. ... From a cybersecurity perspective, every byte of retained data is a potential breach exposure. In many recent cases, post-incident investigations have uncovered massive amounts of sensitive data that should have been deleted, turning routine breaches into high-stakes regulatory events. But beyond the legal risks, excess data carries hidden operational costs. ... Most CISOs, privacy officers, and IT leaders understand the risks. But deletion is difficult to operationalise. Data lives across multiple systems, formats, and departments. Some repositories are outdated or no longer supported. Others are siloed or partially controlled by third parties. And in many cases, existing tools lack the integration or governance controls needed to automate deletion at scale.


IT Strategies to Navigate the Ever-Changing Digital Workspace

IT teams need to look for flexible, agnostic workspace management solutions that can respond to whether endpoints are running Windows 11, MacOS, ChromeOS, virtual desktops, or cloud PCs. They want to future proof their endpoint investments, knowing that their workspace management must be highly adaptable as business requirements change. To support this disparate endpoint estate, DEX solutions have come to the forefront as they have evolved from a one-off tool for monitoring employee experience to an integrated platform by which administrators can manage endpoints, security tools, and performance remediation. ... In the composite environment IT has the challenge of securing workflows across the endpoint estate, regardless of delivery platform, and doing so without interfering with the employee experience. As the number of both installed and SaaS applications grows, IT teams can leverage automation to streamline patching and other security updates and to monitor SaaS credentials effectively. Automation becomes invaluable in operational efficiency across an increasingly complex application landscape. Another security challenge is the existence of ‘Shadow SaaS’ in which employees, like shadow IT/AI, use unsanctioned tools they believe will help productivity.


Who’s Really Behind the Mask? Combatting Identity Fraud

Effective identity investigations start with asking the right questions and not merely responding to alerts. Security teams need to look deeper: Is this login location normal for the user? Is the device consistent with their normal configuration? Is the action standard for their role? Are there anomalies between systems? These questions create necessary context, enabling defenders to differentiate between standard deviations and hostile activity. Without that investigative attitude, security teams might pursue false positives or overlook actual threats. By structuring identity events with focused, behavior-based questions, analysts can get to the heart of the activity and react with accuracy and confidence. ... Identity theft often hides in plain sight, flourishing in the ordinary gaps between expected and actual behavior. Its deception lies in normalcy, where activity at the surface appears authentic but deviates quietly from established patterns. That’s why trust in a multi-source approach to truth is essential. Connecting insights from network traffic, authentication logs, application access, email interactions, and external integrations can help teams build a context-aware, layered picture of every user. This blended view helps uncover subtle discrepancies, confirm anomalies, and shed light on threats that routine detection will otherwise overlook, minimizing false positives and revealing actual risks.


The hidden crisis behind AI’s promise: Why data quality became an afterthought

Addressing AI data quality requires more human involvement, not less. Organizations need data stewardship frameworks that include subject matter experts who understand not just technical data structures, but business context and implications. These data stewards can identify subtle but crucial distinctions that pure technical analysis might miss. In educational technology, for example, combining parents, teachers, and students into a single “users” category for analysis would produce meaningless insights. Someone with domain expertise knows these groups serve fundamentally different roles and should be analyzed separately. ... Despite the industry’s excitement about new AI model releases, a more disciplined approach focused on clearly defined use cases rather than maximum data exposure proves more effective. Instead of opting for more data to be shared with AI, sticking to the basics and thinking about product concepts produces better results. You don’t want to just throw a lot of good stuff in a can and assume that something good will happen. ... Future AI systems will need “data entitlement” capabilities that automatically understand and respect access controls and privacy requirements. This goes beyond current approaches that require manual configuration of data permissions for each AI application.


Agentic AI is reshaping the API landscape

With agentic AI, APIs evolve from passive endpoints into active dialogue partners. They need to handle more than single, fixed transactions. Instead, APIs must support iterative engagement, where agents adjust their calls based on prior results and current context. This leads to more flexible communication models. For instance, an agent might begin by querying one API to gather user data, process it internally, and then call another endpoint to trigger a workflow. APIs in such environments must be reliable, context-aware and be able to handle higher levels of interaction – including unexpected sequences of calls. One of the most powerful capabilities of agentic AI is its ability to coordinate complex workflows across multiple APIs. Agents can manage chains of requests, evaluate priorities, handle exceptions, and optimise processes in real time. ... Agentic AI is already setting the stage for more responsive, autonomous API ecosystems. Get ready for systems that can foresee workload shifts, self-tune performance, and coordinate across services without waiting for any command from a human. Soon, agentic AI will enable seamless collaboration between multiple AI systems—each managing its own workflow, yet contributing to larger, unified business goals. To support this evolution, APIs themselves must transform. 


Removing Technical Debt Supports Cybersecurity and Incident Response for SMBs

Technical debt is a business’s running tally of aging or defunct software and systems. While workarounds can keep the lights on, they come with risks. For instance, there are operational challenges and expenses associated with managing older systems. Additionally, necessary expenses can accumulate if technical debt is allowed to get out of control, ballooning the costs of a proper fix. While eliminating technical debt is challenging, it’s fundamentally an investment in a business’s future security. Excess technical debt doesn’t just lead to operational inefficiencies. It also creates cybersecurity weaknesses that inhibit threat detection and response. ... “As threats evolve, technical debt becomes a roadblock,” says Jeff Olson, director of software-defined WAN product and technical marketing at Aruba, a Hewlett Packard Enterprise company. “Security protocols and standards have advanced to address common threats, but if you have older technology, you’re at risk until you can upgrade your devices.” Upgrades can prove challenging, however. ... The first step to reducing technical debt is to act now, Olson says. “Sweating it out” for another two or three years will only make things worse. Waiting also stymies innovation, as reducing technical debt can help SMBs take advantage of advanced technologies such as artificial intelligence.


Third-party risk is everyone’s problem: What CISOs need to know now

The best CISOs now operate less like technical gatekeepers and more like orchestral conductors, aligning procurement, legal, finance, and operations around a shared expectation of risk awareness. ... The responsibility for managing third-party risk no longer rests solely on IT security teams. CISOs must transform their roles from technical protectors to strategic leaders who influence enterprise risk management at every level. This evolution involves:Embracing enterprise-wide collaboration: Effective management of third-party risk requires cooperation among diverse departments such as procurement, legal, finance, and operations. By collaborating across the organization, CISOs ensure that third-party risk management is comprehensive and proactive rather than reactive. Integrating risk management into governance frameworks: Third-party risk should be a top agenda item in board meetings and strategic planning sessions. CISOs need to work with senior leadership to embed vendor risk management into the organization’s overall risk landscape. Fostering transparency and accountability: Establishing clear reporting lines and protocols ensures that issues related to third-party risk are promptly escalated and addressed. Accountability should span every level of the organization to ensure effective risk management.

Daily Tech Digest - June 25, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein



Why data observability is the missing layer of modern networking

You might hear people use these terms interchangeably, but they’re not the same thing. Visibility is about what you can see – dashboard statistics, logs, uptime numbers, bandwidth figures, the raw data that tells you what’s happening across your network. Observability, on the other hand, is about what that data actually means. It’s the ability to interpret, analyse, and act on those insights. It’s not just about seeing a traffic spike but instead understanding why it happened. It’s not just spotting a latency issue, but knowing which apps are affected and where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and scalable. It’s about building infrastructure that supports cloud, remote work, and everything in between. Whether you’re adding a new site, onboarding a remote team, or launching a cloud-hosted app, your network should be able to scale and respond at speed. Then there’s security, a non-negotiable layer that protects your entire ecosystem. Great security isn’t about throwing up walls, it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, without making users lives harder. ... Finally, we come to observability. Arguably the most unappreciated of the three but quickly becoming essential. 


6 Key Security Risks in LLMs: A Platform Engineer’s Guide

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive data. These attacks range from simple jailbreak prompts that override safety rules to more advanced exploits that influence backend systems. ... Model extraction attacks allow adversaries to systematically query an LLM to reconstruct its knowledge base or training data, essentially cloning its capabilities. These attacks often rely on automated scripts submitting millions of queries to map the model’s responses. One common technique, model inversion, involves strategically structured inputs that extract sensitive or proprietary information embedded in the model. Attackers may also use repeated, incremental queries with slight variations to amass a dataset that mimics the original training data. ... On the output side, an LLM might inadvertently reveal private information embedded in its dataset or previously entered user data. A common risk scenario involves users unknowingly submitting financial records or passwords into an AI-powered chatbot, which could then store, retrieve or expose this data unpredictably. With cloud-based LLMs, the risk extends further. Data from one organization could surface in another’s responses.


Adopting Agentic AI: Ethical Governance, Business Impact, Talent Demand, and Data Security

Agentic AI introduces a spectrum of ethical challenges that demand proactive governance. Given its capacity for independent decision-making, there is a heightened need for transparent, accountable, and ethically driven AI models. Ethical governance in Agentic AI revolves around establishing robust policies that govern decision logic, bias mitigation, and accountability. Organizations leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory compliance to avoid unintended consequences. ... The integration of Agentic AI into business ecosystems promises not just automation but strategic enhancement of decision-making. These AI agents are designed to process real-time data, predict market shifts, and autonomously execute decisions that would traditionally require human intervention. In sectors such as finance, healthcare, and manufacturing, Agentic AI is optimizing supply chains, enhancing predictive analytics, and streamlining operations with unparalleled accuracy. ... One of the major concerns surrounding Agentic AI is data security. Autonomous decision-making systems require vast amounts of real-time data to function effectively, raising questions about data privacy, ownership, and cybersecurity. Cyber threats aimed at exploiting autonomous decision-making could have severe consequences, especially in sectors like finance and healthcare.


Unveiling Supply Chain Transformation: IIoT and Digital Twins

Digital twins and IIoTs are evolving technologies that are transforming the digital landscape of supply chain transformation. The IIoT aims to connect to actual physical sensors and actuators. On the other hand, DTs are replica copies that virtually represent the physical components. The DTs are invaluable for testing and simulating design parameters instead of disrupting production elements. ... Contrary to generic IoT, which is more oriented towards consumers, the IIoT enables the communication and interconnection between different machines, industrial devices, and sensors within a supply chain management ecosystem with the aim of business optimization and efficiency. The incubation of IIoT in supply chain management systems aims to enable real-time monitoring and analysis of industrial environments, including manufacturing, logistics management, and supply chain. It boosts efforts to increase productivity, cut downtime, and facilitate information and accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in boosting real-time monitoring and enabling informed decision-making. Every stage of the supply chain ecosystem will have the impact of IIoT, like automated inventory management, health monitoring of goods and their tracking, analytics, and real-time response to meet the current marketplace. 


The state of cloud security

An important complicating factor in all this is that customers don’t always know what’s happening in cloud data centers. At the same time, De Jong acknowledges that on-premises environments have the same problem. “There’s a spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé agrees with: “You have to align many things between on-prem and cloud.” Andre Honders points to a specific aspect of the cloud: “You can be in a shared environment with ten other customers. This means you have to deal with different visions and techniques that do not exist on-premises.” This is certainly the case. There are plenty of worst case scenarios to consider in the public cloud. ... However, a major bottleneck remains the lack of qualified personnel. We hear this all the time when it comes to security. And in other IT fields too, as it happens, meaning one could draw a society-wide conclusion. Nevertheless, staff shortages are perhaps more acute in this sector. Erik de Jong sees society as a whole having similar problems, at any rate. “This is not an IT problem. Just ask painters. In every company, a small proportion of the workforce does most of the work.” Wesley Swartelé agrees it is a challenge for organizations in this industry to find the right people. “Finding a good IT professional with the right mindset is difficult.


As AI reshapes the enterprise, security architecture can’t afford to lag behind

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners. What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. ... As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread. But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. 


Identity Is the New Perimeter: Why Proofing and Verification Are Business Imperatives

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss. Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions. Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors. Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping. Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments. 


Why should companies or organizations convert to FIDO security keys?

FIDO security keys significantly reduce the risk of phishing, credential theft, and brute-force attacks. Because they don’t rely on shared secrets like passwords, they can’t be reused or intercepted. Their phishing-resistant protocol ensures authentication is only completed with the correct web origin. FIDO security keys also address insider threats and endpoint vulnerabilities by requiring physical presence, further enhancing protection, especially in high-security environments such as healthcare or public administration. ... In principle, any organization that prioritizes a secure IT infrastructure stands to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a small business protecting customer data or a global enterprise managing complex access structures, FIDO security keys provide a robust, phishing-resistant alternative to passwords. That said, sectors with heightened regulatory requirements, such as healthcare, finance, public administration, and critical infrastructure, have particularly strong incentives to adopt strong authentication. In these fields, the risk of breaches is not only costly but can also have legal and operational consequences. FIDO security keys are also ideal for restricted environments, such as manufacturing floors or emergency rooms, where smartphones may not be permitted. 


Data Warehouse vs. Data Lakehouse

Data warehouses and data lakehouses have emerged as two prominent adversaries in the data storage and analytics markets, each with advantages and disadvantages. The primary difference between these two data storage platforms is that while the data warehouse is capable of handling only structured and semi-structured data, the data lakehouse can store unlimited amounts of both structured and unstructured data – and without any limitations. ... Traditional data warehouses have long supported all types of business professionals in their data storage and analytics endeavors. This approach involves ingesting structured data into a centralized repository, with a focus on warehouse integration and business intelligence reporting. Enter the data lakehouse approach, which is vastly superior for deep-dive data analysis. The lakehouse has successfully blended characteristics of the data warehouse and the data lake to create a scalable and unrestricted solution. The key benefit of this approach is that it enables data scientists to quickly extract insights from raw data with advanced AI tools. ... Although a data warehouse supports BI use cases and provides a “single source of truth” for analytics and reporting purposes, it can also become difficult to manage as new data sources emerge. The data lakehouse has redefined how global businesses store and process data. 


AI or Data Governance? Gartner Says You Need Both

Data and analytics leaders, such as chief data officers, or CDOs, and chief data and analytics officers, or CDAOs, play a significant role in driving their organizations' data and analytics, D&A, successes, which are necessary to show business value from AI projects. Gartner predicts that by 2028, 80% of gen AI business apps will be developed on existing data management platforms. Their analysts say, "This is the best time to be in data and analytics," and CDAOs need to embrace the AI opportunity eyed by others in the C-suite, or they will be absorbed into other technical functions. With high D&A ambitions and AI pilots becoming increasingly ubiquitous, focus is shifting toward consistent execution and scaling. But D&A leaders are overwhelmed with their routine data management tasks and need a new AI strategy. ... "We've never been good at governance, and now AI demands that we be even faster, which means you have to take more risks and be prepared to fail. We have to accept two things: Data will never be fully governed. Secondly, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. He said trust models provide a trust rating for data assets by examining their value, lineage and risk. They offer up-to-date information on data trustworthiness and are crucial for fostering confidence.