
Daily Tech Digest - August 08, 2025


Quote for the day:

“Every adversity, every failure, every heartache carries with it the seed of an equal or greater benefit.” -- Napoleon Hill


Major Enterprise AI Assistants Can Be Abused for Data Theft, Manipulation

In the case of Copilot Studio agents that engage with the internet — over 3,000 instances have been found — the researchers showed how an agent could be hijacked to exfiltrate information that is available to it. Copilot Studio is used by some organizations for customer service, and Zenity showed how it can be abused to obtain a company’s entire CRM. When Cursor is integrated with Jira MCP, an attacker can create malicious Jira tickets that instruct the AI agent to harvest credentials and send them to the attacker. This is dangerous in the case of email systems that automatically open Jira tickets — hundreds of such instances have been found by Zenity. In a demonstration targeting Salesforce’s Einstein, the attacker can target instances with case-to-case automations — again hundreds of instances have been found. The threat actor can create malicious cases on the targeted Salesforce instance that hijack Einstein when they are processed by it. The researchers showed how an attacker could update the email addresses for all cases, effectively rerouting customer communication through a server they control. In a Gemini attack demo, the experts showed how prompt injection can be leveraged to get the gen-AI tool to display incorrect information. 


Who’s Leading Whom? The Evolving Relationship Between Business and Data Teams

As the data boom matured, organizations realized that clear business questions weren’t enough. If we wanted analytics to drive value, we had to build stronger technical teams, including data scientists and machine learning engineers. And we realized something else: we had spent years telling business leaders they needed a working knowledge of data science. Now we had to tell data scientists they needed a working knowledge of the business. This shift in emphasis was necessary, but it didn’t go perfectly. We had told the data teams to make their work useful, usable, and used, and they took that mandate seriously. But in the absence of clear guidance and shared norms, they filled in the gap in ways that didn’t always move the business forward. ... The foundation of any effective business-data partnership is a shared understanding of what actually counts as evidence. Without it, teams risk offering solutions that don’t stand up to scrutiny, don’t translate into action, or don’t move the business forward. A shared burden of proof makes sure that everyone is working from the same assumptions about what’s convincing and credible. This shared commitment is the foundation that allows the organization to decide with clarity and confidence. 


A new worst coder has entered the chat: vibe coding without code knowledge

A clear disconnect then stood out to me between the vibe coding of this app and the actual practiced work of coding. Because this app existed solely as an experiment for myself, the fact that it didn’t work so well and the code wasn’t great didn’t really matter. But vibe coding isn’t being touted as “a great use of AI if you’re just mucking about and don’t really care.” It’s supposed to be a tool for developer productivity, a bridge for nontechnical people into development, and someday a replacement for junior developers. That was the promise. And, sure, if I wanted to, I could probably take the feedback from my software engineer pals and plug it into Bolt. One of my friends recommended adding “descriptive class names” to help with the readability, and it took almost no time for Bolt to update the code.  ... The mess of my code would be a problem in any of those situations. Even though I made something that worked, did it really? Had this been a real work project, a developer would have had to come in after the fact to clean up everything I had made, lest future developers be lost in the mayhem of my creation. This is called the “productivity tax,” the biggest frustration that developers have with AI tools, because they spit out code that is almost—but not quite—right.


From WAF to WAAP: The Evolution of Application Protection in the API Era

The most dangerous attacks often use perfectly valid API calls arranged in unexpected sequences or volumes. API attacks don't break the rules. Instead, they abuse legitimate functionality by understanding the business logic better than the developers who built it. Advanced attacks differ from traditional web threats. For example, an SQL injection attempt looks syntactically different from legitimate input, making it detectable through pattern matching. However, an API attack might consist of perfectly valid requests that individually pass all schema validation tests, with the malicious intent emerging only from their sequence, timing, or cross-endpoint correlation patterns. ... The strategic value of WAAP goes well beyond just keeping attackers out. It's becoming a key enabler for faster, more confident API development cycles. Think about how your API security works today — you build an endpoint, then security teams manually review it, continuous penetration testing breaks it, you fix it, and around and around you go. This approach inevitably creates friction between velocity and security. Through continuous visibility and protection, WAAP allows development teams to focus on building features rather than manually hardening each API endpoint. This turns the traditional security bottleneck into a security enablement model.
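
To make the "valid calls, malicious pattern" point concrete, here is a minimal sketch of the kind of behavioral check a WAAP layer might run per client: it flags volume spikes and a repeated bulk-read sequence even though every individual request is schema-valid. The endpoint names, window, and thresholds are illustrative assumptions, not any particular product's rules.

```python
# Minimal sketch of sequence/volume-based API abuse detection.
# Thresholds and endpoint names are illustrative assumptions.
from collections import deque
import time

WINDOW_SECONDS = 60
MAX_CALLS_PER_WINDOW = 100                                # per-client volume ceiling
SUSPICIOUS_SEQUENCE = ("/export", "/export", "/export")   # repeated bulk reads

class ClientHistory:
    def __init__(self):
        self.calls = deque()        # (timestamp, endpoint) pairs

    def record(self, endpoint: str) -> list[str]:
        """Record a syntactically valid call and return any findings."""
        now = time.time()
        self.calls.append((now, endpoint))
        # Drop calls that fell out of the sliding window.
        while self.calls and now - self.calls[0][0] > WINDOW_SECONDS:
            self.calls.popleft()

        findings = []
        if len(self.calls) > MAX_CALLS_PER_WINDOW:
            findings.append("volume anomaly: too many calls in window")
        recent = tuple(ep for _, ep in list(self.calls)[-len(SUSPICIOUS_SEQUENCE):])
        if recent == SUSPICIOUS_SEQUENCE:
            findings.append("sequence anomaly: repeated bulk export pattern")
        return findings

history = ClientHistory()
for endpoint in ["/login", "/export", "/export", "/export"]:
    for finding in history.record(endpoint):
        print(finding)
```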


Scrutinizing LLM Reasoning Models

Assessing CoT quality is an important step towards improving reasoning model outcomes. Other efforts attempt to grasp the core cause of reasoning hallucination. One theory suggests the problem starts with how reasoning models are trained. Among other training techniques, LLMs go through multiple rounds of reinforcement learning (RL), a form of machine learning that teaches the difference between desirable and undesirable behavior through a point-based reward system. During the RL process, LLMs learn to accumulate as many positive points as possible, with “good” behavior yielding positive points and “bad” behavior yielding negative points. While RL is used on non-reasoning LLMs, a large amount of it seems to be necessary to incentivize LLMs to produce CoT, which means that reasoning models generally receive more of it. ... If optimizing for CoT length leads to confused reasoning or inaccurate answers, it might be better to incentivize models to produce shorter CoT. This is the intuition that inspired researchers at Wand AI to see what would happen if they used RL to encourage conciseness and directness rather than verbosity. Across multiple experiments conducted in early 2025, Wand AI’s team discovered a “natural correlation” between CoT brevity and answer accuracy, challenging the widely held notion that the additional time and compute required to create long CoT leads to better reasoning outcomes.
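
The Wand AI intuition can be expressed as simple reward shaping: reward correctness first, and add a small bonus only when a correct answer also comes with a short chain of thought. The weights, token budget, and structure below are illustrative assumptions for the idea, not the reward actually used in the cited experiments.

```python
# Illustrative reward shaping for concise chain-of-thought (CoT).
# Weights and length normalization are assumptions for demonstration only.
def cot_reward(answer_correct: bool, cot_tokens: int, max_tokens: int = 2048,
               brevity_weight: float = 0.2) -> float:
    correctness = 1.0 if answer_correct else -1.0
    # Bonus grows as the chain of thought gets shorter (clamped to [0, 1]).
    brevity_bonus = max(0.0, 1.0 - cot_tokens / max_tokens)
    # Only reward brevity when the answer is right, so the model is not
    # pushed toward short-but-wrong reasoning.
    return correctness + (brevity_weight * brevity_bonus if answer_correct else 0.0)

print(cot_reward(True, 300))    # correct and concise -> highest reward
print(cot_reward(True, 1900))   # correct but verbose -> smaller bonus
print(cot_reward(False, 300))   # short and wrong -> still penalized
```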


4 regions you didn't know already had age verification laws – and how they're enforced

Australia’s 2021 Online Safety Act was less focused on restricting access to adult content than it was on tackling issues of cyberbullying and online abuse of children, especially on social media platforms. The act introduced a legal framework to allow people to request the removal of hateful and abusive content,  ... Chinese law has required online service providers to implement a real-name registration system for over a decade. In 2012, the Decision on Strengthening Network Information Protection was passed, before being codified into law in 2016 as the Cybersecurity Law. The legislation requires online service providers to collect users’ real names, ID numbers, and other personal information. ... As with the other laws we’ve looked at, COPPA has its fair share of critics and opponents, and has been criticized as being both ineffective and unconstitutional by experts. Critics claim that it encourages users to lie about their age to access content, and allows websites to sidestep the need for parental consent. ... In 2025, the European Commission took the first steps towards creating an EU-wide strategy for age verification on websites when it released a prototype app for a potential age verification solution called a mini wallet, which is designed to be interoperable with the EU Digital Identity Wallet scheme.


The AI-enabled company of the future will need a whole new org chart

Let’s say you’ve designed a multi-agent team of AI products. Now you need to integrate them into your company by aligning them with your processes, values and policies. Of course, businesses onboard people all the time – but not usually 50 different roles at once. Clearly, the sheer scale of agentic AI presents its own challenges. Businesses will need to rely on a really tight onboarding process. The role of the agent onboarding lead creates the AI equivalent of an employee handbook: spelling out what agents are responsible for, how they escalate decisions, and where they must defer to humans. They’ll define trust thresholds, safe deployment criteria, and sandbox environments for gradual rollout. ... Organisational change rarely fails on capability – it fails on culture. The AI Culture & Collaboration Officer protects the human heartbeat of the company through a time of radical transition. As agents take on more responsibilities, human employees risk losing a sense of purpose, visibility, or control. The culture officer will continually check how everyone feels about the transition. This role ensures collaboration rituals evolve, morale stays intact, and trust is continually monitored — not just in the agents, but in the organisation’s direction of travel. It’s a future-facing HR function with teeth.


The Myth of Legacy Programming Languages: Age Doesn't Define Value

Instead of trying to define legacy languages based on one or two subjective criteria, a better approach is to consider the wide range of factors that may make a language count as legacy or not. ... Languages may be considered legacy when no one is still actively developing them — meaning the language standards cease receiving updates, often along with complementary resources like libraries and compilers. This seems reasonable because when a language ceases to be actively maintained, it may stop working with modern hardware platforms. ... Distinguishing between legacy and modern languages based on their popularity may also seem reasonable. After all, if few coders are still using a language, doesn't that make it legacy? Maybe, but there are a couple of complications to consider. One is that measuring the popularity of programming languages in a highly accurate way is impossible — so just because one authority deems a language to be unpopular doesn't necessarily mean developers hate it. The other challenge is that when a language becomes unpopular, it tends to mean that developers no longer prefer it for writing new applications. ... Programming languages sometimes end up in the "legacy" bin when they are associated with other forms of legacy technology — or when they lack associations with more "modern" technologies.


From Data Overload to Actionable Insights: Scaling Viewership Analytics with Semantic Intelligence

Semantic intelligence allows users to find reliable and accurate answers, irrespective of the terminology used in a query. They can interact freely with data and discover new insights by navigating massive databases that previously required specialized IT involvement, in turn reducing the workload of already overburdened IT teams. At its core, semantic intelligence lays the foundation for true self-serve analytics, allowing departments across an organization to confidently access information from a single source of truth. ... A semantic layer in this architecture lets you query data in a way that feels natural and enables you to get relevant and precise results. It bridges the gap between complex data structures and user-friendly access. This allows users to ask questions without any need to understand the underlying data intricacies. Standardized definitions and context across the sources streamline analytics and accelerate insights using any BI tool of choice. ... One of the core functions of semantic intelligence is to standardize definitions and provide a single source of truth. This improves overall data governance with role-based access controls and robust security at all levels. In addition, row- and column-level security at both user and group levels can ensure that access to specific rows is restricted for specific users.
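
A toy sketch of what a semantic layer does under the hood: business terms map to governed SQL definitions, and a row-level filter is attached based on the caller's role. The metric names, table, and roles here are invented for illustration; real semantic layers (in BI tools or dedicated platforms) do far more.

```python
# Minimal semantic-layer sketch: business terms map to governed SQL, and
# row-level security is applied from the user's role. All names (tables,
# metrics, roles) are illustrative assumptions.
SEMANTIC_MODEL = {
    "metrics": {
        "viewership": "SUM(minutes_watched)",
        "unique viewers": "COUNT(DISTINCT viewer_id)",
    },
    "dimensions": {
        "region": "region_name",
        "title": "content_title",
    },
    "source": "fact_viewership",
    "row_filters": {                       # row-level security by role
        "regional_analyst": "region_name = :user_region",
        "admin": "1 = 1",
    },
}

def build_query(metric: str, dimension: str, role: str) -> str:
    model = SEMANTIC_MODEL
    return (
        f"SELECT {model['dimensions'][dimension]} AS {dimension.replace(' ', '_')}, "
        f"{model['metrics'][metric]} AS {metric.replace(' ', '_')} "
        f"FROM {model['source']} "
        f"WHERE {model['row_filters'][role]} "
        f"GROUP BY {model['dimensions'][dimension]}"
    )

# Two users asking the same business question get the same definition,
# but each sees only the rows their role allows.
print(build_query("viewership", "region", "regional_analyst"))
```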


Why VAPT is now essential for small & medium business security

One misconception, often held by smaller companies, is that they are less likely to be targeted. Industry experts disagree. "You might think, 'Well, we're a small company. Who'd want to hack us?' But here's the hard truth: Cybercriminals love easy targets, and small to medium businesses often have the weakest defences," states a representative from Borderless CS. VAPT combines two different strategies to identify vulnerabilities and potential entry points before malicious actors do. A Vulnerability Assessment scans servers, software, and applications for known problems in a manner similar to a security walkthrough of a physical building. Penetration Testing (often shortened to pen testing) simulates real attacks, enabling businesses to understand how a determined attacker might breach their systems. ... Borderless CS maintains that VAPT is applicable across sectors. "Retail businesses store customer data and payment info. Healthcare providers hold sensitive patient information. Service companies often rely on cloud tools and email systems that are vulnerable. Even a small eCommerce store can be a jackpot for the wrong person. Cyber attackers don't discriminate. In fact, they often prefer smaller businesses because they assume you haven't taken strong security measures. Let's not give them that satisfaction."
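
As a toy illustration of the "assessment" half of VAPT, the sketch below just checks which well-known service ports a host exposes. Real vulnerability assessments use dedicated, authenticated scanners and require authorization; the host and port list here are assumptions.

```python
# Toy illustration of one vulnerability-assessment step: checking which
# well-known ports a host exposes. Only scan systems you own or are
# authorized to test; real assessments use dedicated scanning tools.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

def exposed_ports(host: str, timeout: float = 0.5) -> list[str]:
    findings = []
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the port accepted a connection
                findings.append(f"{service} ({port}) reachable")
    return findings

print(exposed_ports("127.0.0.1"))
```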

Daily Tech Digest - August 01, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni


It’s time to sound the alarm on water sector cybersecurity

The U.S. Environmental Protection Agency (EPA) identified 97 drinking water systems serving approximately 26.6 million users as having either critical or high-risk cybersecurity vulnerabilities. Water utility leaders are especially worried about ransomware, malware, and phishing attacks. American Water, the largest water and wastewater utility company in the US, experienced a cybersecurity incident that forced the company to shut down some of its systems. That came shortly after a similar incident forced Arkansas City’s water treatment facility to temporarily switch to manual operations. These attacks are not limited to the US. Recently, UK-based Southern Water admitted that criminals had breached its IT systems. In Denmark, hackers targeted the consumer data services of water provider Fanø Vand, resulting in data theft and operational hijack. These incidents show that this is a global risk, and authorities believe they may be the work of foreign actors. ... The EU is taking a serious approach to cybersecurity, with stricter enforcement and long-term investment in essential services. Through the NIS2 Directive, member states are required to follow security standards, report incidents, and coordinate national oversight. These steps are designed to help utilities strengthen their defenses and improve resilience.


AI and the Democratization of Cybercrime

Cheap, off-the-shelf language models are erasing the technical hurdles. FraudGPT and WormGPT subscriptions start at roughly $200 per month, promising ‘undetectable’ malware, flawless spear-phishing prose, and step-by-step exploit guidance. An aspiring criminal no longer needs the technical knowledge to tweak GitHub proof-of-concepts. They paste a prompt such as ‘Write a PowerShell loader that evades EDR’ and receive usable code in seconds. ... Researchers pushed the envelope further with ReaperAI and AutoAttacker, proof-of-concept ‘agentic’ systems that chain LLM reasoning with vulnerability scanners and exploit libraries. In controlled tests, they breached outdated Web servers, deployed ransomware, and negotiated payment over Tor, without human input once launched. Fully automated cyberattacks are just around the corner. ... Core defensive practice now revolves around four themes. First, reducing the attack surface through relentless automated patching. Second, assuming breach via Zero-Trust segmentation and immutable off-line backups that neuter double-extortion leverage. Third, hardening identity with universal multi-factor authentication (MFA) and phishing-resistant authentication. Finally, exercising incident-response plans with table-top and red-team drills that mirror AI-assisted adversaries.


Digital Twins and AI: Powering the future of creativity at Nestlé

NVIDIA Omniverse on Azure allows for building and seamlessly integrating advanced simulation and generative AI into existing 3D workflows. This cloud-based platform includes APIs and services enabling developers to easily integrate OpenUSD, as well as other sensor and rendering applications. OpenUSD’s capabilities accelerate workflows, teams, and projects when creating 3D assets and environments for large-scale, AI-enabled virtual worlds. The Omniverse Development Workstation on Azure accelerates the process of building Omniverse apps and tools, removing the time and complexity of configuring individual software packages and GPU drivers. With NVIDIA Omniverse on Azure and OpenUSD, marketing teams can create ultra-realistic 3D product previews and environments so that customers can explore a retailer’s products in an engaging and informative way. The platform also can deliver immersive augmented and virtual reality experiences for customers, such as virtually test-driving a car or seeing how new furniture pieces would look in an existing space. For retailers, NVIDIA Omniverse can help create digital twins of stores or in-store displays to simulate and evaluate different layouts to optimize how customers interact with them. 
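
OpenUSD itself is scriptable from Python. The snippet below is a minimal authoring sketch, assuming the OpenUSD Python bindings (e.g. the usd-core package) are installed; the scene contents are purely illustrative placeholders, not a Nestlé or Omniverse workflow.

```python
# Minimal OpenUSD authoring sketch (assumes the `usd-core` Python package).
# The scene contents are illustrative only: a placeholder "product" sphere.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("product_preview.usda")     # new USD layer on disk
world = UsdGeom.Xform.Define(stage, "/World")           # root transform prim
product = UsdGeom.Sphere.Define(stage, "/World/Product")
product.GetRadiusAttr().Set(0.5)                        # half-meter placeholder

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()                             # writes product_preview.usda
```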


Why data deletion – not retention – is the next big cyber defence

Emerging data privacy regulations, coupled with escalating cybersecurity risks, are flipping the script. Organisations can no longer afford to treat deletion as an afterthought. From compliance violations to breach fallout, retaining data beyond its lifecycle has a real downside. Many organisations still don’t have a reliable, scalable way to delete data. Policies may exist on paper, but consistent execution across environments, from cloud storage to aging legacy systems, is rare. That gap is no longer sustainable. In fact, failing to delete data when legally required is quickly becoming a regulatory, security, and reputational risk. ... From a cybersecurity perspective, every byte of retained data is a potential breach exposure. In many recent cases, post-incident investigations have uncovered massive amounts of sensitive data that should have been deleted, turning routine breaches into high-stakes regulatory events. But beyond the legal risks, excess data carries hidden operational costs. ... Most CISOs, privacy officers, and IT leaders understand the risks. But deletion is difficult to operationalise. Data lives across multiple systems, formats, and departments. Some repositories are outdated or no longer supported. Others are siloed or partially controlled by third parties. And in many cases, existing tools lack the integration or governance controls needed to automate deletion at scale.


IT Strategies to Navigate the Ever-Changing Digital Workspace

IT teams need to look for flexible, agnostic workspace management solutions that can respond to whether endpoints are running Windows 11, macOS, ChromeOS, virtual desktops, or cloud PCs. They want to future-proof their endpoint investments, knowing that their workspace management must be highly adaptable as business requirements change. To support this disparate endpoint estate, DEX solutions have come to the forefront as they have evolved from a one-off tool for monitoring employee experience to an integrated platform by which administrators can manage endpoints, security tools, and performance remediation. ... In the composite environment, IT has the challenge of securing workflows across the endpoint estate, regardless of delivery platform, and doing so without interfering with the employee experience. As the number of both installed and SaaS applications grows, IT teams can leverage automation to streamline patching and other security updates and to monitor SaaS credentials effectively. Automation becomes invaluable for operational efficiency across an increasingly complex application landscape. Another security challenge is the existence of ‘Shadow SaaS’, in which employees, as with shadow IT and shadow AI, use unsanctioned tools they believe will help productivity.


Who’s Really Behind the Mask? Combatting Identity Fraud

Effective identity investigations start with asking the right questions and not merely responding to alerts. Security teams need to look deeper: Is this login location normal for the user? Is the device consistent with their normal configuration? Is the action standard for their role? Are there anomalies between systems? These questions create necessary context, enabling defenders to differentiate between standard deviations and hostile activity. Without that investigative attitude, security teams might pursue false positives or overlook actual threats. By structuring identity events with focused, behavior-based questions, analysts can get to the heart of the activity and react with accuracy and confidence. ... Identity theft often hides in plain sight, flourishing in the ordinary gaps between expected and actual behavior. Its deception lies in normalcy, where activity at the surface appears authentic but deviates quietly from established patterns. That’s why trust in a multi-source approach to truth is essential. Connecting insights from network traffic, authentication logs, application access, email interactions, and external integrations can help teams build a context-aware, layered picture of every user. This blended view helps uncover subtle discrepancies, confirm anomalies, and shed light on threats that routine detection will otherwise overlook, minimizing false positives and revealing actual risks.
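
Those investigative questions translate naturally into simple, explainable checks against a per-user baseline. The sketch below scores one identity event that way; the field names, baseline values, and scoring are invented for illustration, not any vendor's detection logic.

```python
# Sketch: scoring one identity event against a per-user behavioral baseline.
# Field names, baselines, and the scoring scheme are invented for illustration.
USER_BASELINE = {
    "usual_countries": {"US"},
    "usual_devices": {"laptop-jdoe-01"},
    "usual_actions": {"read_report", "update_ticket"},
}

def score_event(event: dict, baseline: dict) -> tuple[int, list[str]]:
    reasons = []
    if event["country"] not in baseline["usual_countries"]:
        reasons.append("login location not normal for this user")
    if event["device_id"] not in baseline["usual_devices"]:
        reasons.append("device differs from the user's normal configuration")
    if event["action"] not in baseline["usual_actions"]:
        reasons.append("action is non-standard for this role")
    return len(reasons), reasons

risk, why = score_event(
    {"country": "RO", "device_id": "unknown-host", "action": "export_all_cases"},
    USER_BASELINE,
)
print(risk, why)   # several independent anomalies together warrant an investigation
```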


The hidden crisis behind AI’s promise: Why data quality became an afterthought

Addressing AI data quality requires more human involvement, not less. Organizations need data stewardship frameworks that include subject matter experts who understand not just technical data structures, but business context and implications. These data stewards can identify subtle but crucial distinctions that pure technical analysis might miss. In educational technology, for example, combining parents, teachers, and students into a single “users” category for analysis would produce meaningless insights. Someone with domain expertise knows these groups serve fundamentally different roles and should be analyzed separately. ... Despite the industry’s excitement about new AI model releases, a more disciplined approach focused on clearly defined use cases rather than maximum data exposure proves more effective. Instead of opting for more data to be shared with AI, sticking to the basics and thinking about product concepts produces better results. You don’t want to just throw a lot of good stuff in a can and assume that something good will happen. ... Future AI systems will need “data entitlement” capabilities that automatically understand and respect access controls and privacy requirements. This goes beyond current approaches that require manual configuration of data permissions for each AI application.


Agentic AI is reshaping the API landscape

With agentic AI, APIs evolve from passive endpoints into active dialogue partners. They need to handle more than single, fixed transactions. Instead, APIs must support iterative engagement, where agents adjust their calls based on prior results and current context. This leads to more flexible communication models. For instance, an agent might begin by querying one API to gather user data, process it internally, and then call another endpoint to trigger a workflow. APIs in such environments must be reliable, context-aware, and able to handle higher levels of interaction – including unexpected sequences of calls. One of the most powerful capabilities of agentic AI is its ability to coordinate complex workflows across multiple APIs. Agents can manage chains of requests, evaluate priorities, handle exceptions, and optimise processes in real time. ... Agentic AI is already setting the stage for more responsive, autonomous API ecosystems. Get ready for systems that can foresee workload shifts, self-tune performance, and coordinate across services without waiting for any command from a human. Soon, agentic AI will enable seamless collaboration between multiple AI systems—each managing its own workflow, yet contributing to larger, unified business goals. To support this evolution, APIs themselves must transform.
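
In code, that iterative pattern is just conditional chaining: one call gathers context, the result shapes the next call. A minimal sketch follows; the endpoints, payload fields, and threshold are hypothetical, standing in for whatever an agent's planner would choose.

```python
# Sketch of an agent-style iterative API workflow: query one endpoint,
# reason over the result, then call a second endpoint. The endpoints,
# payloads, and threshold below are hypothetical.
import requests

BASE = "https://api.example.com"

def run_workflow(user_id: str, token: str) -> dict:
    headers = {"Authorization": f"Bearer {token}"}

    # Step 1: gather context about the user.
    profile = requests.get(f"{BASE}/users/{user_id}", headers=headers, timeout=10)
    profile.raise_for_status()
    churn_risk = profile.json().get("churn_risk", 0.0)

    # Step 2: decide and act based on the prior result, not a fixed script.
    if churn_risk > 0.7:
        action = requests.post(
            f"{BASE}/workflows/retention-offer",
            json={"user_id": user_id, "reason": "high churn risk"},
            headers=headers,
            timeout=10,
        )
        action.raise_for_status()
        return action.json()
    return {"status": "no action needed", "churn_risk": churn_risk}
```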


Removing Technical Debt Supports Cybersecurity and Incident Response for SMBs

Technical debt is a business’s running tally of aging or defunct software and systems. While workarounds can keep the lights on, they come with risks. For instance, there are operational challenges and expenses associated with managing older systems. Additionally, necessary expenses can accumulate if technical debt is allowed to get out of control, ballooning the costs of a proper fix. While eliminating technical debt is challenging, it’s fundamentally an investment in a business’s future security. Excess technical debt doesn’t just lead to operational inefficiencies. It also creates cybersecurity weaknesses that inhibit threat detection and response. ... “As threats evolve, technical debt becomes a roadblock,” says Jeff Olson, director of software-defined WAN product and technical marketing at Aruba, a Hewlett Packard Enterprise company. “Security protocols and standards have advanced to address common threats, but if you have older technology, you’re at risk until you can upgrade your devices.” Upgrades can prove challenging, however. ... The first step to reducing technical debt is to act now, Olson says. “Sweating it out” for another two or three years will only make things worse. Waiting also stymies innovation, as reducing technical debt can help SMBs take advantage of advanced technologies such as artificial intelligence.


Third-party risk is everyone’s problem: What CISOs need to know now

The best CISOs now operate less like technical gatekeepers and more like orchestral conductors, aligning procurement, legal, finance, and operations around a shared expectation of risk awareness. ... The responsibility for managing third-party risk no longer rests solely on IT security teams. CISOs must transform their roles from technical protectors to strategic leaders who influence enterprise risk management at every level. This evolution involves:
- Embracing enterprise-wide collaboration: Effective management of third-party risk requires cooperation among diverse departments such as procurement, legal, finance, and operations. By collaborating across the organization, CISOs ensure that third-party risk management is comprehensive and proactive rather than reactive.
- Integrating risk management into governance frameworks: Third-party risk should be a top agenda item in board meetings and strategic planning sessions. CISOs need to work with senior leadership to embed vendor risk management into the organization’s overall risk landscape.
- Fostering transparency and accountability: Establishing clear reporting lines and protocols ensures that issues related to third-party risk are promptly escalated and addressed. Accountability should span every level of the organization to ensure effective risk management.

Daily Tech Digest - July 09, 2025


Quote for the day:

"Whenever you see a successful person you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah


Why CIOs see APIs as vital for agentic AI success

API access also goes beyond RAG. It allows agents and their underlying language models not just to retrieve information, but perform database mutations and trigger external actions. This shift allows agents to carry out complex, multi-step workflows that once required multiple human touchpoints. “AI-ready APIs paired with multi-agentic capabilities can unlock a broad range of use cases, which have enterprise workflows at their heart,” says Milind Naphade, SVP of technology and head of AI foundations at Capital One. In addition, APIs are an important bridge out of previously isolated AI systems. ... AI agents can make unprecedented optimizations on the fly using APIs. Gartner reports that PC manufacturer Lenovo uses a suite of autonomous agents to optimize marketing and boost conversions. With the oversight of a planning agent, these agents call APIs to access purchase history, product data, and customer profiles, and trigger downstream applications in the server configuration process. ... But the bigger wins will likely be increased operational efficiency and cost reduction. As Fox describes, this stems from a newfound best-of-breed business agility. “When agentic AI can dynamically reconfigure business processes, using just what’s needed from the best-value providers, you’ll see streamlined operations, reduced complexity, and better overall resource allocation,” she says.


What we can learn about AI from the ‘dead internet theory’

The ‘dead internet theory,’ or the idea that much of the web is now dominated by bots and AI-generated content, is largely speculative. However, the concern behind it is worth taking seriously. The internet is changing, and the content that once made it a valuable source of knowledge is increasingly diluted by duplication, misinformation, and synthetic material. For the development of artificial intelligence, especially large language models (LLMs), this shift presents an existential problem. ... One emerging model for collecting and maintaining this kind of data is Knowledge as a Service (KaaS). Rather than scraping static sources, KaaS creates a living, structured ecosystem of contributions from real users (often experts in their fields) who continuously validate and update content. This approach takes inspiration from open-source communities but remains focused on knowledge creation and maintenance rather than code. KaaS supports AI development with a sustainable, high-quality stream of data that reflects current thinking. It’s designed to scale with human input, rather than in spite of it. ... KaaS helps AI stay relevant by providing fresh, domain-specific input from real users. Unlike static datasets, KaaS adapts as conditions change. It also brings greater transparency, illustrating directly how contributors’ inputs are utilised. This level of attribution represents a step toward more ethical and accountable AI.


The Value of Threat Intelligence in Ensuring DORA Compliance

One of the biggest challenges for security teams today is securing visibility into third-party providers within their ecosystem due to their volume, diversity, and the constant monitoring required. Utilising a Threat Intelligence Platform (TIP) with advanced capabilities can enable a security team to address this gap by monitoring and triaging threats within third-party systems through automation. It can flag potential signs of compromise, vulnerabilities, and risky behaviour, enabling organisations to take pre-emptive action before risks escalate and impact their systems. ... A major aspect of DORA is implementing a robust risk management framework. However, to keep pace with global expansion and new threats and technologies, this framework must be responsive, flexible, and up-to-date. Sourcing, aggregating, and collating threat intelligence data to facilitate this is a time-intensive task, and infeasible for many resource-stretched and siloed security teams. ... From tabletop scenarios to full-scale simulations, these exercises evaluate how well systems, processes, and people can withstand and respond to real-world cyber threats. With an advanced TIP, security teams can leverage customisable workflows to recreate specific operational stress scenarios. These scenarios can be further enhanced by feeding real-world data on attacker behaviours, tactics, and trends, ensuring that simulations reflect actual threats rather than outdated risks.


Why your security team feels stuck

The problem starts with complexity. Security stacks have grown dense, and tools like EDR, SIEM, SOAR, CASB, and DSPM don’t always integrate well. Analysts often need to jump between multiple dashboards just to confirm whether an alert matters. Tuning systems properly takes time and resources, which many teams don’t have. So alerts pile up, and analysts waste energy chasing ghosts. Then there’s process friction. In many organizations, security actions, especially the ones that affect production systems, require multiple levels of approval. On paper, that’s to reduce risk. But these delays can mean missing the window to contain an incident. When attackers move in minutes, security teams shouldn’t be stuck waiting for a sign-off. ... “Security culture is having a bit of a renaissance. Each member of the security team may be in a different place as we undertake this transformation, which can cause internal friction. In the past, security was often tasked with setting and enforcing rules in order to secure the perimeter and ensure folks weren’t doing risky things on their machines. While that’s still part of the job, security and privacy teams today also need to support business growth while protecting customer data and company assets. If business growth is the top priority, then security professionals need new tools and processes to secure those assets.”


Your data privacy is slipping away. Here's why, and what you can do about it

In 2024, the Identity Theft Resource Center reported that companies sent out 1.3 billion notifications to the victims of data breaches. That's more than triple the notices sent out the year before. It's clear that despite growing efforts, personal data breaches are not only continuing, but accelerating. What can you do about this situation? Many people think of the cybersecurity issue as a technical problem. They're right: Technical controls are an important part of protecting personal information, but they are not enough. ... Even the best technology falls short when people make mistakes. Human error played a role in 68% of 2024 data breaches, according to a Verizon report. Organizations can mitigate this risk through employee training, data minimization—meaning collecting only the information necessary for a task, then deleting it when it's no longer needed—and strict access controls. Policies, audits and incident response plans can help organizations prepare for a possible data breach so they can stem the damage, see who is responsible and learn from the experience. It's also important to guard against insider threats and physical intrusion using physical safeguards such as locking down server rooms. ... Despite years of discussion, the U.S. still has no comprehensive federal privacy law. Several proposals have been introduced in Congress, but none have made it across the finish line. 


How To Build Smarter Factories With Edge Computing

According to edge computing experts, these are essentially rugged versions of computers, of any size, purpose-built for their harsh environments. Forget standard form factors; industrial edge devices come in varied configurations specific to the application. This means a device shaped to fit precisely where it’s needed, whether tucked inside a machine or mounted on a factory wall. ... What makes these tough machines intelligent? It’s the software revolution happening on factory floors right now. Historically, industrial computing relied on software specially built to run on bare metal; custom code directly installed on specific machines. While this approach offered reliability and consistent, deterministic performance, it came with significant limitations: slow development cycles, difficult updates and vendor lock-in. ... Communication between smart devices presents unique challenges in industrial environments. Traditional networking approaches often fall short when dealing with thousands of sensors, robots and automated systems. Standard Wi-Fi faces significant constraints in factories where heavy machinery creates electromagnetic interference, and critical operations can’t tolerate wireless dropouts.


Fighting in a cloudy arena

“There are a few primary problems. Number one is that the hyperscalers leverage free credits to get digital startups to build their entire stack on their cloud services,” Cochrane says, adding that as the startups grow, the technical requirements from hyperscalers leave them tied to that provider. “The second thing is also in the relationship they have with enterprises. They say, ‘Hey, we project you will have a $250 million cloud bill, we are going to give you a discount.’ Then, because the enterprise has a contractual vehicle, there’s a mad rush to use as much of the hyperscalers compute as possible because you either lose it or use it. “At the end of the day, it’s like the roach motel. You can check in, but you can’t check out,” he sums up. ... "We are exploring our options to continue to fight against Microsoft’s anti competitive licensing in order to promote choice, innovation, and the growth of the digital economy in Europe." Mark Boost, CEO of UK cloud company Civo, said: ”However they position it, we cannot shy away from what this deal appears to be: a global powerful company paying for the silence of a trade body, and avoiding having to make fundamental changes to their software licensing practices on a global basis.” In the months that followed this decision, things got interesting.


How passkeys work: The complete guide to your inevitable passwordless future

Passkeys are often described as a passwordless technology. In order for passwords to work as a part of the authentication process, the website, app, or other service -- collectively referred to as the "relying party" -- must keep a record of that password in its end-user identity management system. This way, when you submit your password at login time, the relying party can check to see if the password you provided matches the one it has on record for you. The process is the same, whether or not the password on record is encrypted. In other words, with passwords, before you can establish a login, you must first share your secret with the relying party. From that point forward, every time you go to login, you must send your secret to the relying party again. In the world of cybersecurity, passwords are considered shared secrets, and no matter who you share your secret with, shared secrets are considered risky. ... Many of the largest and most damaging data breaches in history might not have happened had a malicious actor not discovered a shared password. In contrast, passkeys also involve a secret, but that secret is never shared with a relying party. Passkeys are a form of Zero Knowledge Authentication (ZKA). The relying party has zero knowledge of your secret, and in order to sign in to a relying party, all you have to do is prove to the relying party that you have the secret in your possession.
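
The underlying idea is ordinary public-key cryptography: at registration the device keeps a private key and hands the relying party only the public key; at login the relying party issues a random challenge and verifies the device's signature over it. The sketch below shows just that core exchange (not the full WebAuthn/FIDO2 protocol) and assumes the `cryptography` Python package.

```python
# Minimal sketch of the public-key idea behind passkeys (not full WebAuthn).
# The relying party stores only the public key; the private key never leaves
# the user's device. Requires the `cryptography` package.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the device generates a key pair and shares only the public key.
device_private_key = Ed25519PrivateKey.generate()
relying_party_public_key = device_private_key.public_key()

# Login: the relying party issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device proves possession of the private key by signing it...
signature = device_private_key.sign(challenge)

# ...and the relying party verifies without ever learning a shared secret.
try:
    relying_party_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```

Because the relying party holds no shared secret, a breach of its database yields only public keys, which are useless for impersonating users.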


Crafting a compelling and realistic product roadmap

The most challenging aspect of roadmap creation is often prioritization. Given finite resources, not everything can be built at once. Effective prioritization requires a clear framework. Common methods include scoring features based on business value versus effort, using frameworks like RICE, or focusing on initiatives that directly address key strategic objectives. Be prepared to say “no” to good ideas that don’t align with current priorities. Transparency in this process is vital. Communicate why certain items are prioritized over others to stakeholders, fostering understanding and buy-in, even when their preferred feature isn’t immediately on the roadmap. ... A product roadmap is a living document, not a static contract. The B2B software landscape is constantly evolving, with new technologies emerging, customer needs shifting, and competitive pressures mounting. A realistic roadmap acknowledges this dynamism. While it provides a clear direction, it should also be adaptable. Plan for regular reviews and updates – quarterly or even monthly – to adjust based on new insights, validated learnings, and changes in the market or business environment. Embrace iterative development and be prepared to pivot or adjust priorities as new information comes to light. 
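
RICE itself reduces to one line of arithmetic: score = (Reach × Impact × Confidence) / Effort. A tiny sketch of using it to rank a backlog follows; the feature names and estimates are made up for illustration.

```python
# RICE prioritization: (Reach * Impact * Confidence) / Effort.
# Feature names and estimates below are made up for illustration.
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    return (reach * impact * confidence) / effort

backlog = {
    "SSO integration":     rice(reach=800, impact=2.0, confidence=0.8, effort=5),
    "Dark mode":           rice(reach=2000, impact=0.5, confidence=0.9, effort=2),
    "Usage analytics API": rice(reach=300, impact=3.0, confidence=0.5, effort=8),
}

# Highest score first; the ranking, not the absolute numbers, drives the roadmap.
for feature, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: {score:.0f}")
```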


Are software professionals ready for the AI tsunami?

Modern AI assistants can translate plain-English prompts into runnable project skeletons or even multi-file apps aligned with existing style guides (e.g., Replit). This capability accelerates experimentation and learning, especially when teams are exploring unfamiliar technology stacks. A notable example is MagicSchool.com, a real-world educational platform created using AI-assisted coding workflows, showcasing how AI can powerfully convert conceptual prompts into usable products. These tools enable rapid MVP development that can be tested directly with customers. Once validated, the MVP can then be scaled into a full-fledged product. Rapid code generation can lead to fragile or opaque implementations if teams skip proper reviews, testing, and documentation. Without guardrails, it risks technical debt and poor maintainability. To stay reliable, agile teams must pair AI-generated code with sprint reviews, CI pipelines, automated testing, and strategies to handle evolving features and business needs. Recognising the importance of this shift, tech giants like Amazon (CodeWhisperer) and Google (AlphaCode) are making significant investments in AI development tools, signaling just how central this approach is becoming to the future of software engineering.

Daily Tech Digest - July 08, 2025


Quote for the day:

“If you really want the key to success, start by doing the opposite of what everyone else is doing.” -- Brad Szollose


MCP Vulnerability Exposes the AI Untrusted Code Crisis

Most organizations have rigorous approval processes before allowing arbitrary code to run in their environments whether from open source projects or vendor solutions. Yet with this new wave of tools, we’re simultaneously allowing thousands of employees to constantly update codebases with arbitrary, untrusted AI-generated code or wiring said codebases and applications to mechanisms that can alter or modify their behavior. This isn’t about stopping the use of AI coding agents or sacrificing the massive productivity gains they provide. Instead, we should standardize better ways that allow us to run untrusted code across our software development pipelines. ... As AI development tools gain adoption across enterprises, there is a new class of systems to support them that can execute code on behalf of developers. This includes AI code assistants generating and running code snippets, MCP servers providing AI systems access to local tools and data, automated testing tools executing AI-generated test cases and development agents performing complex multistep operations. Each of these represents a potential code execution pathway that often bypasses traditional security controls. The risk isn’t just that AI-generated code can be inadvertently malicious; it’s that these new systems also create pathways for untrusted code execution.
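
One common baseline for "standardized ways to run untrusted code" is to execute it in a separate, time- and resource-limited process rather than inside the host application. The sketch below shows that minimal guardrail on POSIX systems; the limits are illustrative, and real deployments add container or VM isolation, network egress controls, and least-privilege credentials.

```python
# Minimal sketch of one guardrail for untrusted, AI-generated code: run it
# in a separate process with a wall-clock timeout and CPU/memory limits.
# Limits are illustrative; POSIX only (uses the `resource` module).
import resource
import subprocess
import sys

def limit_resources():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                     # 2 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MB address space

def run_untrusted(code: str) -> subprocess.CompletedProcess:
    return subprocess.run(
        [sys.executable, "-I", "-c", code],   # -I: isolated mode, no site/user paths
        capture_output=True,
        text=True,
        timeout=5,                            # wall-clock cap
        preexec_fn=limit_resources,           # applied in the child before exec
    )

result = run_untrusted("print(sum(range(10)))")
print(result.stdout.strip(), result.returncode)
```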


Is English the next programming language? JetBrains’ CEO says no

JetBrains does need to contend with the fact that many of its users are being threatened by AI replacing them, even if he notes that job displacement isn’t happening at anywhere near the rate some have suggested. Products, languages and IT infrastructure can indeed be made redundant too. We may also add that many layoff rounds use AI as an excuse to make cuts that are simply financially motivated. Still, we need to appreciate that AI is indeed changing the overall landscape. Tasks can be automated, and AI is eagerly shoveling up the developer code that’s freely available online. What about Kotlin specifically?  ... “Here’s my vision. I think programming languages will evolve a lot. I admit that you may not need high level programming languages in the classical sense anymore, but the solution still wouldn’t be English.” Skrygan envisions a middle ground between Kotlin and natural language. Currently, the closest approximation is Kotlin DSL. It’s a design doc that can be compiled as code. Ultimately, like anything digital, it converts into binary at the lowest level. The JetBrains CEO highlights how this is merely a repeat of what we’ve already seen: “People were writing in bytecode and assembler 40 years ago. Now, nobody cares about it anymore. It’s secondary.”

Privacy is blockchain’s missing link—and America’s opportunity to lead

We are at an inflection point. On one hand, blockchain has evolved from an experimental idea into a foundational layer for decentralized finance (DeFi), gaming, cross-border payments, and digital identity. On the other, the absence of privacy threatens to stall its momentum. Without privacy guarantees, Web3 won’t scale into a secure, inclusive internet economy—it will remain a risky, self-surveilling shadow of its potential. It’s not just user safety at stake. Institutional adoption, long seen as the tipping point for crypto’s maturation, is lagging in part because privacy solutions are underdeveloped. Financial institutions and enterprises cannot embrace systems that force them to reveal business-sensitive transactions to competitors and regulators alike. Privacy is not the enemy of compliance; it’s a prerequisite for serious engagement. ... First, policymakers must move past the false binary of privacy versus compliance. These are not mutually exclusive goals. Clear guidelines that embrace advanced cryptography, establish safe harbors for privacy-preserving innovation, and differentiate between consumer protection and surveillance will enable the next generation of secure digital finance. Second, industry leaders need to elevate privacy to the level of consensus mechanisms, scalability, and user experience. 


How scientists are trying to use AI to unlock the human mind

In one of the studies, researchers transformed a large language model into what they refer to as a “foundation model of human cognition.” Out of the box, large language models aren’t great at mimicking human behavior—they behave logically in settings where humans abandon reason, such as casinos. So the researchers fine-tuned Llama 3.1, one of Meta’s open-source LLMs, on data from a range of 160 psychology experiments, which involved tasks like choosing from a set of “slot machines” to get the maximum payout or remembering sequences of letters. ... Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. ... The second of the two Nature studies focuses on minuscule neural networks—some containing only a single neuron—that nevertheless can predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it’s possible to track the activity of each individual neuron and use that data to figure out how the network is producing its behavioral predictions. 


New Study Reveals True AI Capabilities And Job Replacement Risk

For business leaders, this framework offers something really valuable: a reality check that cuts through vendor marketing speak. When a sales representative promises their AI solution will "revolutionize your operations," you can now ask pointed questions about which capability levels their system actually achieves and in which specific domains. The gap analysis between current AI capabilities and the requirements of specific business tasks becomes clearer when standardized benchmarks are in place. Consider customer service, where companies are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD framework suggests that while AI can handle structured interactions reasonably well, anything requiring genuine social intelligence, nuanced problem-solving, or creative thinking quickly exposes current limitations. This doesn't mean AI isn't useful in customer service, but it helps set realistic expectations about what human oversight will still be necessary. It's the difference between using AI as a sophisticated tool versus expecting it to be a replacement employee. One approach leads to productivity gains; the other leads to customer complaints and public relations disasters.


Why EU Policy Must Catch Up to the Neurotechnology Boom

After conducting a comprehensive analysis of nearly 300 neurotechnology companies worldwide, the Center for Future Generations discovered a surprising trend: among firms fully dedicated to neurotech, consumer firms now outnumber medical ones, making up 60% of the global neurotechnology landscape. And they're proliferating at an unprecedented rate—more than quadrupling in the past decade compared to the previous 25 years. ... EEG, the technology at the heart of this revolution, has been around since the 1920s. It's crude and can't read individual thoughts, but it can detect patterns of brain activity related to focus, fatigue, and even emotional states. And when coupled with artificial intelligence and other personal data—like location, buying behaviors, and biometrics—these patterns can reveal far more about us than we might imagine. ... As this technology moves into the mainstream, the potential for misuse becomes profound. Imagine pre-election advertising that adapts its messaging based on your emotional reaction. Imagine disinformation campaigns tailored to your subconscious fears, measured directly from your brain. Imagine authoritarian governments monitoring emotional responses to propaganda, searching for dissent in citizens' brainwaves. This marks a critical moment for European policymakers.


Enterprises Are Prioritizing Generative AI Spending in 2025

The report, "Generative AI Adoption Index," highlights how organizations are moving gen AI from experimentation to full-scale implementation and offers practical strategies to create business value. CEOs, CTOs and CIOs currently lead most gen AI innovation, but leadership structures are evolving to include specialized AI roles, such as CAIOs, at the highest levels of organizations. ... Along with CAIOs, a thoughtful change management strategy will be critical. The ideal strategy should address operating model changes, data management practices and talent pipelines. Today, just 14% of organizations have a change management strategy, but this will increase to 76% by end of 2026, highlighting a growing recognition of the need for structured adaptation. But a sizable proportion of organizations may still struggle to keep pace with AI-driven transformation, with one in four organizations still lacking a strategy in 2026. ... Third-party vendors are becoming key enablers of gen AI transformation across organizations globally. From supplying outsourced talent to offering services such as cloud computing and storage, these vendors help bridge critical technology and talent gaps. Effective gen AI deployment will depend on strong collaboration between external experts and internal teams. 


AI’s rise demands more from the UK data center market

The growing demand for digital infrastructure, fueled by the surge in AI, has intensified competition for suitable land to build data centers. This scarcity (particularly in London), coupled with the rise in construction and operational costs, makes it difficult to establish data centers in the most efficient and cost-effective manner. Similarly, an over-reliance on well-established technology clusters (such as West London) can increase resource constraints and vulnerability to power outages and downtime. With UK policy frameworks around data centers still evolving, discussions are ongoing around security, energy consumption, and specific regulatory needs. ... Similarly, traditional methods demand a high level of energy consumption to keep AI chips operating at optimal temperatures. Given the energy-intensive nature of air cooling and the likelihood that it cannot keep up with cooling demands, the data center industry is reaching a critical juncture: stifle the capabilities of AI technologies by not integrating effective thermal management, or invest in a more effective, future-thinking approach to cooling? ... The UK’s data center expansion is not just a scaling project; it is a rethinking of what data centers and associated cooling infrastructures must become.


Why CISOs are making the SASE switch: Fewer vendors, smarter security, better AI guardrails

“SASE is an existential threat to all appliance-based network security companies,” Shlomo Kramer, Cato’s CEO, told VentureBeat. “The vast majority of the market is going to be refactored from appliances to cloud service, which means SASE [is going to be] 80% of the market.” A fundamental architectural transformation is driving that shift. SASE converges traditionally siloed networking and security functions into a single, cloud-native service edge. It combines SD-WAN with critical security capabilities, including secure web gateway (SWG), cloud access security broker (CASB) and ZTNA to enforce policy and protect data regardless of where users or workloads reside. ... The SASE consolidation wave reveals how enterprises are fundamentally rethinking security architecture. With AI attacks exploiting integration gaps instantly, single-vendor SASE has become essential for both protection and operational efficiency. The reasoning is straightforward. Every vendor handoff creates vulnerability. Each integration adds latency. Security leaders know that unified platforms can help eliminate these risks while enabling business velocity. CISOs are increasingly demanding a single console, a single agent and unified policies. 


CISOs urged to fix API risk before regulation forces their hand

The widespread use of APIs to support mobile apps, cloud services, and partner integrations means that the attack surface has changed. But the security practices often haven’t. APIs today handle everything from identity claims and cardholder data to health and account information. Yet in many organizations, they remain outside the scope of standard security programs. ... Oppenheim added that meaningful oversight at the board level doesn’t require technical fluency. “Board-level metrics in such a technically complex space can be difficult to surface meaningfully, but there are still effective ways to guide oversight and investment. Directors should ask which recognised standards (e.g. FAPI) have been adopted or are in the roadmap, and whether the organization has applied a maturity model or framework to benchmark its current posture and track improvements over time.” ... So far, the biggest improvements in API security have come either through direct regulation or industry-led mandates. But pressure is building elsewhere. “Again, organizational size plays a key role,” said Oppenheim. “Larger firms and infrastructure providers are already moving ahead voluntarily – not just in banking, but in payments and identity platforms – because they see strong API security as a necessary foundation for scale and trust.”

Daily Tech Digest - July 02, 2025


Quote for the day:

"Success is not the absence of failure; it's the persistence through failure." -- Aisha Tyle


How cybersecurity leaders can defend against the spur of AI-driven NHI

Many companies don’t have lifecycle management for all their machine identities and security teams may be reluctant to shut down old accounts because doing so might break critical business processes. ... Access-management systems that provide one-time-use credentials to be used exactly when they are needed are cumbersome to set up. And some systems come with default logins like “admin” that are never changed. ... AI agents are the next step in the evolution of generative AI. Unlike chatbots, which only work with company data when provided by a user or an augmented prompt, agents are typically more autonomous, and can go out and find needed information on their own. This means that they need access to enterprise systems, at a level that would allow them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly “it opens up the door to a lot of bad things to happen.” Because of their ability to plan, reason, act, and learn AI agents can exhibit unpredictable and emergent behaviors. An AI agent that’s been instructed to accomplish a particular goal might find a way to do it in an unanticipated way, and with unanticipated consequences. This risk is magnified even further, with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes. 


The silent backbone of 5G & beyond: How network APIs are powering the future of connectivity

Network APIs are fueling a transformation by making telecom networks programmable and monetisable platforms that accelerate innovation, improve customer experiences, and open new revenue streams. ... Contextual intelligence is what makes these new-generation APIs so attractive. Your needs change significantly depending on whether you’re playing a cloud game, streaming a match, or participating in a remote meeting. Programmable networks can now detect these needs and adjust dynamically. Take the example of a user streaming a football match. With network APIs, a telecom operator can offer temporary bandwidth boosts just for the game’s duration. Once it ends, the network automatically reverts to the user’s standard plan—no friction, no intervention. ... Programmable networks are expected to have the greatest impact in Industry 4.0, which goes beyond consumer applications. ... 5G, combined with IoT and network APIs, enables industrial systems to become truly connected and intelligent. Remote monitoring of manufacturing equipment allows for real-time maintenance schedule adjustments based on machine behavior. Over a programmable, secure network, an API-triggered alert can coordinate a remote diagnostic session and even start remedial actions if a fault is found.
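
As a rough illustration of the bandwidth-boost scenario, the sketch below shows how a quality-on-demand style network API might be invoked from an application. The endpoint URL, payload fields, profile name, and token handling are all hypothetical; actual operator APIs will differ.

```python
import requests

# Hypothetical operator endpoint and credentials -- replace with the
# actual network API exposed by the telecom provider.
QOD_ENDPOINT = "https://api.example-operator.com/qod/v1/sessions"
ACCESS_TOKEN = "<token obtained via the operator's auth flow>"

def request_bandwidth_boost(device_ip: str, duration_seconds: int) -> dict:
    """Ask the network for a temporary QoS boost for one device.

    When the session expires, the network reverts to the user's
    standard plan automatically -- no manual intervention needed.
    """
    payload = {
        "device": {"ipv4Address": device_ip},
        "qosProfile": "LOW_LATENCY_HIGH_BANDWIDTH",   # illustrative profile name
        "duration": duration_seconds,                  # e.g. length of the match
    }
    response = requests.post(
        QOD_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Boost a viewer's connection for a two-hour football stream.
# session = request_bandwidth_boost("203.0.113.7", duration_seconds=2 * 60 * 60)
```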


Quantum Computers Just Reached the Holy Grail – No Assumptions, No Limits

A breakthrough led by Daniel Lidar, a professor of engineering at USC and an expert in quantum error correction, has pushed quantum computing past a key milestone. Working with researchers from USC and Johns Hopkins, Lidar’s team demonstrated a powerful exponential speedup using two of IBM’s 127-qubit Eagle quantum processors — all operated remotely through the cloud. Their results were published in the prestigious journal Physical Review X. “There have previously been demonstrations of more modest types of speedups like a polynomial speedup,” says Lidar, who is also the cofounder of Quantum Elements, Inc. “But an exponential speedup is the most dramatic type of speed up that we expect to see from quantum computers.” ... What makes a speedup “unconditional,” Lidar explains, is that it doesn’t rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of “Simon’s problem,” an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.
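
For context, the textbook separation behind Simon's problem looks like this (stated for the idealized, noise-free oracle setting, not the specific hardware variant the team ran): given an oracle $f$ with $f(x) = f(y)$ if and only if $y = x \oplus s$ for some hidden $n$-bit string $s$, the query complexities are

```latex
\text{classical queries required: } \Omega\!\left(2^{n/2}\right)
\qquad
\text{quantum queries required: } O(n)
```

which is the exponential-versus-linear gap that makes this family of problems a natural benchmark for unconditional speedup claims.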


4 things that make an AI strategy work in the short and long term

Most AI gains came from embedding tools like Microsoft Copilot, GitHub Copilot, and OpenAI APIs into existing workflows. Aviad Almagor, VP of technology innovation at tech company Trimble, also notes that more than 90% of Trimble engineers use GitHub Copilot. The ROI, he says, is evident in shorter development cycles, and reduced friction in HR and customer service. Moreover, Trimble has introduced AI into their transportation management system, where AI agents optimize freight procurement by dynamically matching shippers and carriers. ... While analysts often lament the difficulty of showing short-term ROI for AI projects, these four organizations disagree — at least in part. Their secret: flexible thinking and diverse metrics. They view ROI not only as dollars saved or earned, but also as time saved, satisfaction increased, and strategic flexibility gained. London says that Upwave listens for customer signals like positive feedback, contract renewals, and increased engagement with AI-generated content. Given the low cost of implementing prebuilt AI models, even modest wins yield high returns. For example, if a customer cites an AI-generated feature as a reason to renew or expand their contract, that’s taken as a strong ROI indicator. Trimble uses lifecycle metrics in engineering and operations. For instance, one customer used Trimble AI tools to reduce the time it took to perform a tunnel safety analysis from 30 minutes to just three.


How IT Leaders Can Rise to a CIO or Other C-level Position

For any IT professional who aspires to become a CIO, the key is to start thinking like a business leader, not just a technologist, says Antony Marceles, a technology consultant and founder of software staffing firm Pumex. "This means taking every opportunity to understand the why behind the technology, how it impacts revenue, operations, and customer experience," he explained in an email. The most successful tech leaders aren't necessarily great technical experts, but they possess the ability to translate tech speak into business strategy, Marceles says, adding that "Volunteering for cross-functional projects and asking to sit in on executive discussions can give you that perspective." ... CIOs rarely have solo success stories; they're built up by the teams around them, Marceles says. "Colleagues can support a future CIO by giving honest feedback, nominating them for opportunities, and looping them into strategic conversations." Networking also plays a pivotal role in career advancement, not just for exposure, but for learning how other organizations approach IT leadership, he adds. Don't underestimate the power of having an executive sponsor, someone who can speak to your capabilities when you’re not there to speak for yourself, Eidem says. "The combination of delivering value and having someone champion that value -- that's what creates real upward momentum."


SLMs vs. LLMs: Efficiency and adaptability take centre stage

SLMs are becoming central to Agentic AI systems due to their inherent efficiency and adaptability. Agentic AI systems typically involve multiple autonomous agents that collaborate on complex, multi-step tasks and interact with environments. Fine-tuning methods like Reinforcement Learning (RL) effectively imbue SLMs with task-specific knowledge and external tool-use capabilities, which are crucial for agentic operations. This enables SLMs to be efficiently deployed for real-time interactions and adaptive workflow automation, overcoming the prohibitive costs and latency often associated with larger models in agentic contexts. ... Operating entirely on-premises ensures that decisions are made instantly at the data source, eliminating network delays and safeguarding sensitive information. This enables timely interpretation of equipment alerts, detection of inventory issues, and real-time workflow adjustments, supporting faster and more secure enterprise operations. SLMs also enable real-time reasoning and decision-making through advanced fine-tuning, especially Reinforcement Learning. RL allows SLMs to learn from verifiable rewards, teaching them to reason through complex problems, choose optimal paths, and effectively use external tools. 
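
To show what "learning from verifiable rewards" can mean in practice, here is a minimal sketch of a programmatically checkable reward signal that an RL fine-tuning loop could use. The output format, the "Answer:" marker, and the tool name in the example trace are assumptions for illustration only.

```python
def verifiable_reward(model_output: str, expected_answer: str) -> float:
    """Return 1.0 only when the model's final answer can be checked programmatically.

    Unlike a learned reward model, this signal is verifiable: the policy
    cannot game it without actually producing the correct result.
    """
    marker = "Answer:"
    if marker not in model_output:
        return 0.0
    answer = model_output.rsplit(marker, 1)[-1].strip()
    return 1.0 if answer == expected_answer.strip() else 0.0

# Example: a small model reasons through a tool-use task and states a result.
trace = "Called inventory_api(sku='A12')... stock level is 42. Answer: 42"
print(verifiable_reward(trace, "42"))  # 1.0 -> this reasoning path gets reinforced
```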


Quantum’s quandary: racing toward reality or stuck in hyperbole?

One important reason is for researchers to demonstrate their advances and show that they are adding value. Quantum computing research requires significant expenditure, and the return on investment will be substantial if a quantum computer can solve problems previously deemed unsolvable. However, this return is not assured, nor is the timeframe for when a useful quantum computer might be achievable. To continue to receive funding and backing for what ultimately is a gamble, researchers need to show progress — to their bosses, investors, and stakeholders. ... As soon as such announcements are made, scientists and researchers scrutinize them for weaknesses and hyperbole. The benchmarks used for these tests are subject to immense debate, with many critics arguing that the computations are not practical problems or that success in one problem does not imply broader applicability. In Microsoft’s case, a lack of peer-reviewed data means there is uncertainty about whether the Majorana particle even exists beyond theory. The scientific method encourages debate and repetition, with the aim of reaching a consensus on what is true. However, in quantum computing, marketing hype and the need to demonstrate advancement take priority over the verification of claims, making it difficult to place these announcements in the context of the bigger picture.


Ethical AI for Product Owners and Product Managers

As the product and customer information steward, the PO/PM must lead the process of protecting sensitive data. The Product Backlog often contains confidential customer feedback, competitive analysis, and strategic plans that cannot be exposed. This guardrail requires establishing clear protocols for what data can be shared with AI tools. A practical first step is to lead the team in a data classification exercise, categorizing information as Public, Internal, or Restricted. Any data classified for internal use, such as direct customer quotes, must be anonymized before being used in an AI prompt. ... AI is proficient at generating text but possesses no real-world experience, empathy, or strategic insight. This guardrail involves proactively defining the unique, high-value work that AI can assist but never replace. Product leaders should clearly delineate between AI-optimal tasks, creating first drafts of technical user stories, summarizing feedback themes, or checking for consistency across Product Backlog items and PO/PM-essential areas. These human-centric responsibilities include building genuine empathy through stakeholder interviews, making difficult strategic prioritization trade-offs, negotiating scope, resolving conflicting stakeholder needs, and communicating the product vision. By modeling this partnership and using AI as an assistant to prepare for strategic work, the PO/PM reinforces that their core value lies in strategy, relationships, and empathy.
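
As a simple illustration of the "anonymize before prompting" guardrail, the sketch below runs a redaction pass over a customer quote before it would be pasted into an AI tool. The patterns and the sample quote are illustrative only and do not constitute a complete PII policy.

```python
import re

# Minimal redaction pass for Internal/Restricted text before it reaches an AI prompt.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),          # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),            # phone numbers
    (re.compile(r"\b(Acme Corp|Jane Doe)\b"), "[CUSTOMER]"),      # known names to scrub
]

def anonymize(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

quote = "Jane Doe (jane.doe@acme.com, +1 415 555 0100) asked for an export feature."
print(anonymize(quote))
# -> "[CUSTOMER] ([EMAIL], [PHONE]) asked for an export feature."
```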


Sharded vs. Distributed: The Math Behind Resilience and High Availability

In probability theory, independent events are events whose outcomes do not affect each other. For example, when throwing four dice, the number displayed on each die is independent of the other three. Similarly, the availability of each server in a six-node application-sharded cluster is independent of the others. This means that each server has an individual probability of being available or unavailable, and the failure of one server is not affected by the failure or otherwise of other servers in the cluster. In reality, there may be shared resources or shared infrastructure that links the availability of one server to another. In mathematical terms, this means that the events are dependent. However, we consider the probability of these types of failures to be low, and therefore, we do not take them into account in this analysis.  ... Traditional architectures are limited by single-node failure risk. Application-level sharding compounds this problem because if any node goes down, its shard and therefore the total system becomes unavailable. In contrast, distributed databases with quorum-based consensus (like YugabyteDB) provide fault tolerance and scalability, enabling higher resilience and improved availability.
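
To make the contrast concrete, here is a simplified calculation under that independence assumption. The per-node availability of 99% and the replication factor of 3 are illustrative choices, not figures from the article: the sharded cluster needs every node up, while a quorum group of three replicas stays available with any one replica down.

```latex
A_{\text{sharded}} = p^{6} \approx 0.9415 \quad (p = 0.99)
\qquad
A_{\text{quorum, RF}=3} = p^{3} + 3p^{2}(1-p) \approx 0.9997 \quad (p = 0.99)
```

Under these assumptions the sharded design loses roughly 6% availability to single-node risk, while the quorum-based group keeps it to a few hundredths of a percent.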


How FinTechs are turning GRC into a strategic enabler

The misconception that risk management and innovation exist in tension is one that modern FinTechs must move beyond. At its core, cybersecurity – when thoughtfully integrated – serves not as a brake but as an enabler of innovation. The key is to design governance structures that are intelligent, adaptive, and resilient in themselves. The foundation lies in aligning cybersecurity risk management with the broader business objective: enablement. This means integrating security thinking early in the innovation cycle, using standardized interfaces, expectations, and frameworks that don’t obstruct, but rather channel innovation safely. For instance, when risk statements are defined consistently across teams, decisions can be made faster and with greater confidence. Critically, it starts with the threat model. A well-defined, enterprise-level threat model is the compass that guides risk assessments and controls where they matter most. Yet many companies still operate without a clear articulation of their own threat landscape, leaving their enterprise risk strategies untethered from reality. Without this grounding, risk management becomes either overly cautious or blindly permissive, or a bit of both. We place a strong emphasis on bridging the traditional silos between GRC, IT Security, Red Teaming, and Operational teams.

Daily Tech Digest - June 03, 2025


Quote for the day:

"Keep your fears to yourself, but share your courage with others." -- Robert Louis Stevenson


Is it Time to Accept that the Current Role of the CISO Has Failed?

First of all, it was never conceived as a true C-level role. It probably originated in the minds of some organisation consultants, but it never developed any true C-level weight. Even if it may hurt some, it is my opinion that it was very rarely given to people with true C-level potential. Second, it was almost always given to technologists by trade or background, although the underlying matter is unequivocally cross-functional and always has been: you cannot succeed at identity and access management, for example, without the involvement of HR and business units, and the ability to reach credibly towards them. ... It has aggregated a mixed set of responsibilities and accountabilities without building up the right organisational and managerial momentum, and many CISOs are simply being set up to fail: the role has simply become too complex to carry for the profile of the people it attracts. To break this spiral, the logic is now to split the role, stripping off the managerial layers it has accumulated over the years and refocusing the CISO on its native technical content, so that it can lead effectively and efficiently at that level, while bringing up a CSO role able to reach across business, IT and support functions to take charge of the corporate complexity that cybersecurity is now amalgamating in large firms.


How to Fortify Your Business’s Online Infrastructure Against Downtime

The first step to protecting your online infrastructure against downtime is to assess just how much downtime risk is viable for your business. Understanding how much downtime you can realistically afford is important for developing a sound IT strategy. Your viable downtime limit will define your tolerance to risk and allow you to direct your resources toward the systems that keep your operations running optimally for as long as possible. The average accepted downtime rate for a website is just 0.05%. That means your systems should experience uptime at least 99.95% of the time. If you have a low risk tolerance – say, for instance, if you rely on an ecommerce platform to generate revenue – investing in IT continuity technology is essential for keeping downtime minimal. ... The first step to safeguarding your organization against cyberattacks is to regularly audit your network security measures. This helps to spot vulnerabilities and address them, ensuring your IT systems are always protected against continuously advancing threats. Begin by creating a map of your existing network infrastructure, including all of its user access points, hardware, and software. This map will allow you to track changes and quickly identify unauthorized modifications and additions.
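
Translating the 99.95% uptime target into a concrete downtime budget is a simple calculation; the snippet below is a quick sketch of that arithmetic.

```python
# Translate a 99.95% uptime target into an annual downtime budget.
uptime_target = 0.9995
hours_per_year = 365.25 * 24        # ~8766 hours

downtime_budget_hours = (1 - uptime_target) * hours_per_year
print(f"{downtime_budget_hours:.2f} hours of downtime per year")   # ~4.38 hours
print(f"{downtime_budget_hours * 60 / 12:.1f} minutes per month")  # ~21.9 minutes
```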


Private cloud still matters—but it doesn’t matter most

Large enterprises will maintain significant on-premises footprints for the foreseeable future, for all the reasons we’ve discussed. The enterprise IT landscape in 2025 is undeniably hybrid and likely always will be. But it’s equally undeniable that the center of gravity for innovation has shifted. When a new opportunity emerges—say, deploying a breakthrough AI model or scaling a customer-facing app to millions of users overnight—companies aren’t spinning up a new on-premises cluster to meet the moment. They’re tapping the virtually unlimited resources of AWS, Azure, Google, or edge networks like Cloudflare. They’re doing so because cloud offers experimentation without hardware procurement, and success isn’t gated by how many servers you happen to own. Private clouds excel at running the known and steady. Public clouds excel at unleashing the unknown and extraordinary. As we reach a cloud/on-prem equilibrium, this division of labor is becoming clearer. The day-to-day workloads that keep the business running may happily live in a familiar private cloud enclave. But the industry-defining projects, the ones leaders hope will define the business’s future, gravitate to infrastructure that can stretch to any size, in any region, at a moment’s notice. 


Why Generative AI Needs Architecture, Not Just APIs

The root of the problem often lies in treating gen AI as an add-on to legacy systems rather than embedding it into core operations. This leads to inconsistent implementation, unclear ownership and limited returns. To deliver meaningful outcomes, organizations must start by identifying areas where gen AI can enhance decisions, such as customer engagement, service workflows and regulatory compliance. ... When the focus is only on launching siloed applications, organizations may move fast initially, but they end up with systems that are difficult to scale, integrate or adapt. That's where architecture-centric thinking becomes critical. A strong architectural foundation built on modularity, interoperability and scalability ensures that future applications don't just add features but add lasting value; the goal is to build to last. This means building platforms that support change, not just one-off projects. It's also about fostering collaboration between business and IT, so decisions can be made with both speed and stability in mind. ... The "situational layer cake" architecture enables enterprises to build applications in distinct layers, such as enterprise-wide, division-specific and implementation layers, facilitating a balance between reusability and customization. This structure allows the creation of reusable components that can be tailored to specific business contexts without redundant coding, streamlining operations and reducing complexity.
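
One way to read the layered approach in code is shown below: a reusable enterprise-wide component, a division-specific override, and a thin implementation-level configuration. The class names and the stand-in model call are hypothetical, meant only to illustrate the reuse-versus-customization split, not any vendor's actual framework.

```python
from dataclasses import dataclass

# Enterprise-wide layer: reusable behaviour shared by every application.
class DocumentSummarizer:
    def summarize(self, text: str) -> str:
        redacted = self.redact(text)
        return self.call_model(redacted, style=self.style())

    def redact(self, text: str) -> str:
        return text                       # shared compliance behaviour lives here

    def style(self) -> str:
        return "neutral, concise"

    def call_model(self, text: str, style: str) -> str:
        return f"[{style}] summary of {len(text)} chars"   # stand-in for a model call

# Division-specific layer: overrides only what differs for this business unit.
class ClaimsSummarizer(DocumentSummarizer):
    def style(self) -> str:
        return "regulatory tone, cite policy numbers"

# Implementation layer: per-deployment configuration, no redundant code.
@dataclass
class Deployment:
    summarizer: DocumentSummarizer
    region: str

emea_claims = Deployment(summarizer=ClaimsSummarizer(), region="EMEA")
print(emea_claims.summarizer.summarize("Claim #123: water damage to property..."))
```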


Scattered Spider: Understanding Help Desk Scams and How to Defend Your Organization

The goal of a help desk scam is to get the help desk operator to reset the credentials and/or MFA used to access an account so the attacker can take control of it. They'll use a variety of backstories and tactics to get that done, but most of the time it's as simple as saying "I've got a new phone, can you remove my existing MFA and allow me to enroll a new one?" From there, the attacker is then sent an MFA reset link via email or SMS. Usually, this would be sent to, for example, a number on file — but at this point, the attacker has already established trust and bypassed the help desk process to a degree. So asking "Can you send it to this email address" or "I've actually got a new number too, can you send it to…" gets this sent directly to the attacker. ... But, help desks are a target for a reason. They're "helpful" by nature. This is usually reflected in how they're operated and performance measured — delays won't help you to hit those SLAs! Ultimately, a process only works if employees are willing to adhere to it — and can't be socially engineered to break it. Help desks that are removed from day-to-day operations are also inherently susceptible to attacks where employees are impersonated. But, the attacks we're experiencing at the moment should give security stakeholders plenty of ammunition as to why help desk reforms are vital to securing the business.


Banking on intelligence: How AI is powering the next evolution of financial services

With constantly evolving regulations, financial institutions need stringent compliance measures to avoid penalties and disruptions. AI steps in as a powerful ally, automating compliance tasks to slash manual workloads and boost reporting accuracy. AI agents digest regulatory data, churn out compliance reports, and handle KYC/AML validations—cutting errors while speeding up the process. While implementing the changes, financial institutions must comply with data localisation mandates and ensure AI solutions are hosted within India. To mitigate data privacy risks, personally identifiable information (PII) is anonymised, and AI is deployed within Virtual Private Cloud environments. AI systems automate document verification, ensuring consistent validation and improving audit readiness. ... An AI-enabled Underwriting Workbench is an immensely helpful tool for streamlining documentation and offering a single-window interface. GenAI further enhances credit assessments by analysing alternative data—like transaction history, social media, and employment records—offering a comprehensive view of an applicant’s financial health. This enables banks to make inclusive, risk-aware lending decisions. Agentic AI further calibrates the process by automating tasks like application assessments and borrower information verifications, enabling near-instant loan decisions with minimal human intervention.


Why the end of Google as we know it could be your biggest opportunity yet

Now, before you think I'm writing Google's obituary, let me be clear. Like I've said before, I'm confident they'll figure it out, even if that means changing their business model. That said, if your business depends on Google in any way, whether it's your business profile, reviews, SEO, or products like Ad Manager to drive traffic, you need to pay attention to what's happening. ... The Department of Justice and several states are suing Google's parent company, Alphabet, arguing that its exclusive deals with companies like Apple are anticompetitive and potentially monopolistic. Basically, Google is paying billions to be the default search engine on Apple devices, effectively shutting out any real competition. The ruling in this case could break up their reported $20 billion-a-year agreement. ... Long story short, the way people discover, research, and choose businesses is changing one AI update at a time, but it's essential to note that people are still searching, just not in the same places they used to. That nuance is critical to understanding your next move. As more users turn to AI tools like ChatGPT and Perplexity for answers, traditional search engines are no longer the only gateway to your business. This shift in behavior over time will result in less traffic to your product or service. 


How global collaboration is hitting cybercriminals where it hurts

Collaboration and intelligence sharing is at the heart of our approach to tackling the threat within the NCA, and we enjoy relationships with partners across the public and private sector both nationally and internationally. We’re united and motivated, in many ways, by a common mission. Some of these are formalised law enforcement relationships that we have had for a long time – for example, I was the NCA’s embed to the FBI in Washington DC for a number of years. But, it is not just limited to the US – the NCA is lucky to enjoy brilliant relationships with the ‘five eyes’ countries and partners across Europe and beyond in the fight against cybercrime. ... In the NCA, we are predominantly focused on financially motivated cybercrime, with ransomware as a main area of focus given how significant the threat it poses to the UK. We recognise that some cybercrime groups have connections to the Russian State, but assess that these type of deep-rooted relationships are likely to be the exception as opposed to the norm. When targeting the cybercrime threat, we have been focused on associating cost and risk to the threat actors who seek to cause harm to us and our allies, and we achieve this in a number of different ways. The NCA-led disruption of LockBit in 2024 was successful in undermining trust between members of the group, as well as any trust that victims might have had in LockBit keeping their word. 


Future-Proofing AI: Repeating Mistakes or Learning From the Past?

Are the enterprises rushing to deploy new open source AI projects taking the necessary security measures to isolate them from the rest of their infrastructure? Or are they disregarding recent open source security history and trusting them by default? Alarmingly, there are also reports that China-, North Korea- and Russia-based cybercriminal groups are actively targeting both physical and AI infrastructure while leveraging AI-generated malware to exploit vulnerabilities more efficiently. ... Next-generation AI infrastructure cannot be beholden to performance penalties that arise from using today’s solutions to create true, secure, multitenant environments. By combining the best aspects of bare-metal performance with container-like deployment models, organizations can build systems that deliver both speed and convenience. ... We cannot build a solid future if we ignore the wisdom of the past. The foundations of computing security, resource management and operational efficiency were laid decades ago by pioneers who had to make every CPU cycle and memory byte count. Their lessons are more relevant now than ever as we build systems that consume unprecedented computational resources. The organizations that will endure in the AI era won’t necessarily be those with the largest infrastructure investments or the trendiest technology stacks. 


Eight ways storage IT pros can evolve in the age of analytics and AI

Large organizations are spending millions of dollars annually on data storage, backups, and disaster recovery. On balance, there’s nothing wrong with that since data is the center of everything today – but all data should not be treated the same. Using cost modeling tools, the storage manager can enter actual storage costs to determine upfront new projected storage costs and actual usable capacity, based on data growth rates. These costs must factor in backups and disaster recovery, which can be 3X of storage spending, and should compare on-premises versus cloud models. An unstructured data management system that indexes all data across all storage can supply metrics on data volumes, costs, and predicted costs, and then model plans for moving less-active data to lower-cost archival storage, such as in the cloud. ... Storage teams must mitigate ransomware risks associated with file data. One way to do this is by implementing hybrid tiering strategies that offload infrequently accessed (cold) files to immutable cloud storage, which reduces the active attack surface by as much as 70 or 80 percent. Immutable storage ensures that once data is written, it cannot be altered or deleted, providing a robust defense against ransomware attempts to encrypt or corrupt files.
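
A very rough sketch of that kind of cost model is shown below: it projects annual spend from growth, applies a backup/DR multiplier to the hot tier, and prices cold data on an archival tier. All unit costs, the 3x multiplier's application, and the example volumes are assumptions for illustration, not benchmarks from the article.

```python
# Rough cost model: project storage spend with growth, a backup/DR multiplier,
# and the effect of tiering cold data to cheaper immutable archival storage.

def projected_annual_cost(active_tb, growth_rate, cold_fraction,
                          primary_cost_per_tb=250.0,     # $/TB-year, primary tier (assumed)
                          archive_cost_per_tb=30.0,      # $/TB-year, archival tier (assumed)
                          backup_dr_multiplier=3.0):     # backups + DR can be ~3x spend
    total_tb = active_tb * (1 + growth_rate)
    hot_tb = total_tb * (1 - cold_fraction)
    cold_tb = total_tb * cold_fraction
    hot_cost = hot_tb * primary_cost_per_tb * backup_dr_multiplier
    cold_cost = cold_tb * archive_cost_per_tb             # immutable archive, no 3x copies
    return hot_cost + cold_cost

# Example: 500 TB today, 30% annual growth, 70% of data is cold and tierable.
print(f"${projected_annual_cost(500, 0.30, 0.70):,.0f} per year")
```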