Daily Tech Digest - August 09, 2025


Quote for the day:

“Develop success from failures. Discouragement and failure are two of the surest stepping stones to success.” -- Dale Carnegie


Is ‘Decentralized Data Contributor’ the Next Big Role in the AI Economy?

Training AI models requires real-world, high-quality, and diverse data. The problem is that the astronomical demand is slowly outpacing the available sources. Take public datasets as an example. Not only is this data overused, but it’s often restricted to avoid privacy or legal concerns. There’s also a huge issue with geographic or spatial data gaps, where the information is incomplete for specific regions, which can and will lead to inaccuracies or biases in AI models. Decentralized contributors can help address these challenges. ... Even though a large part of the world’s population has no problem with passively sharing data when browsing the web, due to the relative infancy of decentralized systems, active data contribution may seem to many like a bridge too far. Anonymized data isn’t 100% safe. Determined threat actors can sometimes re-identify individuals from anonymized datasets. The concern is valid, which is why decentralized projects working in the field must adopt privacy-by-design architectures, where privacy is a core part of the system instead of being layered on top after the fact. Zero-knowledge proofs are another technique that can reduce privacy risks by allowing contributors to prove the validity of their data without exposing any information: for example, demonstrating that their identity meets set criteria without divulging anything identifiable.
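
As a loose illustration of the commit-then-prove idea behind such schemes (a simplified sketch, not a real zero-knowledge system — the function names and values are invented here), a hash commitment lets a contributor bind themselves to a value up front without revealing it:

```python
import hashlib
import os

def commit(value: str) -> tuple[str, bytes]:
    """Bind to a value without revealing it: publish the digest, keep the salt."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + value.encode()).hexdigest()
    return digest, salt

def verify(digest: str, salt: bytes, claimed_value: str) -> bool:
    """Check that a revealed value matches the earlier commitment."""
    return hashlib.sha256(salt + claimed_value.encode()).hexdigest() == digest

# A contributor commits to a birth year; the verifier later checks the opening.
# A true zero-knowledge proof would go further and let them prove a predicate
# (e.g. "over 18") without ever revealing the year -- this stops at commit/reveal.
digest, salt = commit("1990")
assert verify(digest, salt, "1990")
assert not verify(digest, salt, "2001")
```

Real deployments use dedicated proof systems rather than bare commitments, but the trust model is the same: the verifier learns that a claim holds, not the underlying data.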


The ROI of Governance: Nithesh Nekkanti on Taming Enterprise Technical Debt

A key symptom of technical debt is rampant code duplication, which inflates maintenance efforts and increases the risk of bugs. A multi-pronged strategy focused on standardization and modularity proved highly effective, leading to a 30% reduction in duplicated code. This initiative went beyond simple syntax rules to forge a common development language, defining exhaustive standards for Apex and Lightning Web Components. By measuring metrics like technical debt density, teams can effectively track the health of their codebase as it evolves. ... Developers may perceive stricter quality gates as a drag on velocity, and the task of addressing legacy code can seem daunting. Overcoming this resistance requires clear communication and a focus on the long-term benefits. "Driving widespread adoption of comprehensive automated testing and stringent code quality tools invariably presents cultural and operational challenges," Nekkanti acknowledges. The solution was to articulate a compelling vision. ... Not all technical debt is created equal, and a mature governance program requires a nuanced approach to prioritization. The PEC developed a technical debt triage framework to systematically categorize issues based on type, business impact, and severity. This structured process is vital for managing a complex ecosystem, where a formal Technical Governance Board (TGB) can use data to make informed decisions about where to invest resources.
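
A metric like technical debt density is simple to compute; the sketch below (names and figures are illustrative, not from the article) divides open debt items by thousands of lines of code so codebases of different sizes can be compared on equal footing:

```python
def debt_density(open_debt_items: int, lines_of_code: int) -> float:
    """Technical-debt density: open debt items per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return open_debt_items / (lines_of_code / 1000)

# A large codebase can carry more raw debt yet be healthier per KLOC.
legacy = debt_density(open_debt_items=240, lines_of_code=80_000)  # 3.0 per KLOC
modern = debt_density(open_debt_items=45, lines_of_code=30_000)   # 1.5 per KLOC
assert legacy > modern
```

Tracking this ratio over releases, rather than the absolute count, shows whether the codebase is actually getting healthier as it grows.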


Why Third-Party Risk Management (TPRM) Can’t Be Ignored in 2025

In today’s business world, no organization operates in a vacuum. We rely on vendors, suppliers, and contractors to keep things running smoothly. But every connection brings risk. Just recently, Fortinet made headlines as threat actors were found maintaining persistent access to FortiOS and FortiProxy devices using known vulnerabilities—while another actor allegedly offered a zero-day exploit for FortiGate firewalls on a dark web forum. These aren’t just IT problems—they’re real reminders of how vulnerabilities in third-party systems can open the door to serious cyber threats, regulatory headaches, and reputational harm. That’s why Third-Party Risk Management (TPRM) has become a must-have, not a nice-to-have. ... Think of TPRM as a structured way to stay on top of the risks your third parties, suppliers and vendors might expose you to. It’s more than just ticking boxes during onboarding—it’s an ongoing process that helps you monitor your partners’ security practices, compliance with laws, and overall reliability. From cloud service providers, logistics partners, and contract staff to software vendors, IT support providers, marketing agencies, payroll processors, data analytics firms, and even facility management teams—if they have access to your systems, data, or customers, they’re part of your risk surface. 


Ushering in a new era of mainframe modernization

One of the key challenges in modern IT environments is integrating data across siloed systems. Mainframe data, despite being some of the most valuable in the enterprise, often remains underutilized due to accessibility barriers. With a z17 foundation, software data solutions can more easily bridge critical systems, offering unprecedented data accessibility and observability. For CIOs, this is an opportunity to break down historical silos and make real-time mainframe data available across cloud and distributed environments without compromising performance or governance. As data becomes more central to competitive advantage, the ability to bridge existing and modern platforms will be a defining capability for future-ready organizations. ... For many industries, mainframes continue to deliver unmatched performance, reliability, and security for mission-critical workloads—capabilities that modern enterprises rely on to drive digital transformation. Far from being outdated, mainframes are evolving through integration with emerging technologies like AI, automation, and hybrid cloud, enabling organizations to modernize without disruption. With decades of trusted data and business logic already embedded in these systems, mainframes provide a resilient foundation for innovation, ensuring that enterprises can meet today’s demands while preparing for tomorrow’s challenges.


Fighting Cyber Threat Actors with Information Sharing

Effective threat intelligence sharing creates exponential defensive improvements that extend far beyond individual organizational benefits. It not only raises the cost and complexity for attackers but also lowers their chances of success. Information Sharing and Analysis Centers (ISACs) demonstrate this multiplier effect in practice. ISACs are, essentially, non-profit organizations that provide companies with timely intelligence and real-world insights, helping them boost their security. The success of existing ISACs has also driven expansion efforts, with 26 U.S. states adopting the NAIC Model Law to encourage information sharing in the insurance sector. ... Although the benefits of information sharing are clear, actually implementing them is a different story. Common obstacles include legal issues regarding data disclosure, worries over revealing vulnerabilities to competitors, and the technical challenge itself – evidently, devising standardized threat intelligence formats is no walk in the park. And yet it can certainly be done. Case in point: the above-mentioned partnership between CrowdStrike and Microsoft. Its success hinges on its well-thought-out governance system, which allows these two business rivals to collaborate on threat attribution while protecting their proprietary techniques and competitive advantages. 


The Ultimate Guide to Creating a Cybersecurity Incident Response Plan

Creating a fit-for-purpose cyber incident response plan isn’t easy. However, by adopting a structured approach, you can ensure that your plan is tailored for your organisational risk context and will actually help your team manage the chaos that follows a cyber attack. In our experience, following a step-by-step process for building a robust IR plan always works. Instead of jumping straight into creating a plan, it’s best to lay a strong foundation with training and risk assessment and then work your way up. ... Conducting a cyber risk assessment before creating a Cybersecurity Incident Response Plan is critical. Every business has different assets, systems, vulnerabilities, and exposure to risk. A thorough risk assessment identifies what assets need the most protection. The assets could be customer data, intellectual property, or critical infrastructure. You’ll be able to identify where the most likely entry points for attackers may be. This insight ensures that the incident response plan is tailored and focused on the most pressing risks instead of being a generic checklist. A risk assessment will also help you define the potential impact of various cyber incidents on your business. You can prioritise response strategies based on what incidents would be most damaging. Without this step, response efforts may be misaligned or inadequate in the face of a real threat.
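
One common way to turn a risk assessment into response priorities (a generic likelihood-times-impact sketch, not the article's specific method; the assets and scores are invented) is to rank each scenario before writing the plan:

```python
# Score each identified risk by likelihood x impact (both on a 1-5 scale),
# then rank so the response plan addresses the most damaging scenarios first.
risks = [
    {"asset": "customer data",  "scenario": "ransomware",       "likelihood": 4, "impact": 5},
    {"asset": "public website", "scenario": "defacement",       "likelihood": 3, "impact": 2},
    {"asset": "payroll system", "scenario": "credential theft", "likelihood": 2, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["asset"]}: {r["scenario"]}')
```

Even a crude matrix like this forces the conversation the article recommends: which incidents would hurt most, and does the plan's ordering reflect that?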


How to Become the Leader Everyone Trusts and Follows With One Skill

Leaders grounded in reason have a unique ability: they can take complex situations and make sense of them. They look beyond the surface to find meaning and use logic as their compass. They're able to spot patterns others might miss and make clear distinctions between what's important and what's not. Instead of being guided by emotion, they base their decisions on credibility, relevance and long-term value. ... The ego doesn't like reason. It prefers control, manipulation and being right. At its worst, it twists logic to justify itself or dominate others. Some leaders use data selectively or speak in clever soundbites, not to find truth but to protect their image or gain power. But when a leader chooses reason, something shifts. They let go of defensiveness and embrace objectivity. They're able to mediate fairly, resolve conflicts wisely and make decisions that benefit the whole team, not just their own ego. This mindset also breaks down the old power structures. Instead of leading through authority or charisma, leaders at this level influence through clarity, collaboration and solid ideas. ... Leaders who operate from reason naturally elevate their organizations. They create environments where logic, learning and truth are not just considered as values, they're part of the culture. This paves the way for innovation, trust and progress.


Why enterprises can’t afford to ignore cloud optimization in 2025

Cloud computing has long been the backbone of modern digital infrastructure, primarily built around general-purpose computing. However, the era of one-size-fits-all cloud solutions is rapidly fading in a business environment increasingly dominated by AI and high-performance computing (HPC) workloads. Legacy cloud solutions struggle to meet the computational intensity of deep learning models, preventing organizations from fully realizing the benefits of their investments. At the same time, cloud-native architectures have become the standard, as businesses face mounting pressure to innovate, reduce time-to-market, and optimize costs. Without a cloud-optimized IT infrastructure, organizations risk losing key operational advantages—such as maximizing performance efficiency and minimizing security risks in a multi-cloud environment—ultimately negating the benefits of cloud-native adoption. Moreover, running AI workloads at scale without an optimized cloud infrastructure leads to unnecessary energy consumption, increasing both operational costs and environmental impact. This inefficiency strains financial resources and undermines corporate sustainability goals, which are now under greater scrutiny from stakeholders who prioritize green initiatives.


Data Protection for Whom?

To be clear, there is no denying that a robust legal framework for protecting privacy is essential. In the absence of such protections, both rich and poor citizens face exposure to fraud, data theft and misuse. Personal data leakages – ranging from banking details to mobile numbers and identity documents – are rampant, and individuals are routinely subjected to financial scams, unsolicited marketing and phishing attacks. Often, data collected for one purpose – such as KYC verification or government scheme registration – finds its way into other hands without consent. ... The DPDP Act, in theory, establishes strong penalties for violations. However, the enforcement mechanisms under the Act are opaque. The composition and functioning of the Data Protection Board – a body tasked with adjudicating complaints and imposing penalties – are entirely controlled by the Union government. There is no independent appointments process, no safeguards against arbitrary decision-making, and no clear procedure for appeals. Moreover, there is a genuine worry that smaller civil society initiatives – such as grassroots surveys, independent research and community-based documentation efforts – will be priced out of existence. The compliance costs associated with data processing under the new framework, including consent management, data security audits and liability for breaches, are likely to be prohibitive for most non-profit and community-led groups.


Stargate’s slow start reveals the real bottlenecks in scaling AI infrastructure

“Scaling AI infrastructure depends less on the technical readiness of servers or GPUs and more on the orchestration of distributed stakeholders — utilities, regulators, construction partners, hardware suppliers, and service providers — each with their own cadence and constraints,” Gogia said. ... Mazumder warned that “even phased AI infrastructure plans can stall without early coordination” and advised that “enterprises should expect multi-year rollout horizons and must front-load cross-functional alignment, treating AI infra as a capital project, not a conventional IT upgrade.” ... Given the lessons from Stargate’s delays, analysts recommend a pragmatic approach to AI infrastructure planning. Rather than waiting for mega-projects to mature, Mazumder emphasized that “enterprise AI adoption will be gradual, not instant and CIOs must pivot to modular, hybrid strategies with phased infrastructure buildouts.” ... The solution is planning for modular scaling by deploying workloads in hybrid and multi-cloud environments so progress can continue even when key sites or services lag. ... For CIOs, the key lesson is to integrate external readiness into planning assumptions, create coordination checkpoints with all providers, and avoid committing to go-live dates that assume perfect alignment.

Daily Tech Digest - August 08, 2025


Quote for the day:

“Every adversity, every failure, every heartache carries with it the seed of an equal or greater benefit.” -- Napoleon Hill


Major Enterprise AI Assistants Can Be Abused for Data Theft, Manipulation

In the case of Copilot Studio agents that engage with the internet — over 3,000 instances have been found — the researchers showed how an agent could be hijacked to exfiltrate information that is available to it. Copilot Studio is used by some organizations for customer service, and Zenity showed how it can be abused to obtain a company’s entire CRM. When Cursor is integrated with Jira MCP, an attacker can create malicious Jira tickets that instruct the AI agent to harvest credentials and send them to the attacker. This is dangerous in the case of email systems that automatically open Jira tickets — hundreds of such instances have been found by Zenity. In a demonstration targeting Salesforce’s Einstein, the attacker can target instances with case-to-case automations — again hundreds of instances have been found. The threat actor can create malicious cases on the targeted Salesforce instance that hijack Einstein when they are processed by it. The researchers showed how an attacker could update the email addresses for all cases, effectively rerouting customer communication through a server they control. In a Gemini attack demo, the experts showed how prompt injection can be leveraged to get the gen-AI tool to display incorrect information. 


Who’s Leading Whom? The Evolving Relationship Between Business and Data Teams

As the data boom matured, organizations realized that clear business questions weren’t enough. If we wanted analytics to drive value, we had to build stronger technical teams, including data scientists and machine learning engineers. And we realized something else: we had spent years telling business leaders they needed a working knowledge of data science. Now we had to tell data scientists they needed a working knowledge of the business. This shift in emphasis was necessary, but it didn’t go perfectly. We had told the data teams to make their work useful, usable, and used, and they took that mandate seriously. But in the absence of clear guidance and shared norms, they filled in the gap in ways that didn’t always move the business forward. ... The foundation of any effective business-data partnership is a shared understanding of what actually counts as evidence. Without it, teams risk offering solutions that don’t stand up to scrutiny, don’t translate into action, or don’t move the business forward. A shared burden of proof makes sure that everyone is working from the same assumptions about what’s convincing and credible. This shared commitment is the foundation that allows the organization to decide with clarity and confidence. 


A new worst coder has entered the chat: vibe coding without code knowledge

A clear disconnect then stood out to me between the vibe coding of this app and the actual practiced work of coding. Because this app existed solely as an experiment for myself, the fact that it didn’t work so well and the code wasn’t great didn’t really matter. But vibe coding isn’t being touted as “a great use of AI if you’re just mucking about and don’t really care.” It’s supposed to be a tool for developer productivity, a bridge for nontechnical people into development, and someday a replacement for junior developers. That was the promise. And, sure, if I wanted to, I could probably take the feedback from my software engineer pals and plug it into Bolt. One of my friends recommended adding “descriptive class names” to help with the readability, and it took almost no time for Bolt to update the code.  ... The mess of my code would be a problem in any of those situations. Even though I made something that worked, did it really? Had this been a real work project, a developer would have had to come in after the fact to clean up everything I had made, lest future developers be lost in the mayhem of my creation. This is called the “productivity tax,” the biggest frustration that developers have with AI tools, because they spit out code that is almost—but not quite—right.


From WAF to WAAP: The Evolution of Application Protection in the API Era

The most dangerous attacks often use perfectly valid API calls arranged in unexpected sequences or volumes. API attacks don't break the rules. Instead, they abuse legitimate functionality by understanding the business logic better than the developers who built it. Advanced attacks differ from traditional web threats. For example, an SQL injection attempt looks syntactically different from legitimate input, making it detectable through pattern matching. However, an API attack might consist of perfectly valid requests that individually pass all schema validation tests, with the malicious intent emerging only from their sequence, timing, or cross-endpoint correlation patterns. ... The strategic value of WAAP goes well beyond just keeping attackers out. It's becoming a key enabler for faster, more confident API development cycles. Think about how your API security works today — you build an endpoint, then security teams manually review it, continuous penetration testing breaks it, you fix it, and around and around you go. This approach inevitably creates friction between velocity and security. Through continuous visibility and protection, WAAP allows development teams to focus on building features rather than manually hardening each API endpoint. Hence, you can shift the traditional security bottleneck into a security enablement model.


Scrutinizing LLM Reasoning Models

Assessing CoT quality is an important step towards improving reasoning model outcomes. Other efforts attempt to grasp the core cause of reasoning hallucination. One theory suggests the problem starts with how reasoning models are trained. Among other training techniques, LLMs go through multiple rounds of reinforcement learning (RL), a form of machine learning that teaches the difference between desirable and undesirable behavior through a point-based reward system. During the RL process, LLMs learn to accumulate as many positive points as possible, with “good” behavior yielding positive points and “bad” behavior yielding negative points. While RL is used on non-reasoning LLMs, a large amount of it seems to be necessary to incentivize LLMs to produce CoT, which means that reasoning models generally receive more of it. ... If optimizing for CoT length leads to confused reasoning or inaccurate answers, it might be better to incentivize models to produce shorter CoT. This is the intuition that inspired researchers at Wand AI to see what would happen if they used RL to encourage conciseness and directness rather than verbosity. Across multiple experiments conducted in early 2025, Wand AI’s team discovered a “natural correlation” between CoT brevity and answer accuracy, challenging the widely held notion that the additional time and compute required to create long CoT leads to better reasoning outcomes.
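
The conciseness incentive explored in that work can be captured in a toy reward-shaping function (the form and coefficient here are illustrative assumptions, not the researchers' exact objective): correct answers earn a reward that decays with chain-of-thought length, so the policy is pushed toward short, correct reasoning rather than verbosity.

```python
def shaped_reward(correct: bool, cot_tokens: int, penalty: float = 0.001) -> float:
    """Reward correctness, minus a mild penalty per chain-of-thought token.

    A wrong answer earns nothing, so the model cannot game the objective
    by emitting an empty chain of thought and guessing.
    """
    if not correct:
        return 0.0
    return max(0.0, 1.0 - penalty * cot_tokens)

# Under this shaping, a short correct answer outscores a rambling correct one.
assert shaped_reward(True, cot_tokens=200) > shaped_reward(True, cot_tokens=900)
assert shaped_reward(False, cot_tokens=50) == 0.0
```

The design choice matters: penalizing length only on correct answers encourages brevity without rewarding confident wrong answers, which is one way to reconcile conciseness with accuracy.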


4 regions you didn't know already had age verification laws – and how they're enforced

Australia’s 2021 Online Safety Act was less focused on restricting access to adult content than it was on tackling issues of cyberbullying and online abuse of children, especially on social media platforms. The act introduced a legal framework to allow people to request the removal of hateful and abusive content,  ... Chinese law has required online service providers to implement a real-name registration system for over a decade. In 2012, the Decision on Strengthening Network Information Protection was passed, before being codified into law in 2016 as the Cybersecurity Law. The legislation requires online service providers to collect users’ real names, ID numbers, and other personal information. ... As with the other laws we’ve looked at, COPPA has its fair share of critics and opponents, and has been criticized as being both ineffective and unconstitutional by experts. Critics claim that it encourages users to lie about their age to access content, and allows websites to sidestep the need for parental consent. ... In 2025, the European Commission took the first steps towards creating an EU-wide strategy for age verification on websites when it released a prototype app for a potential age verification solution called a mini wallet, which is designed to be interoperable with the EU Digital Identity Wallet scheme.


The AI-enabled company of the future will need a whole new org chart

Let’s say you’ve designed a multi-agent team of AI products. Now you need to integrate them into your company by aligning them with your processes, values and policies. Of course, businesses onboard people all the time – but not usually 50 different roles at once. Clearly, the sheer scale of agentic AI presents its own challenges. Businesses will need to rely on a really tight onboarding process. The role of the agent onboarding lead creates the AI equivalent of an employee handbook: spelling out what agents are responsible for, how they escalate decisions, and where they must defer to humans. They’ll define trust thresholds, safe deployment criteria, and sandbox environments for gradual rollout. ... Organisational change rarely fails on capability – it fails on culture. The AI Culture & Collaboration Officer protects the human heartbeat of the company through a time of radical transition. As agents take on more responsibilities, human employees risk losing a sense of purpose, visibility, or control. The culture officer will continually check how everyone feels about the transition. This role ensures collaboration rituals evolve, morale stays intact, and trust is continually monitored — not just in the agents, but in the organisation’s direction of travel. It’s a future-facing HR function with teeth.


The Myth of Legacy Programming Languages: Age Doesn't Define Value

Instead of trying to define legacy languages based on one or two subjective criteria, a better approach is to consider the wide range of factors that may make a language count as legacy or not. ... Languages may be considered legacy when no one is still actively developing them — meaning the language standards cease receiving updates, often along with complementary resources like libraries and compilers. This seems reasonable because when a language ceases to be actively maintained, it may stop working with modern hardware platforms. ... Distinguishing between legacy and modern languages based on their popularity may also seem reasonable. After all, if few coders are still using a language, doesn't that make it legacy? Maybe, but there are a couple of complications to consider. One is that measuring the popularity of programming languages in a highly accurate way is impossible — so just because one authority deems a language to be unpopular doesn't necessarily mean developers hate it. The other challenge is that when a language becomes unpopular, it tends to mean that developers no longer prefer it for writing new applications. ... Programming languages sometimes end up in the "legacy" bin when they are associated with other forms of legacy technology — or when they lack associations with more "modern" technologies.


From Data Overload to Actionable Insights: Scaling Viewership Analytics with Semantic Intelligence

Semantic intelligence allows users to find reliable and accurate answers, irrespective of the terminology used in a query. They can interact freely with data and discover new insights by navigating massive databases, which previously required specialized IT involvement, in turn, reducing the workload of already overburdened IT teams. At its core, semantic intelligence lays the foundation for true self-serve analytics, allowing departments across an organization to confidently access information from a single source of truth. ... A semantic layer in this architecture lets you query data in a way that feels natural and enables you to get relevant and precise results. It bridges the gap between complex data structures and user-friendly access. This allows users to ask questions without any need to understand the underlying data intricacies. Standardized definitions and context across the sources streamlines analytics and accelerates insights using any BI tool of choice. ... One of the core functions of semantic intelligence is to standardize definitions and provide a single source of truth. This improves overall data governance with role-based access controls and robust security at all levels. In addition, row- and column-level security at both user and group levels can ensure that access to specific rows is restricted for specific users. 
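
At its simplest, a semantic layer is a governed mapping from business terms to physical SQL plus access rules applied to every query; the schema, metric names, and roles below are invented for illustration, not the architecture described above.

```python
# Hypothetical semantic layer: business terms resolve to shared SQL definitions,
# and row-level security appends a filter based on the caller's role.
METRICS = {
    "total_viewers": "COUNT(DISTINCT viewer_id)",
    "watch_hours": "SUM(watch_seconds) / 3600.0",
}
ROW_FILTERS = {
    "regional_analyst": "region = :user_region",  # row-level security predicate
    "admin": None,                                # unrestricted access
}

def build_query(metric: str, table: str, role: str) -> str:
    """Translate a business question into SQL with governance baked in."""
    expr = METRICS[metric]          # one shared definition = single source of truth
    rls = ROW_FILTERS[role]
    where = f" WHERE {rls}" if rls else ""
    return f"SELECT {expr} AS {metric} FROM {table}{where}"

print(build_query("watch_hours", "viewership", "regional_analyst"))
```

Because every caller goes through the same metric definitions and filters, "watch hours" means the same thing in every dashboard, and an analyst can only ever see rows their role permits.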


Why VAPT is now essential for small & medium business security

One misconception, often held by smaller companies, is that they are less likely to be targeted. Industry experts disagree. "You might think, 'Well, we're a small company. Who'd want to hack us?' But here's the hard truth: Cybercriminals love easy targets, and small to medium businesses often have the weakest defences," states a representative from Borderless CS. VAPT combines two different strategies to identify vulnerabilities and potential entry points before malicious actors do. A Vulnerability Assessment scans servers, software, and applications for known problems in a manner similar to a security walkthrough of a physical building. Penetration Testing (often shortened to pen testing) simulates real attacks, enabling businesses to understand how a determined attacker might breach their systems. ... Borderless CS maintains that VAPT is applicable across sectors. "Retail businesses store customer data and payment info. Healthcare providers hold sensitive patient information. Service companies often rely on cloud tools and email systems that are vulnerable. Even a small eCommerce store can be a jackpot for the wrong person. Cyber attackers don't discriminate. In fact, they often prefer smaller businesses because they assume you haven't taken strong security measures. Let's not give them that satisfaction."

Daily Tech Digest - August 07, 2025


Quote for the day:

"Do the difficult things while they are easy and do the great things while they are small." -- Lao Tzu


Data neutrality: Safeguarding your AI’s competitive edge

“At the bottom there is a computational layer, such as the NVIDIA GPUs, anyone who provides the infrastructure for running AI. The next few layers are software-oriented, but also impacts infrastructure as well. Then there’s security and the data that feeds the models and those that feeds the applications. And on top of that, there’s the operational layer, which is how you enable data operations for AI. Data being so foundational means that whoever works with that layer is essentially holding the keys to the AI asset, so, it’s imperative that anything you do around data has to have a level of trust and data neutrality.” ... The risks in having common data infrastructure, particularly with those that are direct or indirect competitors, are significant. When proprietary training data is transplanted to another platform or service of a competitor, there is always an implicit, but frequently subtle, risk that proprietary insights, unique patterns of data or even the operational data of an enterprise will be accidentally shared. ... These trends in the market have precipitated the need for “sovereign AI platforms”– controlled spaces where companies have complete control over their data, models and the overall AI pipeline for development without outside interference.


The problem with AI agent-to-agent communication protocols

Some will say, “Competition breeds innovation.” That’s the party line. But for anyone who’s run a large IT organization, it means increased integration work, risk, cost, and vendor lock-in—all to achieve what should be the technical equivalent of exchanging a business card. Let’s not forget history. The 90s saw the rise and fall of CORBA and DCOM, each claiming to be the last word in distributed computing. The 2000s blessed us with WS-* (the asterisk is a wildcard because the number of specs was infinite), most of which are now forgotten. ... The truth: When vendors promote their own communication protocols, they build silos instead of bridges. Agents trained on one protocol can’t interact seamlessly with those speaking another dialect. Businesses end up either locking into one vendor’s standard, writing costly translation layers, or waiting for the market to move on from this round of wheel reinvention. ... We in IT love to make simple things complicated. The urge to create a universal, infinitely extensible, plug-and-play protocol is irresistible. But the real-world lesson is that 99% of enterprise agent interaction can be handled with a handful of message types: request, response, notify, error. The rest—trust negotiation, context passing, and the inevitable “unknown unknowns”—can be managed incrementally, so long as the basic messaging is interoperable.
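
That "handful of message types" can be sketched as a minimal envelope; the field names here are an assumption for illustration, not any vendor's protocol.

```python
from dataclasses import asdict, dataclass, field
import json
import uuid

VALID_TYPES = {"request", "response", "notify", "error"}

@dataclass
class Envelope:
    """Minimal interoperable agent message: four types cover most traffic."""
    type: str
    sender: str
    recipient: str
    payload: dict
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def __post_init__(self):
        if self.type not in VALID_TYPES:
            raise ValueError(f"unknown message type: {self.type}")

    def to_json(self) -> str:
        return json.dumps(asdict(self))

req = Envelope("request", "billing-agent", "crm-agent",
               {"op": "lookup", "account": "A-123"})
resp = Envelope("response", "crm-agent", "billing-agent",
                {"status": "ok"}, correlation_id=req.correlation_id)
assert resp.correlation_id == req.correlation_id  # replies correlate to requests
```

Everything harder — trust negotiation, context passing — can be layered into `payload` incrementally, which is exactly the point: interoperability first, extensions later.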


Agents or Bots? Making Sense of AI on the Open Web

The difference between automated crawling and user-driven fetching isn't just technical—it's about who gets to access information on the open web. When Google's search engine crawls to build its index, that's different from when it fetches a webpage because you asked for a preview. Google's "user-triggered fetchers" prioritize your experience over robots.txt restrictions because these requests happen on your behalf. The same applies to AI assistants. When Perplexity fetches a webpage, it's because you asked a specific question requiring current information. The content isn't stored for training—it's used immediately to answer your question. ... An AI assistant works just like a human assistant. When you ask an AI assistant a question that requires current information, they don’t already know the answer. They look it up for you in order to complete whatever task you’ve asked. On Perplexity and all other agentic AI platforms, this happens in real-time, in response to your request, and the information is used immediately to answer your question. It's not stored in massive databases for future use, and it's not used to train AI models. User-driven agents only act when users make specific requests, and they only fetch the content needed to fulfill those requests. This is the fundamental difference between a user agent and a bot.
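
The crawl-versus-fetch split can be sketched with the standard library's robots.txt parser (the policy distinction is this article's framing; the crawler name and rules below are a generic illustration): a bulk crawler checks permission before indexing, while a user-triggered fetch is made on a specific person's behalf.

```python
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: ExampleCrawler
Disallow: /articles/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def crawl_allowed(url: str) -> bool:
    """Bulk indexing: defer to the site's robots.txt before fetching."""
    return rp.can_fetch("ExampleCrawler", url)

def user_triggered_fetch(url: str) -> bool:
    """A fetch made on one user's explicit request: the content is used
    immediately to answer that user, not stored for training, so such
    fetchers commonly treat robots.txt as a directive for crawlers rather
    than for this kind of retrieval."""
    return True  # proceed on the user's behalf

url = "https://example.com/articles/today"
assert crawl_allowed(url) is False          # crawler must skip this path
assert user_triggered_fetch(url) is True    # user request still served
```

The code makes the asymmetry in the article concrete: the same URL can be off-limits to an index-building crawler yet legitimately retrievable when a person explicitly asks for it.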


The Increasing Importance of Privacy-By-Design

Today’s data landscape is evolving at breakneck speed. With the explosion of IoT devices, AI-powered systems, and big data analytics, the volume and variety of personal data collected have skyrocketed. This means more opportunities for breaches, misuse, and regulatory headaches. And let’s not forget that consumers are savvier than ever about privacy risks – they want to know how their data is handled, shared, and stored. ... Integrating Privacy-By-Design into your development process doesn’t require reinventing the wheel; it simply demands a mindset shift and a commitment to building privacy into every stage of the lifecycle. From ideation to deployment, developers and product teams need to ask: How are we collecting, storing, and using data? ... Privacy teams need to work closely with developers, legal advisors, and user experience designers to ensure that privacy features do not compromise usability or performance. This balance can be challenging to achieve, especially in fast-paced development environments where deadlines are tight and product launches are prioritized. Another common challenge is educating the entire team on what Privacy-By-Design actually means in practice. It’s not enough to have a single data protection champion in the company; the entire culture needs to shift toward valuing privacy as a key product feature.


Microsoft’s real AI challenge: Moving past the prototypes

Now, you can see that with Bing Chat, Microsoft was merely repeating an old pattern. The company invested in OpenAI early, then moved to quickly launch a consumer AI product with Bing Chat. It was the first AI search engine and the first big consumer AI experience aside from ChatGPT — which was positioned more as a research project and not a consumer tool at the time. Needless to say, things didn’t pan out. Despite using the tarnished Bing name and logo that would probably make any product seem less cool, Bing Chat and its “Sydney” persona had breakout viral success. But the company scrambled after Bing Chat behaved in unpredictable ways. Microsoft’s explanation doesn’t exactly make it better: “Microsoft did not expect people to have hours-long conversations with it that would veer into personal territory,” Yusuf Mehdi, a corporate vice president at the company, told NPR. In other words, Microsoft didn’t expect people would chat with its chatbot so much. Faced with that, Microsoft started instituting limits and generally making Bing Chat both less interesting and less useful. Under current CEO Satya Nadella, Microsoft is a different company than it was under Ballmer. The past doesn’t always predict the future. But it does look like Microsoft had an early, rough prototype — yet again — and then saw competitors surpass it.


Is confusion over tech emissions measurement stifling innovation?

If sustainability is becoming a bottleneck for innovation, then businesses need to take action. If a cloud provider cannot (or will not) disclose exact emissions per workload, that is a red flag. Procurement teams need to start asking tough questions, and when appropriate, walking away from vendors that will not answer them. Businesses also need to unite to push for the development of a global measurement standard for carbon accounting. Until regulators or consortia enforce uniform reporting standards, companies will keep struggling to compare different measurements and metrics. Finally, it is imperative that businesses rethink the way they see emissions reporting. Rather than it being a compliance burden, they need to grasp it as an opportunity. Get emissions tracking right, and companies can be upfront and authentic about their green credentials, which can reassure potential customers and ultimately generate new business opportunities. Measuring environmental impact can be messy right now, but the alternative of sticking with outdated systems because new ones feel "too risky" is far worse. The solution is more transparency, smarter tools, a collective push for accountability, and above all, working with the right partners that can deliver accurate emissions statistics.


Making sense of data sovereignty and how to regain it

Although the concept of sovereignty is subject to greater regulatory control, its practical implications are often misunderstood or oversimplified, resulting in it being frequently reduced to questions of data location or legal jurisdiction. In reality, however, sovereignty extends across technical, operational and strategic domains. In practice, these elements are difficult to separate. While policy discussions often centre on where data is stored and who can access it, true sovereignty goes further. For example, much of the current debate focuses on physical infrastructure and national data residency. While these are very important issues, they represent only one part of the overall picture. Sovereignty is not achieved simply by locating data in a particular jurisdiction or switching to a domestic provider, because without visibility into how systems are built, maintained and supported, location alone offers limited protection. ... Organisations that take it seriously tend to focus less on technical purity and more on practical control. That means understanding which systems are critical to ongoing operations, where decision-making authority sits and what options exist if a provider, platform or regulation changes. Clearly, there is no single approach that suits every organisation, but these core principles help set direction. 


Beyond PQC: Building adaptive security programs for the unknown

The lack of a timeline for a post-quantum world means that it doesn’t make sense to consider post-quantum as either a long-term or a short-term risk, but both. Practically, we can prepare for the threat of quantum technology today by deploying post-quantum cryptography to protect identities and sensitive data. This year is crucial for post-quantum preparedness, as organisations are starting to put quantum-safe infrastructure in place, and regulatory bodies are beginning to address the importance of post-quantum cryptography. ... CISOs should take steps now to understand their current cryptographic estate. Many organisations have developed a fragmented cryptographic estate without a unified approach to protecting and managing keys, certificates, and protocols. This lack of visibility opens increased exposure to cybersecurity threats. Understanding this landscape is a prerequisite for migrating safely to post-quantum cryptography. Another practical step you can take is to prepare your organisation for the impact of quantum computing on public key encryption. This has become more feasible with NIST’s release of quantum-resistant algorithms and the NCSC’s recently announced three-step plan for moving to quantum-safe encryption. Even if there is no pressing threat to your business, implementing a crypto-agile strategy will also ensure a smooth transition to quantum-resistant algorithms when they become mainstream.


Critical Zero-Day Bugs Crack Open CyberArk, HashiCorp Password Vaults

"Secret management is a good thing. You just have to account for when things go badly. I think many professionals think that by vaulting a credential, their job is done. In reality, this should be just the beginning of a broader effort to build a more resilient identity infrastructure." "You want to have high fault tolerance, and failover scenarios — break-the-glass scenarios for when compromise happens. There are Gartner guides on how to do that. There's a whole market for identity and access management (IAM) integrators which sells these types of preparing for doomsday solutions," he notes. It might ring unsatisfying — a bandage for a deeper-rooted problem. It's part of the reason why, in recent years, many security experts have been asking not just how to better protect secrets, but how to move past them to other models of authorization. "I know there are going to be static secrets for a while, but they're fading away," Tal says. "We should be managing [users], rather than secrets. We should be contextualizing behaviors, evaluating the kinds of identities and machines of users that are performing actions, and then making decisions based on their behavior, not just what secrets they hold. I think that secrets are not a bad thing for now, but eventually we're going to move to the next generation of identity infrastructure."


Strategies for Robust Engineering: Automated Testing for Scalable Software

AI and machine learning are reshaping software development, and testing must transform with it. The goal now goes beyond basic verification: we need to create testing systems that learn and grow as autonomous entities. Software quality should be viewed through a new perspective where testing functions as an intelligent system that adapts over time instead of remaining a collection of unchanging assertions. The future of software development will transform when engineering leaders move past traditional automated testing frameworks to create predictive AI-based test suites. The establishment of scalable engineering presents an exciting new direction that I am eager to lead. The time has arrived for software development teams to transform their automated testing strategies. Our testing systems should evolve from basic code verification into active improvement mechanisms. As applications become increasingly complex and dynamic, especially in distributed, cloud-native environments, test automation must keep pace. Predictive models, trained on historical failure patterns, can anticipate high-risk areas in codebases before issues emerge. Test coverage should be driven by real-time code behavior, user analytics, and system telemetry rather than static rule sets.
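As a toy illustration of prediction from historical failure patterns, the sketch below ranks tests by the failure history of the files they cover. The failure data and coverage map are invented; a real system would also use code churn, coverage analysis, and telemetry:

```python
from collections import Counter

# Each historical failure lists the source files implicated.
failure_history = [
    ["billing.py"],
    ["billing.py", "auth.py"],
    ["billing.py"],
    ["ui.py"],
]

# Risk score: how often each file appeared in a failure.
risk = Counter(f for failure in failure_history for f in failure)

tests_covering = {  # hypothetical coverage map: test -> files exercised
    "test_billing": ["billing.py"],
    "test_auth": ["auth.py"],
    "test_ui": ["ui.py"],
}

def prioritize(tests: dict) -> list:
    """Order tests so those covering historically failure-prone files run first."""
    return sorted(tests, key=lambda t: -max(risk[f] for f in tests[t]))

# billing.py has failed three times, so its test runs first.
assert prioritize(tests_covering)[0] == "test_billing"
```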

Daily Tech Digest - August 06, 2025


Quote for the day:

"What you do has far greater impact than what you say." -- Stephen Covey


“Man in the Prompt”: New Class of Prompt Injection Attacks Pairs With Malicious Browser Extensions to Issue Secret Commands to LLMs

The so-called “Man in the Prompt” attack presents two priority risks. One is to internal LLMs that store sensitive company data and personal information, in the belief that it is appropriately fenced off from other software and apps. The other risk comes from particular LLMs that are broadly integrated into workspaces, such as Google Gemini’s interaction with Google Workspace tools like Mail and Docs. This category of prompt injection attack is not limited to a particular type of browser extension, nor to a particular model or deployment of LLM. And the malicious extension requires no special permissions to work, given that DOM access already provides everything it needs. ... The other proof-of-concept targets Google Gemini, and by extension any elements of Google Workspace it has been integrated with. Gemini is meant to automate routine and tedious tasks in Workspace such as email responses, document editing and updating contacts. The trouble is that it has almost complete access to the contents of these accounts as well as anything the user has access permission for or has had shared with them by someone else. Prompt injection attacks conducted by these extensions can not only steal the contents of emails and documents with ease, but complex queries can be fed to the LLM to target particular types of data and file extensions; the autocomplete function can also be abused to enumerate available files.


EU seeks more age verification transparency amid contentious debate

The EU is considering setting minimum requirements for online platforms to disclose their use of age verification or age estimation tools in their terms and conditions. The obligation is contained in a new compromise draft text of the EU’s proposed law on detecting and removing online child sex abuse material (CSAM), dated July 24 and seen by MLex. A discussion of the proposal, which contains few other changes to a previous draft, is scheduled for September 12. The text also calls for online platforms to perform mandatory scans for CSAM, which critics say could result in false positives and break end-to-end encryption. ... The way age verification is set to work under the OSA is described as a “privacy nightmare” by PC Gamer, but the article stands in stark contrast to the vague posturing of the political class. Author Jacob Ridley acknowledges that double-blind methods of age assurance exist, including some that do not require any personal information at all to be shared with the website or app the individual is trying to access. At the same time, many age verification systems do not work this way. Also, age assurance pop-ups can be spoofed, and those spoofs could harvest a wealth of valuable personal information. Privado ID co-founder Evan McMullen calls it “like using a sledgehammer to crack a walnut.” McMullen, of course, prefers a decentralized approach that leans on zero-knowledge proofs (ZKPs).


AI Is Changing the Cybersecurity Game in Ways Both Big and Small

“People are rushing now to get [MCP] functionality while overlooking the security aspect,” he said. “But once the functionality is established and the whole concept of MCP becomes the norm, I would assume that security researchers will go in and essentially update and fix those security issues over time. But it will take a couple of years, and while that is taking time, I would advise you to run MCP somehow securely so that you know what’s going on.” Beyond the tactical security issues around MCP, there are bigger issues that are more strategic, more systemic in nature. They involve the big changes that large language models (LLMs) are bringing to the cybersecurity business and the things that organizations will have to do to protect themselves from AI-powered attacks in the future ... The sheer volume of threat data, some of which may be AI generated, demands more AI to be able to parse it and understand it, Sharma said. “It’s not humanly possible to do it by a SOC engineer or a vulnerability engineer or a threat engineer,” he said. Tuskira essentially functions as an AI-powered security analyst to detect traditional threats on IT systems as well as threats posed to AI-powered systems. Instead of using commercial AI models, Sharma adopted open-source foundation models running in private data centers. Developing AI tools to counter AI-powered security threats demands custom models, a lot of fine-tuning, and a data fabric that can maintain context of particular threats, he said.


AI burnout: A new challenge for CIOs

To take advantage of the benefits of smart tools and avoid overburdening the workforce, the board of directors must carefully manage their deployment. “As leaders, we must set clear limits, encourage training without overwhelming others, and open spaces for conversation about how people are experiencing this transition,” Blázquez says. “Technology must be an ally, not a threat, and the role of leadership will be key in that balance.” “It is recommended that companies take the first step. They must act from a preventative, humane, and structural perspective,” says De la Hoz. “In addition to all the human, ethical, and responsible components, it is in the company’s economic interest to maintain a happy, safe, and mission-focused workforce.” Regarding increasing personal productivity, he emphasizes the importance of “valuing their efforts, whether through higher salary returns or other forms of compensation.” ... From here, action must be taken, “implementing contingency plans to alleviate these areas.” One way: working groups, where the problems and barriers associated with technology can be analyzed. “From here, use these KPIs to change my strategy. Or to set it up, because often what happens is that I deploy the technology and forget how to get that technology adopted.” 


CIOs need a military mindset

While the battlefield feels very far away from the boardroom, this principle is something that CIOs can take on board when they’re tasked with steering a complex digital programme. Step back and clear the path so that you can trust your people to deliver; that’s when the real progress gets made. Contrary to popular belief, the military is not rigidly hierarchical. In fact, it teaches individuals to operate with autonomy within defined parameters. Officers set the boundaries of a mission and step back, allowing you to take full ownership of your actions. This approach is supported by the OODA Loop, a framework that cultivates awareness and decisive action under pressure. ... Resilience is perhaps the hardest leadership trait to teach and the most vital to embody. Military officers are taught to plan exhaustively, train rigorously, and prepare for all scenarios, but they’re also taught that ‘the first casualty of war is the plan.’ Adaptability under pressure is a non-negotiable mindset for you to adopt and instil in your team. When your team feels supported to grow, they stop fearing change and start responding to it; it is here that adaptability and resilience become second nature. There is also a practical opportunity to bring these principles in-house, as veterans transitioning out of the army may bring with them a refreshed leadership approach. Because they’re often confident under pressure and focused on outcomes, their transferrable skills allow them to thrive in the corporate world.


Backend FinOps: Engineering Cost-Efficient Microservices in the Cloud

Integrating cost management directly into Infrastructure-as-Code (IaC) frameworks such as Terraform enforces fiscal responsibility at the resource provisioning phase. By explicitly defining resource constraints and mandatory tagging, teams can preemptively mitigate orphaned cloud expenditures. ... Integrating cost awareness directly within Continuous Integration and Delivery (CI/CD) pipelines ensures proactive management of cloud expenditures throughout the development lifecycle. Tools such as Infracost automate the calculation of incremental cloud costs introduced by individual code changes. ... Cost-based pre-merge testing frameworks reinforce fiscal prudence by simulating peak-load scenarios prior to code integration. Automated tests measure critical metrics, including ninety-fifth percentile response times and estimated cost per ten thousand requests, to ensure compliance with established financial performance benchmarks. Pull requests failing predefined cost-efficiency criteria are systematically blocked. ... Comprehensive cost observability tools such as Datadog Cost Dashboards combine billing metrics with Application Performance Monitoring (APM) data, directly supporting operational and cost-related SLO compliance.
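A pre-merge cost gate of the kind described might look like the following sketch; the instance price, SLO thresholds, and load-test numbers are invented for illustration:

```python
# Hypothetical budgets for the gate; a real pipeline would pull these
# from configuration and from load-test telemetry.
HOURLY_INSTANCE_COST = 0.40      # USD per hour, assumed
MAX_P95_MS = 250                 # latency SLO
MAX_COST_PER_10K_REQ = 0.05      # fiscal SLO, USD

def cost_per_10k_requests(requests_per_hour: float) -> float:
    """Estimated spend to serve 10,000 requests at observed throughput."""
    return HOURLY_INSTANCE_COST / requests_per_hour * 10_000

def gate(p95_ms: float, requests_per_hour: float) -> bool:
    """Return True if the change may merge: both the latency SLO and the
    cost-efficiency benchmark must hold under simulated peak load."""
    return (p95_ms <= MAX_P95_MS and
            cost_per_10k_requests(requests_per_hour) <= MAX_COST_PER_10K_REQ)

# Peak-load simulation results for two hypothetical pull requests:
assert gate(p95_ms=180, requests_per_hour=120_000)     # passes both checks
assert not gate(p95_ms=180, requests_per_hour=50_000)  # $0.08/10k: blocked
```

Wiring such a check into CI alongside a tool like Infracost turns cost regressions into failing builds rather than end-of-month surprises.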


5 hard truths of a career in cybersecurity — and how to navigate them

Leadership and HR teams often gatekeep by focusing exclusively on candidates with certain educational degrees or specific credentials, typically from vendors such as Cisco, Juniper, or Palo Alto. Although Morrato finds this somewhat understandable given the high cost of hiring in cybersecurity, he believes this approach unfairly filters out capable individuals who, in a different era, would have had more opportunities. ... Because most team managers are promoted from technical roles, they often lack the leadership and interpersonal skills needed to foster healthy team cultures or manage stakeholder relationships effectively. This cultural disconnect has a tangible impact on individuals. “People who work in security functions don’t always feel safe — psychologically safe — doing so,” Budge explains. ... Cybersecurity teams must also rethink how they approach risk, as relying solely on strict, one-size-fits-all controls is no longer tenable, Mistry says. Instead, he advocates for a more adaptive, business-aligned framework that considers overall exposure rather than just technical vulnerabilities. “Can I live with this risk? Can I not live with this risk? Can I do something to reduce the risk? Can I offload the risk? And it’s a risk conversation, not a ‘speeds and feeds’ conversation,” he says, emphasizing that cybersecurity leaders must actively build relationships across the organization to make these conversations possible.


How AI amplifies these other tech trends that matter most to business in 2025

Agentic AI is an artificial intelligence system capable of independently planning and executing complex, multistep tasks. Built on foundation models, these agents can autonomously perform actions, communicate with one another, and adapt to new information. Significant advancements have emerged, from general agent platforms to specialized agents designed for deep research. ... Application-specific semiconductors are purpose-built chips optimized to perform specialized tasks. Unlike general-purpose semiconductors, they are engineered to handle specific workloads (such as large-scale AI training and inference tasks) while optimizing performance characteristics, including offering superior speed, energy efficiency, and performance. ... Cloud and edge computing involve distributing workloads across locations, from hyperscale remote data centers to regional hubs and local nodes. This approach optimizes performance by addressing factors such as latency, data transfer costs, data sovereignty, and data security. ... Quantum-based technologies use the unique properties of quantum mechanics to execute certain complex calculations exponentially faster than classical computers; secure communication networks; and produce sensors with higher sensitivity levels than their classical counterparts.


Differentiable Economics: Strategic Behavior, Mechanisms, and Machine Learning

Differentiable economics is related to but different from the recent progress in building agents that achieve super-human performance in combinatorial games such as chess and Go. First, economic games such as auctions, oligopoly competition, or contests typically have a continuous action space expressed in money, and opponents are modeled as draws from a prior distribution that has continuous support. Second, differentiable economics is focused on modeling and achieving equilibrium behavior. The second opportunity in differentiable economics is to use data-driven methods and machine learning to discover rules, constraints, and affordances—mechanisms—for economic environments that promote good outcomes in the equilibrium behavior of a system. Mechanism design solves the inverse problem of game theory, finding rules of strategic interaction such that agents in equilibrium will effect an outcome with desired properties. Where possible, mechanisms promote strong equilibrium solution concepts such as dominant strategy equilibria, making it strategically easy for agents to participate. Think of a series of bilateral negotiations between buyers and a seller that is replaced by an efficient auction mechanism with simple dominant strategies for agents to report their preferences truthfully. 
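The auction example can be made concrete with a second-price (Vickrey) auction, the textbook mechanism in which bidding one's true value is a dominant strategy:

```python
def vickrey(bids: dict) -> tuple:
    """Second-price auction: the highest bidder wins but pays only the
    second-highest bid, which makes truthful bidding dominant."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

bids = {"alice": 120, "bob": 100, "carol": 80}
winner, price = vickrey(bids)
assert (winner, price) == ("alice", 100)

# Shading below her true value of 120 never helps Alice: she either still
# wins at the same price, or loses the sale (and the surplus) entirely.
assert vickrey({"alice": 101, "bob": 100, "carol": 80}) == ("alice", 100)
assert vickrey({"alice": 99, "bob": 100, "carol": 80})[0] == "bob"
```

Because the price is set by the runner-up, a bidder's own bid only determines whether they win, never what they pay, which is precisely what makes the mechanism strategically easy to participate in.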


Ownership Mindset Drives Innovation: Milwaukee Tool CEO

“Empowerment was not a free-for-all,” Richman explained. In fact, the company recently changed the wording around its core values from “empowerment” to “extreme ownership” to reflect the importance of accountability for results. Emphasizing ownership can also help employees do what is best for the company as a whole rather than just their own teams, particularly when it comes to reallocating resources. ... Surprises and setbacks are an unavoidable cost of trying new things while innovating. Since organizations cannot avoid these issues, leaders and employees need to discuss them frankly and quickly enough to minimize the downside while seizing the upside. “[Being] candid is the most challenging cultural element of any company,” Richman said. “And we believe that it really leads to success or failure.” … In successful cultures, teams, people, parts of the organization can bring problems up and bring them up in a way to be able to say, ‘How are we going to rally the troops as one team, come together, fix it, and figure out why we got into this mess, and what are we going to do to not do it again?’” Candor is a two-way street. To build trust, leaders need to provide an honest assessment of the state of the company and the path forward — a “candid communication of where you are,” Richman said. 

Daily Tech Digest - August 05, 2025


Quote for the day:

"Let today be the day you start something new and amazing." -- Unknown


Convergence of Technologies Reshaping the Enterprise Network

"We are now at the epicenter of the transformation of IT, where AI and networking are converging," said Antonio Neri, president and CEO of HPE. "In addition to positioning HPE to offer our customers a modern network architecture alternative and an even more differentiated and complete portfolio across hybrid cloud, AI and networking, this combination accelerates our profitable growth strategy as we deepen our customer relevance and expand our total addressable market into attractive adjacent areas." Naresh Singh, senior director analyst at Gartner, told Information Security Media Group that the merger of two networking heavyweights would make the networking landscape interesting in the near future. ... Security vendors have long tackled cyberthreats through robust portfolios, including next-generation firewalls, endpoint security, secure access service edge, intrusion detection system or intrusion prevention system, software-defined wide area network and network security management. But the rise of AI and large language models has introduced new risks that demand a deeper transformation across people, processes and technology. As organizations recognize the need for a secure foundation, many are accelerating their AI adoption initiatives.


Blind spots at the top: Why leaders fail

You’ve stopped learning. Not because there’s nothing left to learn, but because your ego can’t handle starting from scratch again. You default to what worked five years ago. Meanwhile, your environment has moved on, your competitors have pivoted, and your team can smell the stagnation. Ultimately, you are an architect of resilience and trust. As Alvin Toffler warned, “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” ... Believing you’re always right is a shortcut to irrelevance. When you stop listening, you stop leading. You confuse confidence with competence and dominance with clarity. You bulldoze feedback and mistake silence for agreement. That silence? It’s fear. ... Stress is part of the job. But if every challenge sends you into a spiral, your people will spend more time managing your mood than solving real problems. Fragile leaders don’t scale. Their teams shrink. Their influence dries up. Strong leadership isn’t about acting tough. It’s about staying grounded when things go sideways. ... You think you’re empowering, but you’re micromanaging. You think you’re a visionary, but your team sees a control freak. You think you’re a mentor, but you dominate every meeting. The gap between intent and impact? That’s where teams disengage. The worst part? No one will tell you unless you build a culture where they can.


9 habits of the highly ineffective vibe coder

It’s easy to think that one large language model is the same as any other. The interfaces are largely identical, after all. In goes some text and out comes a magic answer, right? LLMs even tend to give similar answers to easy questions. And their names don’t even tell us much, because most LLM creators choose something cute rather than descriptive. But models have different internal structures, which can affect how well they unpack and understand problems that involve complex logic, like writing code. ... Many developers don’t realize how much LLMs are affected by the size of their input. The model must churn through all the tokens in your prompt before it can generate something that might be useful to you. More input tokens require more resources. Habitually dumping big blocks of code on the LLM can start to add up. Do it too much and you’ll end up overwhelming the hardware and filling up the context window. Some developers even talk about just uploading their entire source folder “just in case.” ... AI assistants do best when they’re focusing our attention on some obscure corner of the software documentation. Or maybe they’re finding a tidbit of knowledge about some feature that isn’t where we expected it to be. They’re amazing at searching through a vast training set for just the right insight. They’re not always so good at synthesizing or offering deep insight, though.
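A rough guard against dumping oversized context might look like this sketch. The 4-characters-per-token heuristic and the limits are illustrative assumptions; real code would use the model's own tokenizer:

```python
CONTEXT_WINDOW = 8_000           # tokens, hypothetical model limit
RESERVED_FOR_REPLY = 1_000       # leave headroom for the completion

def estimate_tokens(text: str) -> int:
    """Crude approximation: roughly four characters per token."""
    return max(1, len(text) // 4)

def fits(prompt: str) -> bool:
    return estimate_tokens(prompt) <= CONTEXT_WINDOW - RESERVED_FOR_REPLY

def trim_context(files: list, question: str) -> str:
    """Add files most-relevant-first and stop before blowing the budget,
    instead of uploading the entire source folder 'just in case'."""
    prompt = question
    for f in files:
        if not fits(prompt + "\n" + f):
            break
        prompt += "\n" + f
    return prompt

small, big = "x" * 1_000, "y" * 40_000
assert fits(small) and not fits(small + big)
assert big not in trim_context([small, big], "Why does the build fail?")
```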


How to Eliminate Deployment Bottlenecks Without Sacrificing Application Security

As organizations embrace DevOps to accelerate innovation, the traditional approach of treating security as a checkpoint begins to break down. The result? Security either slows releases or, even worse, gets bypassed altogether amidst the need to deliver as quickly as possible. ... DevOps has reshaped software delivery, with teams now expected to deploy applications at high velocity, using continuous integration and delivery (CI/CD), microservices architectures, and container orchestration platforms like Kubernetes. But as development practices evolved, many security tools have not kept pace. While traditional Web Application Firewalls (WAFs) remain effective for many use cases, their operational models can become challenging when applied to highly dynamic, modern development environments. In such scenarios, they often introduce delays, limit flexibility, and add operational burden instead of enabling agility. ... Modern architectures introduce constant change. New microservices, APIs, and environments are deployed daily. Traditional WAFs, built for stable applications, rely on domain-first onboarding models that treat each application as an isolated unit. Every new domain or service often requires manual configuration, creating friction and increasing the risk of unprotected assets.


Anthropic wants to stop AI models from turning evil - here's how

In a paper released Friday, the company explores how and why models exhibit undesirable behavior, and what can be done about it. A model's persona can change during training and once it's deployed, when user inputs start influencing it. This is evidenced by models that may have passed safety checks before deployment, but then develop alter egos or act erratically once they're publicly available ... Anthropic admitted in the paper that "shaping a model's character is more of an art than a science," but said persona vectors are another arm with which to monitor -- and potentially safeguard against -- harmful traits. In the paper, Anthropic explained that it can steer these vectors by instructing models to act in certain ways -- for example, if it injects an evil prompt into the model, the model will respond from an evil place, confirming a cause-and-effect relationship that makes the roots of a model's character easier to trace. "By measuring the strength of persona vector activations, we can detect when the model's personality is shifting towards the corresponding trait, either over the course of training or during a conversation," Anthropic explained. "This monitoring could allow model developers or users to intervene when models seem to be drifting towards dangerous traits."
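A toy sketch of the idea: compute a persona direction as the difference of mean activations between trait-exhibiting and neutral prompts, then measure drift by projection. The 4-dimensional "activations" below are invented; real models have thousands of dimensions and the actual method differs in detail:

```python
import numpy as np

# Invented activations from prompts that exhibit a trait vs. neutral ones.
trait_acts = np.array([[2.0, 0.1, 1.5, 0.0],
                       [1.8, 0.0, 1.6, 0.1]])
neutral_acts = np.array([[0.1, 0.1, 0.2, 0.0],
                         [0.0, 0.2, 0.1, 0.1]])

# Persona vector: direction separating trait behavior from neutral behavior.
persona_vec = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vec /= np.linalg.norm(persona_vec)

def trait_strength(activation: np.ndarray) -> float:
    """Projection onto the persona direction: higher = more of the trait."""
    return float(activation @ persona_vec)

drifting = np.array([1.5, 0.0, 1.2, 0.0])
benign = np.array([0.1, 0.1, 0.1, 0.1])

# Monitoring this score over training or a conversation could flag drift.
assert trait_strength(drifting) > trait_strength(benign)
```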


From Aspiration to Action: The State of DevOps Automation Today

One of the report's clearest findings is the advantage of engaging QA teams earlier in the development cycle. Teams practicing shift-left testing — bringing QA into planning, design, and early build phases — report higher satisfaction rates and stronger results overall. In fact, 88% of teams with early QA involvement reported satisfaction with their quality processes, and those teams also experienced fewer escaped defects and more comprehensive test coverage. Rather than testing at the end of the development cycle, early QA involvement enables faster feedback loops, better test design, and tighter alignment with user requirements. It also improves collaboration between developers and testers, making it easier to catch potential issues before they escalate into expensive fixes. ... While more DevOps teams recognize the importance of integrating security into the software development lifecycle (SDLC), sizable gaps remain. ... Many organizations still treat security as a separate function, disconnected from their routine QA and DevOps processes. This separation slows down vulnerability detection and remediation. These findings show the need for teams to better integrate security practices earlier in the SDLC, leveraging AI-driven tools that facilitate proactive threat detection and management.


Why the AI era is forcing a redesign of the entire compute backbone

Traditional fault tolerance relies on redundancy among loosely connected systems to achieve high uptime. ML computing demands a different approach. First, the sheer scale of computation makes over-provisioning too costly. Second, model training is a tightly synchronized process, where a single failure can cascade to thousands of processors. Finally, advanced ML hardware often pushes to the boundary of current technology, potentially leading to higher failure rates. ... As we push for greater performance, individual chips require more power, often exceeding the cooling capacity of traditional air-cooled data centers. This necessitates a shift towards more energy-intensive, but ultimately more efficient, liquid cooling solutions, and a fundamental redesign of data center cooling infrastructure. ... One important observation is that AI will, in the end, enhance attacker capabilities. This, in turn, means that we must ensure that AI simultaneously supercharges our defenses. This includes end-to-end data encryption, robust data lineage tracking with verifiable access logs, hardware-enforced security boundaries to protect sensitive computations and sophisticated key management systems. ... The rise of gen AI marks not just an evolution, but a revolution that requires a radical reimagining of our computing infrastructure. 
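Because a tightly synchronized training job can lose enormous amounts of work to a single failure, and over-provisioning is too costly, the usual mitigation is frequent checkpointing: roll back to the last saved state instead of restarting. A toy sketch of that pattern, with a placeholder training step and a simulated one-time fault (no real framework's API is used here):

```python
import copy

def train_with_checkpoints(steps, checkpoint_every=10, fail_at=None):
    """Run a toy training loop, restoring from the last checkpoint on failure."""
    state = {"step": 0, "loss": 100.0}
    checkpoint = copy.deepcopy(state)
    while state["step"] < steps:
        try:
            if fail_at is not None and state["step"] == fail_at:
                fail_at = None                    # fail once, then recover
                raise RuntimeError("simulated hardware fault")
            state["step"] += 1
            state["loss"] *= 0.99                 # stand-in for one optimizer step
            if state["step"] % checkpoint_every == 0:
                checkpoint = copy.deepcopy(state)
        except RuntimeError:
            state = copy.deepcopy(checkpoint)     # roll back, not restart
    return state

print(train_with_checkpoints(50, fail_at=25)["step"])
```

The trade-off the passage alludes to is visible even in the toy: a shorter checkpoint interval wastes less work per failure but costs more I/O per step.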


Industry Leaders Warn MSPs: Rolling Out AI Too Soon Could Backfire

“The biggest risk actually out there is deploying this stuff too soon,” he said. “If you push it really, really hard, your customers are going to be like, ‘This is terrible. I hate it. Why did you do this?’ That will change their opinion on AI for everything moving forward.” The message resonated with other leaders on the panel, including Heddy, who likened AI adoption to on-boarding a new employee. “I would not put my new employees in front of customers until I have educated them,” he said. “And so yes, you should roll [AI] out to your customers only when you are sure that what it is delivering is going to be good.” ... “Everybody’s just sort of siloed in their own little chat box. Wherever this agentic future is, we can all see that’s where it’s going, but at what point do we trust an agent to actually do something? ... “So what are the steps? What is the training that has to happen? How do we have all this information in context for the individual, the team, the entire organization? Where we’re headed is clear. Just … how long does that take?” ... “Don’t wait until you think you have it nailed and are the expert in the world on this to go have a conversation because those who are not experts on it are going to go have conversations with your customers about AI. We should consume it to make ourselves a better company, and then once we understand it well enough to sell it, only then should we go and try to sell it.”


Why Standards and Certification Matter More Than Ever

A major obstacle for enterprise IT teams is the lack of interoperability. Today's networked services span multiple clouds, edge locations and on-premises systems. Each environment brings unique security and compliance needs, making cohesive service delivery difficult. Lifecycle Service Orchestration (LSO), developed and advanced by Mplify, formerly MEF, offers a path through this complexity. With standardized and certified APIs and consistent service definitions, LSO supports automated provisioning and service management across environments and enables seamless interoperability between providers and platforms. ... In a world of constant change, standards and certification are strategic necessities. ... By uniting around proven frameworks, organizations can modernize more confidently. Certification provides a layer of trust, ensuring solutions meet real-world requirements and work across the environments that enterprises rely on most. ... Standards and certification offer a way to cut through the complexity so networks, services and AI deployments can evolve without introducing new risks. Enterprises that succeed won't be the ones asking whether to adopt LSO, SASE or GPUaaS, but the ones finding smart, swift ways to put them into practice.



Security tooling pitfalls for small teams: Cost, complexity, and low ROI

Retrofitting enterprise-grade platforms into SMB environments is often a disaster in the making. These tools are designed for organizations with layers of bureaucracy, complex structures, and entire teams dedicated to each security and compliance function. A large enterprise like Microsoft or Salesforce might have separate teams for governance, risk, compliance, cloud security, network security, and security operations. Each of those teams would own and manage specialized tooling, which in itself assumes domain experts running the show. ... “Compliance is not security” is a statement that sparks heated debates amongst many security experts. However, the reality is that even checklist-based compliance can help companies with no security in place build a strong foundation. Frameworks like SOC 2 and ISO 27001 help establish the baseline of a strong security program, ensuring you have coverage across critical controls. If you deal with Personally Identifiable Information (PII), GDPR is the gold standard for privacy controls. And with AI adoption becoming unavoidable, ISO 42001 is emerging as a key framework for AI governance, helping organizations manage AI risk and build responsible practices from the ground up.

Daily Tech Digest - August 04, 2025


Quote for the day:

"You don’t have to be great to start, but you have to start to be great." — Zig Ziglar


Why tomorrow’s best devs won’t just code — they’ll curate, coordinate and command AI

It is not just about writing code anymore — it is about understanding systems, structuring problems and working alongside AI like a team member. That is a tall order. That said, I do believe that there is a way forward. It starts by changing the way we learn. If you are just starting out, avoid relying on AI to get things done. It is tempting, sure, but in the long run, it is also harmful. If you skip the manual practice, you are missing out on building a deeper understanding of how software really works. That understanding is critical if you want to grow into the kind of developer who can lead, architect and guide AI instead of being replaced by it. ... AI-augmented developers will replace large teams that used to be necessary to move a project forward. In terms of efficiency, there is a lot to celebrate about this change — reduced communication time, faster results and higher bars for what one person can realistically accomplish. But, of course, this does not mean teams will disappear altogether. It is just that the structure will change. ... Being technically fluent will still remain a crucial requirement — but it won’t be enough to simply know how to code. You will need to understand product thinking, user needs and how to manage AI’s output. It will be more about system design and strategic vision. For some, this may sound intimidating, but for others, it will also open many doors. People with creativity and a knack for problem-solving will have huge opportunities ahead of them.


The Wild West of Shadow IT

From copy to deck generators, code assistants, and data crunchers, most of them were never reviewed or approved. The productivity gains of AI are huge. Productivity has been catapulted forward in every department and across every vertical. So what could go wrong? Oh, just sensitive data leaks, uncontrolled API connections, persistent OAuth tokens, and no monitoring, audit logs, or privacy policies… and that's just to name a few of the very real and dangerous issues. ... Modern SaaS stacks form an interconnected ecosystem. Applications integrate with each other through OAuth tokens, API keys, and third-party plug-ins to automate workflows and enable productivity. But every integration is a potential entry point — and attackers know it. Compromising a lesser-known SaaS tool with broad integration permissions can serve as a stepping stone into more critical systems. Shadow integrations, unvetted AI tools, and abandoned apps connected via OAuth can create a fragmented, risky supply chain.  ... Let's be honest - compliance has become a jungle due to IT democratization. From GDPR to SOC 2… your organization's compliance is hard to gauge when your employees use hundreds of SaaS tools and your data is scattered across more AI apps than you even know about. You have two compliance challenges on the table: You need to make sure the apps in your stack are compliant and you also need to assure that your environment is under control should an audit take place.
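A practical first step against shadow integrations is simply inventorying OAuth grants and flagging broad scopes on apps no one reviewed. A minimal sketch of that triage; the grant records, scope names, and risk list below are invented for illustration (real data would come from your identity provider's or SaaS platform's admin API):

```python
# Hypothetical list of scopes considered broad enough to warrant review.
RISKY_SCOPES = {"files.read.all", "mail.read", "directory.read.all"}

def flag_risky_grants(grants):
    """Return unapproved apps holding any broad-access OAuth scope."""
    flagged = []
    for g in grants:
        broad = RISKY_SCOPES & set(g["scopes"])
        if broad and not g.get("approved", False):
            flagged.append({"app": g["app"], "scopes": sorted(broad)})
    return flagged

grants = [
    {"app": "deck-genie", "scopes": ["files.read.all"], "approved": False},
    {"app": "crm-sync",   "scopes": ["mail.read"],      "approved": True},
    {"app": "notes",      "scopes": ["profile"],        "approved": False},
]
print(flag_risky_grants(grants))
```

Even a crude report like this surfaces the abandoned, over-permissioned connections the article warns about, and gives the audit trail a starting point.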


Edge Computing: Not Just for Tech Giants Anymore

A resilient local edge infrastructure significantly enhances the availability and reliability of enterprise digital shopfloor operations by providing powerful on-premises processing as close to the data source as possible—ensuring uninterrupted operations while avoiding external cloud dependency. For businesses, this translates to improved production floor performance and increased uptime—both critical in sectors such as manufacturing, healthcare, and energy. In today’s hyperconnected market, where customers expect seamless digital interactions around the clock, any delay or downtime can lead to lost revenue and reputational damage. Moreover, as AI, IoT, and real-time analytics continue to grow, on-premises OT edge infrastructure combined with industrial-grade connectivity such as private 4.9/LTE or 5G provides the necessary low-latency platform to support these emerging technologies. Investing in resilient infrastructure is no longer optional, it’s a strategic imperative for organisations seeking to maintain operational continuity, foster innovation, and stay ahead of competitors in an increasingly digital and dynamic global economy. ... Once, infrastructure decisions were dominated by IT and boiled down to a simple choice between public and private infrastructure. Today, with IT/OT convergence, it’s all about fit-for-purpose architecture. On-premises edge computing doesn’t replace the cloud — it complements it in powerful ways.


A Reporting Breakthrough: Advanced Reporting Architecture

Advanced Reporting Architecture is based on a powerful and scalable SaaS architecture that efficiently addresses user-specific reporting requirements by generating all possible reports upfront. Users simply select and analyze the views that matter most to them. The Advanced Reporting Architecture's SaaS platform is built for global reach and enterprise reliability, with the following features:
- Modern User Interface: Delivered via AWS, optimized for mobile and desktop, with seamless language switching (English, French, German, Spanish, and more to come).
- Encrypted Cloud Storage: Ensuring uploaded files and reports are always secure.
- Serverless Data Processing: High-precision processing that analyzes user-uploaded data and applies data-driven relevance factors to maximize analytical efficiency and lower processing costs.
- Comprehensive Asset Management: Support for editable reports, dashboards, presentations, pivots, and custom outputs.
- Integrated Payments & Accounting: Powered by PayPal and Odoo.
- Simple Subscription Model: Pay only for what you use, with no expensive licenses, hardware, or ongoing maintenance.
Some leading-edge reporting platforms, such as PrestoCharts, are based on Advanced Reporting Architecture and have been successful in enabling business users to develop custom reports on the fly. Thus, Advanced Reporting Architecture puts reporting prowess in the hands of the user.



These jobs face the highest risk of AI takeover, according to Microsoft

According to the report -- which has yet to be peer-reviewed -- the most at-risk jobs are those that are based on the gathering, synthesis, and communication of information, at which modern generative AI systems excel: think translators, sales and customer service reps, writers and journalists, and political scientists. The most secure jobs, on the other hand, are supposedly those that depend more on physical labor and interpersonal skills. No AI is going to replace phlebotomists, embalmers, or massage therapists anytime soon. ... "It is tempting to conclude that occupations that have high overlap with activities AI performs will be automated and thus experience job or wage loss, and that occupations with activities AI assists with will be augmented and raise wages," the Microsoft researchers note in their report. "This would be a mistake, as our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive." The report also echoes what's become something of a mantra among the biggest tech companies as they ramp up their AI efforts: that even though AI will replace or radically transform many jobs, it will also create new ones. ... It's possible that AI could play a role in helping people practice that skill. About one in three Americans are already using the technology to help them navigate a shift in their career, a recent study found.


AIBOMs are the new SBOMs: The missing link in AI risk management

AIBOMs follow the same formats as traditional SBOMs, but contain AI-specific content and metadata, like model family, acceptable usage, AI-specific licenses, etc. If you are a security leader at a large defense contractor, you’d need the ability to identify model developers and their country of origin. This would ensure you are not utilizing models originating from near-peer adversary countries, such as China. ... The first step is inventorying their AI. Utilize AIBOMs to inventory your AI dependencies, monitor what is approved vs. requested vs. denied, and ensure you have an understanding of what is deployed where. The second is to actively seek out AI, rather than waiting for employees to discover it. Organizations need capabilities to identify AI in code and automatically generate resulting AIBOMs. This should be integrated as part of the MLOps pipeline to generate AIBOMs and automatically surface new AI usage as it occurs. The third is to develop and adopt responsible AI policies. Some of them are fairly common-sense: no contributors from OFAC countries, no copylefted licenses, no usage of models without a three-month track record on HuggingFace, and no usage of models over a year old without updates. Then, enforce those policies in an automated and scalable system. The key is moving from reactive discovery to proactive monitoring.
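The "enforce those policies in an automated and scalable system" step can be sketched as a rule check over each AIBOM entry. The entry shape below is invented for illustration (real AIBOMs follow SBOM formats such as CycloneDX, carrying AI-specific metadata), but the rules mirror the example policies from the article:

```python
from datetime import date

def policy_violations(entry, today=date(2025, 8, 9)):
    """Check one AIBOM-style entry against example responsible-AI policies."""
    violations = []
    if entry.get("origin_country") in {"CN"}:      # near-peer origin restriction
        violations.append("restricted country of origin")
    if "copyleft" in entry.get("license", "").lower():
        violations.append("copylefted license")
    if entry.get("months_on_hub", 0) < 3:          # three-month track record rule
        violations.append("insufficient track record")
    last = entry.get("last_updated")
    if last and (today - last).days > 365:         # stale-model rule
        violations.append("no updates in over a year")
    return violations

model = {"name": "example-llm", "origin_country": "US", "license": "apache-2.0",
         "months_on_hub": 1, "last_updated": date(2025, 6, 1)}
print(policy_violations(model))
```

Run in an MLOps pipeline, a check like this turns the policies from a document into a gate: a non-empty violations list blocks the deployment and surfaces the new AI usage automatically.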


2026 Budgets: What’s on Top of CIOs’ Lists (and What Should Be)

CIO shops are becoming outcome-based, which makes them accountable for what they’re delivering against the value potential, not how many hours were burned. “The biggest challenge seems to be changing every day, but I think it’s going to be all about balancing long-term vision with near-term execution,” says Sudeep George, CTO at software-delivered AI data company iMerit. “Frankly, nobody has a very good idea of what's going to happen in 2026, so everyone's placing bets,” he continues. “This unpredictability is going to be the nature of the beast, and we have to be ready for that.” ... “Reducing the amount of tech debt will always continue to be a focus for my organization,” says Calleja-Matsko. “We’re constantly looking at re-evaluating contracts, terms, [and] whether we have overlapping business capabilities that are being addressed by multiple tools that we have.” It’s rationalizing, she adds, and what that does is free up investment. How is this vendor pricing its offering? How do we make sure we include enough in our budget based on that pricing model? “That’s my challenge,” Calleja-Matsko emphasizes. Talent is top of mind for 2026, both in terms of attracting it and retaining it. Ultimately, though, AI investments are enabling the company to spend more time with customers.


Digital Twin: Revolutionizing the Future of Technology and Industry

The rise of the Internet of Things (IoT) has made digital twin technology more relevant and accessible. IoT devices continuously gather data from their surroundings and send it to the cloud. This data is used to create and update digital twins of those devices or systems. In smart homes, digital twins help monitor and control lighting, heating, and appliances. In industrial settings, IoT sensors track machine health and performance. Moreover, these smart systems can detect minor issues early, before they lead to failures. As more devices come online, digital twins offer greater visibility and control. ... Despite its benefits, digital twin technology comes with challenges. One major issue is the high cost of implementation. Setting up sensors, software systems, and data processing can be expensive, particularly for small businesses. There are also concerns about data security and privacy. Since digital twins rely on continuous data flows, any breach can be risky. Integrating digital twins into existing systems can be complex. Moreover, it requires skilled professionals who understand both the physical systems and the underlying digital technologies. Another challenge is ensuring the quality and accuracy of the data. If the input data is flawed, the digital twin’s results will also be unreliable. Companies must also cope with large amounts of data, which requires a robust IT infrastructure.
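The core loop behind a digital twin is simple: mirror incoming sensor readings into a software model of the asset, then compare the mirrored state against expected behavior. A toy sketch of that loop; the machine, the thresholds, and the readings are invented for illustration:

```python
class MachineTwin:
    """Minimal digital twin: mirrors sensor state and flags anomalies."""

    def __init__(self, max_temp_c=80.0):
        self.max_temp_c = max_temp_c
        self.state = {}        # the twin's mirror of the physical machine
        self.alerts = []

    def ingest(self, reading):
        self.state.update(reading)              # keep the twin in sync
        temp = self.state.get("temp_c")
        if temp is not None and temp > self.max_temp_c:
            self.alerts.append(f"overheat: {temp} C")

twin = MachineTwin()
for r in [{"temp_c": 62.0}, {"rpm": 1500}, {"temp_c": 85.5}]:
    twin.ingest(r)
print(twin.alerts)
```

The quality-of-data challenge from the passage is visible even here: a miscalibrated sensor feeding this loop would produce confident but wrong alerts, which is why flawed input data undermines the whole twin.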


Why Banks Must Stop Pretending They’re Not Tech Companies

The most successful "banks" of the future may not even call themselves banks at all. While traditional institutions cling to century-old identities rooted in vaults and branches, their most formidable competitors are building financial ecosystems from the ground up with APIs, cloud infrastructure, and data-driven decision engines. ... The question isn’t whether banks will become technology companies. It’s whether they’ll make that transition fast enough to remain relevant. And to do this, they must rethink their identity by operating as technology platforms that enable fast, connected, and customer-first experiences. ... This isn’t about layering digital tools on top of legacy infrastructure or launching a chatbot and calling it innovation. It’s about adopting a platform mindset — one that treats technology not as a cost center but as the foundation of growth. A true platform bank is modular, API-first, and cloud-native. It uses real-time data to personalize every interaction. It delivers experiences that are intuitive, fast, and seamless — meeting customers wherever they are and embedding financial services into their everyday lives. ... To keep up with the pace of innovation, banks must adopt skills-based models that prioritize adaptability and continuous learning. Upskilling isn’t optional. It’s how institutions stay responsive to market shifts and build lasting capabilities. And it starts at the top.


Colo space crunch could cripple IT expansion projects

For enterprise IT execs who already have a lot on their plates, the lack of available colocation space represents yet another headache to deal with, and one with major implications. Nobody wants to have to explain to the CIO or the board of directors that the company can’t proceed with digitization efforts or AI projects because there’s no space to put the servers. IT execs need to start the planning process now to get ahead of the problem. ... Demand has outstripped supply due to multiple factors, according to Pat Lynch, executive managing director at CBRE Data Center Solutions. “AI is definitely part of the demand scenario that we see in the market, but we also see growing demand from enterprise clients for raw compute power that companies are using in all aspects of their business.” ... It’s not GPU chip shortages that are slowing down new construction of data centers; it’s power. When a hyperscaler, colo operator or enterprise starts looking for a location to build a data center, the first thing they need is a commitment from the utility company for the required megawattage. According to a McKinsey study, data centers are consuming more power due to the proliferation of the power-hungry GPUs required for AI. Ten years ago, a 30 MW data center was considered large. Today, a 200 MW facility is considered normal.