Daily Tech Digest - July 05, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


The Hidden Data Cost: Why Developer Soft Skills Matter More Than You Think

The logic is simple but under-discussed: developers who struggle to communicate with product owners, translate goals into architecture, or anticipate system-wide tradeoffs are more likely to build the wrong thing, need more rework, or get stuck in cycles of iteration that waste time and resources. These are not theoretical risks; they’re quantifiable cost drivers. According to Lumenalta’s findings, organizations that invest in well-rounded senior developers, including soft skill development, see fewer errors, faster time to delivery, and stronger alignment between technical execution and business value. ... The irony? Most organizations already have technically proficient talent in-house. What they lack is the environment to develop those skills that drive high-impact outcomes. Senior developers who think like “chess masters”—a term Lumenalta uses for those who anticipate several moves ahead—can drastically reduce a project’s TCO by mentoring junior talent, catching architecture risks early, and building systems that adapt rather than break under pressure. ... As AI reshapes every layer of tech, developers who can bridge business goals and algorithmic capabilities will become increasingly valuable. It’s not just about knowing how to fine-tune a model; it’s about knowing when not to.


Why AV is an overlooked cybersecurity risk

As cyber attackers become more sophisticated, they’re shifting their attention to overlooked entry points like AV infrastructure. A good example is YouTuber Jim Browning’s infiltration of a scam call center, where he used unsecured CCTV systems to monitor and expose criminals in real time. This highlights the potential for AV vulnerabilities to be exploited for intelligence gathering. To counter these risks, organizations must adopt a more proactive approach. Simulated social engineering and phishing attacks can help assess user awareness and expose vulnerabilities in behavior. These simulations should be backed by ongoing training that equips staff to recognize manipulation tactics and understand the value of security hygiene. ... To mitigate the risks posed by vulnerable AV systems, organizations should take a proactive and layered approach to security. This includes regularly updating device firmware and underlying software packages, which are often left outdated even when new versions are available. Strong password policies should be enforced, particularly on devices running web servers, with security practices aligned to standards like the OWASP Top 10. Physical access to AV infrastructure must also be tightly controlled to prevent unauthorized LAN connections.


EU Presses for Quantum-Safe Encryption by 2030 as Risks Grow

The push comes amid growing concern about the long-term viability of conventional encryption techniques. Current security protocols rely on complex mathematical problems — such as factoring large numbers — that would take today’s classical computers thousands of years to solve. But quantum computers could potentially crack these systems in a fraction of the time, opening the door to what cybersecurity experts refer to as “store now, decrypt later” attacks. In these attacks, hackers collect encrypted data today with the intention of breaking the encryption once quantum technology matures. Germany’s Federal Office for Information Security (BSI) estimates that conventional encryption could remain secure for another 10 to 20 years in the absence of sudden breakthroughs, The Munich Eye reports. Europol has echoed that forecast, suggesting a 15-year window before current systems might be compromised. While the timeline is uncertain, European authorities agree that proactive planning is essential. PQC is designed to resist attacks from both classical and quantum computers by using algorithms based on different kinds of hard mathematical problems. These newer algorithms are more complex and require different computational strategies than those used in today’s standards like RSA and ECC. 


MongoDB Doubles Down on India's Database Boom

Chawla says MongoDB is helping Indian enterprises move beyond legacy systems through two distinct approaches. "The first one is when customers decide to build a completely new modern application, gradually sunsetting the old legacy application," he explains. "We work closely with them to build these modern systems." ... Despite this fast-paced growth, Chawla points out several lingering myths in India. "A lot of customers still haven't realised that if you want to build a modern application, especially one that's AI-driven, you can't build it on a relational structure," he explains. "Most of the data today is unstructured and messy. So you need a database that can scale, can handle different types of data, and support modern workloads." ... Even those trying to move away from traditional databases often fall into the trap of viewing PostgreSQL as a modern alternative. "PostgreSQL is still relational in nature. It has the same row-and-column limitations and scalability issues." He also adds that if companies want to build a future-proof application, especially one that infuses AI capabilities, they need something that can handle all data types and offers native support for features like full-text search, hybrid search, and vector search. Other NoSQL players such as Redis and Apache Cassandra also have significant traction in India.


AI only works if the infrastructure is right

The successful implementation of artificial intelligence is therefore closely linked to the underlying infrastructure. But how you define that AI infrastructure is open to debate. An AI infrastructure always consists of different components, which is clearly reflected in the diverse backgrounds of the participating parties. As a customer, how can you best assess such an AI infrastructure? ... For companies looking to get started with AI infrastructure, a phased approach is crucial. Start small with a pilot, clearly define what you want to achieve, and expand step by step. The infrastructure must grow with the ambitions, not the other way around. A practical approach must be based on the objectives. Then the software, middleware, and hardware will be available. For virtually every use case, you can choose from the necessary and desired components. ... At the same time, the AI landscape requires a high degree of flexibility. Technological developments are rapid, models change, and business requirements can shift from quarter to quarter. It is therefore essential to establish an infrastructure that is not only scalable but also adaptable to new insights or shifting objectives. Consider the possibility of dynamically scaling computing capacity up or down, compressing models where necessary, and deploying tooling that adapts to the requirements of the use case. 


Software abstraction: The missing link in commercially viable quantum computing

Quantum Infrastructure Software delivers this essential abstraction, turning bare-metal QPUs into useful devices, much the way data center providers integrate virtualization software for their conventional systems. Current offerings cover all of the functions typically associated with the classical BIOS up through virtual machine hypervisors, extending to developer tools at the application level. Software-driven abstraction of quantum complexity away from the end users lets anyone, irrespective of their quantum expertise, leverage quantum computing for the problems that matter most to them. ... With a finely tuned quantum computer accessible, a user must still execute many tasks to extract useful answers from the QPU, in analogy with the careful memory management required to gain practical acceleration with GPUs. Most importantly, in executing a real workload, they must convert high-level “assembly-language” logical definitions of quantum applications into hardware-specific “machine-language” instructions that account for the details of the QPU in use, and deploy countermeasures where errors might leak in. These are typically tasks that can only be handled by (expensive!) specialists in quantum-device operation.


Guest Post: Why AI Regulation Won’t Work for Quantum

Artificial intelligence regulation has been in the regulatory spotlight for the past seven to ten years, and there is no shortage of governments and global institutions, as well as corporations and think tanks, putting forth regulatory frameworks in response to this much-hyped technology. AI makes decisions in a “black box,” creating a need for “explainability” in order to fully understand how determinations by these systems affect the public. With the democratization of AI systems, there is the potential for bad actors to create harm in a decentralized ecosystem. ... Because quantum systems do not learn on their own, evolve over time, or make decisions based on training data, they do not pose the same kind of existential or social threats that AI does. While the implications of quantum breakthroughs will no doubt be profound, especially in cryptography, defense, drug development, and material science, the core risks are tied to who controls the technology and for what purpose. Regulating who controls technology and ensuring bad actors are disincentivized from using technology in harmful ways is the stuff of traditional regulation across many sectors, so regulating quantum should prove somewhat less challenging than current AI regulatory debates would suggest.


Validation is an Increasingly Critical Element of Cloud Security

Security engineers simply don’t have the time or resources to familiarize themselves with the vast number of cloud services available today. In the past, security engineers primarily needed to understand Windows and Linux internals, Active Directory (AD) domain basics, networks and some databases and storage solutions. Today, they need to be familiar with hundreds of cloud services, from virtual machines (VMs) to serverless functions and containers at different levels of abstraction. ... It’s also important to note that cloud environments are particularly susceptible to misconfigurations. Security teams often primarily focus on assessing the performance of their preventative security controls, searching for weaknesses in their ability to detect attack activity. But this overlooks the danger posed by misconfigurations, which are not caused by bad code, software bugs, or malicious activity. That means they don’t fall within the definition of “vulnerabilities” that organizations typically test for—but they still pose a significant danger.  ... Securing the cloud isn’t just about having the right solutions in place — it’s about determining whether they are functioning correctly. But it’s also about making sure attackers don’t have other, less obvious ways into your network.


Build and Deploy Scalable Technical Architecture a Bit Easier

A critical challenge when transforming proof-of-concept systems into production-ready architecture is balancing rapid development with future scalability. At one organization, I inherited a monolithic Python application that was initially built as a lead distribution system. The prototype performed adequately in controlled environments but struggled when processing real-world address data, which, by their nature, contain inconsistencies and edge cases. ... Database performance often becomes the primary bottleneck in scaling systems. Domain-Driven Design (DDD) has proven particularly valuable for creating loosely coupled microservices, with its strategic phase ensuring that the design architecture properly encapsulates business capabilities, and the tactical phase allowing the creation of domain models using effective design patterns. ... For systems with data retention policies, table partitioning proved particularly effective, turning one table into several while maintaining the appearance of a single table to the application. This allowed us to implement retention simply by dropping entire partition tables rather than performing targeted deletions, which prevented database bloat. These optimizations reduced average query times from seconds to milliseconds, enabling support for much higher user loads on the same infrastructure.
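The retention-by-partitioning idea above can be sketched in a few lines of plain Python (a toy stand-in for a real partitioned database; the `MonthlyPartitionedTable` class and its method names are hypothetical, invented for illustration): retention becomes dropping whole partitions rather than deleting individual rows.

```python
from collections import defaultdict
from datetime import date

class MonthlyPartitionedTable:
    """Toy stand-in for a database table partitioned by month.

    Rows live in per-month buckets, but query_all() presents them as a
    single table, mirroring how partitioning stays invisible to the app.
    """

    def __init__(self):
        self._partitions = defaultdict(list)  # (year, month) -> rows

    def insert(self, created: date, row: dict):
        self._partitions[(created.year, created.month)].append(row)

    def query_all(self):
        # The application still sees one logical table.
        return [r for rows in self._partitions.values() for r in rows]

    def enforce_retention(self, cutoff: date):
        # Retention = dropping whole stale partitions (analogous to
        # DROP TABLE on a partition) instead of row-by-row deletes,
        # which is what prevents database bloat.
        stale = [k for k in self._partitions if k < (cutoff.year, cutoff.month)]
        for key in stale:
            del self._partitions[key]
        return len(stale)

table = MonthlyPartitionedTable()
table.insert(date(2025, 3, 5), {"lead_id": 1})
table.insert(date(2025, 6, 9), {"lead_id": 2})
table.insert(date(2025, 7, 1), {"lead_id": 3})

dropped = table.enforce_retention(cutoff=date(2025, 6, 1))
print(dropped, len(table.query_all()))  # 1 partition dropped, 2 rows remain
```

In a real system the same shape appears as native declarative partitioning in the database, with the application querying the parent table unaware of the partitions underneath.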


What AI Policy Can Learn From Cyber: Design for Threats, Not in Spite of Them

The narrative that constraints kill innovation is both lazy and false. In cybersecurity, we’ve seen the opposite. Federal mandates like the Federal Information Security Modernization Act (FISMA), which forced agencies to map their systems, rate data risks, and monitor security continuously, and state-level laws like California’s data breach notification statute created the pressure and incentives that moved security from afterthought to design priority.  ... The irony is that the people who build AI, like their cybersecurity peers, are more than capable of innovating within meaningful boundaries. We’ve both worked alongside engineers and product leaders in government and industry who rise to meet constraints as creative challenges. They want clear rules, not endless ambiguity. They want the chance to build secure, equitable, high-performing systems — not just fast ones. The real risk isn’t that smart policy will stifle the next breakthrough. The real risk is that our failure to govern in real time will lock in systems that are flawed by design and unfit for purpose. Cybersecurity found its footing by designing for uncertainty and codifying best practices into adaptable standards. AI can do the same if we stop pretending that the absence of rules is a virtue.

Daily Tech Digest - July 04, 2025


Quote for the day:

"Rank does not confer privilege or give power. It imposes responsibility." -- Peter F. Drucker


The one secret to using genAI to boost your brain

Research published by Carnegie Mellon University this month found that groups that turned to Google Search came up with fewer creative ideas during brainstorming sessions compared to groups without access to Google Search. Not only did each Google Search group come up with the same ideas as the other Search groups, they also presented them in the same order, suggesting that the search results replaced their actual creativity. The researchers called this a “fixation effect.” When people see a few examples, they tend to get stuck on those and struggle to think beyond them. ... Our knowledge of and perspective on the world becomes less our own and more what the algorithms feed us. They do this by showing us content that triggers strong feelings — anger, joy, fear. Instead of feeling a full range of emotions, we bounce between extremes. Researchers call this “emotional dysregulation.” The constant flood of attention-grabbing posts can make it hard to focus or feel calm. AI algorithms on social media grab our attention with endless new content. ... To elevate both the quality of your work and the performance of your mind, begin by crafting your paper, email, or post entirely on your own, without any assistance from genAI tools. Only after you have thoroughly explored a topic and pushed your own capabilities should you turn to chatbots, using them as a catalyst to further enhance your output, generate new ideas, and refine your results.


How AI-Powered DevSecOps Can Reinvent Agile Planning

The shift toward AI-enhanced agile planning requires a practical assessment of your current processes and tool chain. Start by evaluating whether your current processes create bottlenecks between development and deployment, looking for gaps where agile ceremonies exist, but traditional approval workflows still dominate critical path decisions. Next, assess how much time your teams spend on planning ceremonies versus actual development work. Consider whether AI could automate the administrative aspects, such as backlog grooming, estimation sessions and status updates, while preserving human strategic input on priorities and technical decisions. Examine your current tool chain to identify where manual coordination is required between the planning, development and deployment phases. Look for opportunities where AI can automate data synchronization and provide predictive insights about capacity and timeline risks, reducing the context switching that fragments developer focus. Finally, review your current planning overhead and identify which administrative tasks can be automated, allowing your team to focus on delivering customer value and making strategic technical decisions rather than adhering to process compliance. The goal is not to eliminate human judgment but to elevate it from routine tasks to the strategic thinking that drives innovation.


The AI power swingers

AI workloads - generative AI and the training of large language models in particular - demand a profusion of power in just a fraction of a second, and this in itself brings some complications. “When you are engaging a training model, you engage all of these GPUs simultaneously, and there’s a very quick rise to pretty much maximum power, and we are seeing that at a sub-second pace,” Ed Ansett, director at I3 Solutions, tells DCD. “The problem is that you have, for example, 50MW of IT load that the utility is about to see, but it will see it very quickly, and the utility won’t be able to respond that quickly. It will cause frequency problems, and the utility will almost certainly disconnect the data center, so there needs to be a way of buffering those workloads.” ... Despite this, AWS and Annapurna Labs have made some moves with the second generation of their home-grown AI accelerator - Trainium. These chips differ from GPUs both architecturally and in their end capabilities. “If you look at GPU architecture, it's thousands of small tensor cores, small CPUs that are all running in parallel. Here, the architecture is called a systolic array, which is a completely different architecture,” says Gadi Hutt, director of product and customer engineering at Annapurna Labs. “Basically data flows through the logic of the systolic array that then does the efficient linear algebra acceleration.”


Why database security needs a seat at the cyber strategy table

Too often, security gaps exist because database environments are siloed from the broader IT infrastructure, making visibility and coordination difficult. This is especially true in hybrid environments, where legacy on-premises systems coexist with cloud-based assets. The lack of centralised oversight can allow misconfigurations and outdated software to go unnoticed, until it’s too late. ... Comprehensive monitoring plays a central role in securing database environments. Organisations need visibility into performance, availability, and security indicators in real time. Solutions like Paessler PRTG enable IT and security teams to proactively detect deviations from the norm, whether it’s a sudden spike in access requests or performance degradation that might signal malicious activity. Monitoring also helps bridge the gap between IT operations and security teams. ... Ultimately, database security is not just about technology, it’s about visibility, accountability, and ownership. Security teams must collaborate with database administrators, IT operations, and compliance functions to ensure policies are enforced, risks are mitigated, and monitoring is continuous.


Cyber Vaults: How Regulated Sectors Fight Cyberattacks

Cyber vaults are emerging as a key way organizations achieve that assurance. These highly secure environments protect immutable copies of mission-critical data – typically the data that enables the “minimum viable company” (i.e., the essential functions and systems that must remain operational). Cyber vaults achieve this by creating logically and physically isolated environments that sever the connection between production and backup systems. This isolation ensures known-good copies remain untouched, ready for recovery in the event of a ransomware attack. ... A cyber vault only fulfills its promise when built on all three pillars. Each must function as an enforceable control. Increasingly, boards and regulators aren’t just expecting these controls — they’re demanding proof they are in place, operational, and effective. Leave one out, and the entire recovery strategy is at risk. ... In regulated industries, failure to demonstrate recoverability can lead to fines, public scrutiny, and regulatory sanctions. These pressures are elevating backup and recovery from IT hygiene to boardroom priority, where resilience is increasingly viewed as a fiduciary responsibility. Organizations are coming to terms with a new reality: prevention will fail. Recovery is what defines resilience. It’s not just about whether you have backups – it’s whether you can trust them to work when it matters most.


Africa’s cybersecurity crisis and the push to mobilize communities to safeguard a digital future

Trained youth are also acting as knowledge multipliers. After receiving foundational cybersecurity education, many go on to share their insights with parents, siblings, and local networks. This creates a ripple effect of awareness and behavioral change, extending far beyond formal institutions. In regions where internet use is rising faster than formal education systems can adapt, such peer-to-peer education is proving invaluable. Beyond defense, cybersecurity also offers a pathway to economic opportunity. As demand for skilled professionals grows, early exposure to the field can open doors to employment in both local and global markets. This supports broader development goals by linking digital safety with job creation and innovation. ... Africa’s digital future cannot be built on insecure foundations. Cybersecurity is not a luxury, it is a prerequisite for sustainable growth, social trust, and national security. Grassroots efforts across the continent are already demonstrating that meaningful progress is possible, even in resource-constrained environments. However, these efforts must be scaled, formalized, and supported at the highest levels. By equipping communities, especially youth, with the knowledge and tools to defend themselves online, a resilient digital culture can be cultivated from the ground up. 


Gartner unveils top strategic software engineering trends for 2025 and beyond

Large language models (LLMs) are enabling software to interact more intelligently and autonomously. Gartner predicts that by 2027, 55% of engineering teams will build LLM-based features. Successful adoption will require rethinking strategies, upskilling teams, and implementing robust guardrails for risk management. ... Organizations will increasingly integrate generative AI (GenAI) capabilities into internal developer platforms, with 70% of platform teams expected to do so by 2027. This trend supports discoverability, security, and governance while accelerating AI-powered application development. ... High talent density—building teams with a high concentration of skilled professionals—will be a crucial differentiator. Organizations should go beyond traditional hiring to foster a culture of continuous learning and collaboration, enabling greater adaptability and customer value delivery. ... Open GenAI models are gaining traction for their flexibility, cost-effectiveness, and freedom from vendor lock-in. By 2028, Gartner forecasts 30% of global GenAI spending will target open models customized for domain-specific needs, making advanced AI more accessible and affordable. ... Green software engineering will become vital to meet sustainability goals, focusing on carbon-efficient and carbon-aware practices across the entire software lifecycle. 


6 techniques to reduce cloud observability cost

Cloud observability is essential for most modern organizations: it provides deep insight into application behavior, surfaces problems and the little bumps in the road along the way, and helps keep the overall user experience smooth. Meanwhile, the growing volume of telemetry data that keeps piling up, such as logs, metrics and traces, becomes costlier by the minute. But one thing is clear: You do not have to compromise visibility just to reduce costs. ... For high-volume data streams (especially traces and logs), consider some intelligent sampling methods that will allow you to capture a statistically significant subset of data, thus reducing volume while still allowing for anomaly detection and trend analysis. ... Consider the periodicity of metric collection. Do you really need to scrape metrics every 10 seconds, when every 60 seconds would be enough to get a useful view of the service? Adjusting these intervals can greatly reduce the number of data points. ... Utilize autoscaling to automatically scale the compute capacity proportional to demand so that you pay only for what you actually use. This removes over-allocation of resources in low-usage times. ... For predictable workloads, check out the discounts offered by the cloud providers in the form of Reserved Instances or Savings Plans. For fault-tolerant and interruptible workloads, Spot Instances offer a considerable discount. ...
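The sampling technique above can be illustrated with a minimal sketch (plain Python, not any vendor's API; the `sample_telemetry` helper and its `keep_rate` parameter are assumptions for this example): routine records are sampled probabilistically, while error records are always retained so anomaly detection still works on the reduced stream.

```python
import random

def sample_telemetry(records, keep_rate=0.1, rng=None):
    """Head-sampling sketch: keep every error/anomaly record,
    keep only a fraction of routine ones to cut ingest volume."""
    rng = rng or random.Random(0)  # seeded here for reproducibility
    kept = []
    for rec in records:
        if rec["level"] == "error":      # never drop the signal
            kept.append(rec)
        elif rng.random() < keep_rate:   # probabilistic sample of the rest
            kept.append(rec)
    return kept

records = [{"level": "info", "msg": f"req {i}"} for i in range(1000)]
records += [{"level": "error", "msg": "timeout"}] * 5

kept = sample_telemetry(records)
errors_kept = sum(1 for r in kept if r["level"] == "error")
print(errors_kept, len(kept))  # all 5 errors kept; roughly 10% of the rest
```

The same intuition applies to scrape intervals: moving a metric from a 10-second to a 60-second collection period cuts its data points sixfold, at the cost of coarser time resolution.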


Three Tech Moves That Pay Off for Small Banks Fighting Margin Squeeze

For smaller institutions looking to strengthen profitability, tech may now be the most direct and controllable lever they have at their disposal. Projects that might once have been seen as back-office improvements are starting to look like strategic choices that can enhance performance — helping banks reduce cost per account, increase revenue per relationship, and act with greater speed and precision. ... Banks are using robotic process automation (RPA) to handle repetitive, rules-based tasks that previously absorbed valuable staff time. These bots operate across existing platforms, moving data, triggering workflows, and reducing manual error without requiring major system changes. In parallel, chatbot-style knowledge hubs help frontline staff find information quickly and consistently — reducing service bottlenecks and freeing people to focus on more complex work. These are targeted projects with measurable operational impact. ... Banks can also use customer data to guide customer acquisition strategies. By combining internal insights with external datasets — like NAICS codes, firmographics, or geographic overlays — institutions can build profiles of their most valuable relationships and find others that fit the same criteria. This kind of modeling helps sales and business development teams focus on the opportunities with the greatest long-term potential.


The Cost of Bad Data: Why Time Series Integrity Matters More Than You Think

Data plays a critical role in shaping operational decisions. From sensor streams in factories to API response times in cloud environments, organizations rely on time-stamped metrics to understand what’s happening and determine what to do next. But when that data is inaccurate or incomplete, systems make the wrong call. Teams waste time chasing false alerts, miss critical anomalies, and make high-stakes decisions based on flawed assumptions. When trust in data breaks down, risk increases, response slows, and costs rise. ... Real-time transformations shape data as it flows through the system. Apache Arrow Flight enables high-speed streaming, and SQL or Python logic transforms values on the fly. Whether enriching metadata, filtering out noise, or converting units, InfluxDB 3 handles these tasks before data reaches long-term storage. A manufacturing facility using temperature sensors in production lines can automatically convert Fahrenheit to Celsius, label zones, and discard noisy heartbeat values before the data hits storage. This gives operators clean dashboards and real-time control insights without requiring extra processing time or manual data correction—saving teams hours of rework and helping businesses maintain fast, reliable decision-making under pressure.
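The transformation step described above can be sketched in plain Python rather than InfluxDB 3's actual SQL/Python transformation API (the `transform` function and the `zone_map` lookup are hypothetical, for illustration only): heartbeat noise is discarded, units are converted, and zone metadata is attached before anything reaches storage.

```python
def transform(point, zone_map):
    """Shape a sensor reading before storage: drop heartbeat noise,
    convert Fahrenheit to Celsius, and label the zone.
    (Illustrative sketch -- not InfluxDB's actual API; `zone_map`
    is an assumed lookup from sensor id to zone name.)"""
    if point.get("type") == "heartbeat":          # discard noisy heartbeats
        return None
    celsius = (point["temp_f"] - 32) * 5 / 9      # unit conversion
    return {
        "sensor": point["sensor"],
        "zone": zone_map.get(point["sensor"], "unknown"),  # enrichment
        "temp_c": round(celsius, 2),
    }

zone_map = {"s1": "paint-line"}
stream = [
    {"sensor": "s1", "type": "reading", "temp_f": 212.0},
    {"sensor": "s1", "type": "heartbeat"},
]
stored = [p for p in (transform(pt, zone_map) for pt in stream) if p]
print(stored)  # [{'sensor': 's1', 'zone': 'paint-line', 'temp_c': 100.0}]
```

Only the cleaned, enriched reading reaches long-term storage; the heartbeat never does, which is what keeps dashboards clean without manual correction afterward.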

Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: Quantum key distribution (QKD) and post quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later they’re vulnerable, incompatible with emerging standards, or impractical at scale. This could have the opposite effect and could inadvertently increase attack surface and bring severe operational headaches, ironically becoming less secure. But delaying migration for too long also poses serious risks. Malicious actors could be already harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, intellectual property cannot afford indefinite delays.


Sovereign by Design: Data Control in a Borderless World

The regulatory framework for digital sovereignty is a national priority. The EU has set the pace with GDPR and GAIA-X. It prioritizes data residency and local infrastructure. China's cybersecurity law and personal information protection law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's federal law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's privacy act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's personal data protection law enforces localization for sensitive sectors, and Indonesia's personal data protection law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes standardize dimensions and present the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve: the data warehouse accommodates emerging business contexts through the concept of slowly changing dimensions (SCD). ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include a star or snowflake schema around a fact; when organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how technology needs to build a data warehouse architecture, or one of its components, using good design and implementation.
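The fact/dimension separation described above can be sketched in a few lines of SQL. This is a minimal, illustrative star schema built in SQLite; the table and column names are invented for the example, not drawn from the article.

```python
import sqlite3

# Illustrative star schema: one fact table (sales) referencing two
# dimension tables (date and product). All names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_date TEXT, quarter TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales  (date_key INTEGER REFERENCES dim_date,
                          product_key INTEGER REFERENCES dim_product,
                          units INTEGER, revenue REAL);
""")
conn.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                 [(1, "2025-07-01", "Q3"), (2, "2025-07-02", "Q3")])
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(10, "Widget", "Hardware"), (11, "Gadget", "Hardware")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(1, 10, 5, 50.0), (1, 11, 2, 40.0), (2, 10, 3, 30.0)])

# "Slice and dice": aggregate numeric facts by a descriptive dimension.
def revenue_by_product(conn):
    rows = conn.execute("""
        SELECT p.name, SUM(f.revenue)
        FROM fact_sales f JOIN dim_product p USING (product_key)
        GROUP BY p.name ORDER BY p.name
    """).fetchall()
    return dict(rows)
```

Because dimensions live in their own tables, a new analytical question (revenue by quarter, say) is just a different join, which is the flexibility the excerpt describes.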


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."
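The underlying weakness is a general one: a trust badge derived from fields an attacker can influence, rather than from the reviewed content itself. The sketch below is a conceptual illustration of that principle (it is not the actual marketplace protocol OX analyzed; the function and data names are invented).

```python
import hashlib

# Conceptual illustration only: if "verified" status is keyed off a
# mutable identifier, a tampered extension keeps its badge; binding the
# badge to a content hash taken at review time does not.
REVIEWED = {"good-extension": hashlib.sha256(b"original code").hexdigest()}

def verified_by_name(request):
    # Weak: trusts an attacker-controllable identifier only.
    return request["name"] in REVIEWED

def verified_by_hash(request):
    # Stronger: badge holds only if the shipped bytes match review time.
    expected = REVIEWED.get(request["name"])
    return expected == hashlib.sha256(request["code"]).hexdigest()

tampered = {"name": "good-extension", "code": b"original code + malware"}
```

The same name-based check that passed for the original extension passes for the tampered one, which is why the researchers advise against relying solely on the verified symbol.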


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure their organizations, with another report from the World Economic Forum showing that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment.
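The Tier 1 filtering described above can be caricatured as a scoring function: noise is auto-closed, and only alerts that clear a threshold reach an analyst. The rules, fields, and threshold below are invented for illustration; a production AI SOC analyst would reason over far richer context.

```python
# Hypothetical noisy rule names and scoring weights, for illustration.
NOISY_RULES = {"failed_login_single", "dns_lookup_newly_seen"}

def triage(alert):
    """Score an alert dict and decide whether a human should see it."""
    score = 0
    if alert["rule"] not in NOISY_RULES:
        score += 2  # non-routine detection logic fired
    if alert.get("asset_criticality") == "high":
        score += 2  # business-critical system involved
    if alert.get("corroborating_alerts", 0) >= 2:
        score += 1  # multiple signals point the same way
    return "escalate" if score >= 3 else "auto-close"
```

Even this toy version shows the economics: every auto-closed alert is analyst time returned to investigations that need human judgment.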


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, the argument goes, because they force you to serialize your types into generic graph objects (read: JSON, XML, or something similar). But this implies that you can simply transform your classes into a generic graph object at their interface edges and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid points about scaling out horizontally by segregating functionality onto different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (take your pick!). Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution" ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.
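The author's point about in-process decoupling can be sketched generically: if the only thing crossing a component boundary is a serialized graph object, the caller and callee share no types, with or without a network in between. The slot registry below is an illustrative stand-in, not the author's actual Active Events implementation.

```python
import json

# Illustrative "slots" registry: components are invoked by name, and the
# only thing crossing the boundary is a JSON string, exactly as with a
# microservice call, minus the network hop.
SLOTS = {}

def slot(name):
    def register(fn):
        SLOTS[name] = fn
        return fn
    return register

def invoke(name, payload_obj):
    # Serialize at the edge, call in-process, deserialize the result.
    result_json = SLOTS[name](json.dumps(payload_obj))
    return json.loads(result_json)

@slot("order.total")
def order_total(payload_json):
    order = json.loads(payload_json)
    total = sum(line["qty"] * line["price"] for line in order["lines"])
    return json.dumps({"total": total})
```

Whether the two sides live in one process or two, the coupling surface is identical: a name and a JSON contract.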


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the devices Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. It also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States for every vendor except Unitronics, whose devices are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - July 02, 2025


Quote for the day:

"Success is not the absence of failure; it's the persistence through failure." -- Aisha Tyler


How cybersecurity leaders can defend against the spur of AI-driven NHI

Many companies don’t have lifecycle management for all their machine identities and security teams may be reluctant to shut down old accounts because doing so might break critical business processes. ... Access-management systems that provide one-time-use credentials to be used exactly when they are needed are cumbersome to set up. And some systems come with default logins like “admin” that are never changed. ... AI agents are the next step in the evolution of generative AI. Unlike chatbots, which only work with company data when provided by a user or an augmented prompt, agents are typically more autonomous, and can go out and find needed information on their own. This means that they need access to enterprise systems, at a level that would allow them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly “it opens up the door to a lot of bad things to happen.” Because of their ability to plan, reason, act, and learn AI agents can exhibit unpredictable and emergent behaviors. An AI agent that’s been instructed to accomplish a particular goal might find a way to do it in an unanticipated way, and with unanticipated consequences. This risk is magnified even further, with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes. 


The silent backbone of 5G & beyond: How network APIs are powering the future of connectivity

Network APIs are fueling a transformation by making telecom networks programmable and monetisable platforms that accelerate innovation, improve customer experiences, and open new revenue streams.  ... Contextual intelligence is what makes these new-generation APIs so attractive. Your needs change significantly depending on whether you’re playing a cloud game, streaming a match, or participating in a remote meeting. Programmable networks can now detect these needs and adjust dynamically. Take the example of a user streaming a football match. With network APIs, a telecom operator can offer temporary bandwidth boosts just for the game’s duration. Once it ends, the network automatically reverts to the user’s standard plan—no friction, no intervention. ... Programmable networks are expected to have the greatest impact in Industry 4.0, which goes beyond consumer applications. ... 5G combined with IoT and network APIs enables industrial systems to become truly connected and intelligent. Remote monitoring of manufacturing equipment allows for real-time maintenance schedule adjustments based on machine behavior. Over a programmable, secure network, an API-triggered alert can coordinate a remote diagnostic session and even start remedial actions if a fault is found.
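The match-day example reduces to a time-bounded "quality on demand" session on the operator side. The sketch below models that logic; the class, fields, and numbers are invented for illustration (real deployments expose similar ideas through standardized network API families such as CAMARA's Quality on Demand).

```python
# Hypothetical operator-side model of a temporary bandwidth boost.
DEFAULT_MBPS = 50  # illustrative standard-plan bandwidth

class QoDSessions:
    def __init__(self):
        self._sessions = {}  # subscriber -> (boost_mbps, expires_at)

    def create(self, subscriber, boost_mbps, duration_s, now):
        """API call at kickoff: boost this subscriber for duration_s."""
        self._sessions[subscriber] = (boost_mbps, now + duration_s)

    def bandwidth(self, subscriber, now):
        """Effective bandwidth; lapses back to the plan automatically."""
        boost, expires = self._sessions.get(subscriber, (None, 0))
        return boost if boost and now < expires else DEFAULT_MBPS
```

The "no friction, no intervention" property falls out of the expiry check: nothing has to tear the session down when the match ends.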


Quantum Computers Just Reached the Holy Grail – No Assumptions, No Limits

A breakthrough led by Daniel Lidar, a professor of engineering at USC and an expert in quantum error correction, has pushed quantum computing past a key milestone. Working with researchers from USC and Johns Hopkins, Lidar’s team demonstrated a powerful exponential speedup using two of IBM’s 127-qubit Eagle quantum processors — all operated remotely through the cloud. Their results were published in the prestigious journal Physical Review X. “There have previously been demonstrations of more modest types of speedups, like a polynomial speedup,” says Lidar, who is also the cofounder of Quantum Elements, Inc. “But an exponential speedup is the most dramatic type of speedup that we expect to see from quantum computers.” ... What makes a speedup “unconditional,” Lidar explains, is that it doesn’t rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of “Simon’s problem,” an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.
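For readers unfamiliar with Simon's problem: given a black-box function f with a hidden bitstring s such that f(x) = f(x XOR s), the task is to find s. Classically this requires exponentially many queries in the bit-length n (you must hunt for a collision), while Simon's quantum algorithm needs only on the order of n — that query gap is the kind of exponential separation the article discusses. A classical illustration, with an arbitrary example period:

```python
def make_f(s, n):
    """Build a 2-to-1 function on n-bit inputs with hidden period s != 0."""
    table, next_label = {}, 0
    for x in range(2 ** n):
        if x not in table:
            table[x] = table[x ^ s] = next_label  # pair (x, x^s) collide
            next_label += 1
    return lambda x: table[x]

def find_period_classically(f, n):
    """Brute-force collision search: f(x) == f(y) implies s = x ^ y."""
    seen = {}
    for x in range(2 ** n):
        y = f(x)
        if y in seen:
            return seen[y] ^ x
        seen[y] = x
```

The loop above may touch a sizable fraction of all 2^n inputs before a collision appears, which is exactly the cost the quantum algorithm avoids.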


4 things that make an AI strategy work in the short and long term

Most AI gains came from embedding tools like Microsoft Copilot, GitHub Copilot, and OpenAI APIs into existing workflows. Aviad Almagor, VP of technology innovation at tech company Trimble, also notes that more than 90% of Trimble engineers use GitHub Copilot. The ROI, he says, is evident in shorter development cycles, and reduced friction in HR and customer service. Moreover, Trimble has introduced AI into their transportation management system, where AI agents optimize freight procurement by dynamically matching shippers and carriers. ... While analysts often lament the difficulty of showing short-term ROI for AI projects, these four organizations disagree — at least in part. Their secret: flexible thinking and diverse metrics. They view ROI not only as dollars saved or earned, but also as time saved, satisfaction increased, and strategic flexibility gained. London says that Upwave listens for customer signals like positive feedback, contract renewals, and increased engagement with AI-generated content. Given the low cost of implementing prebuilt AI models, even modest wins yield high returns. For example, if a customer cites an AI-generated feature as a reason to renew or expand their contract, that’s taken as a strong ROI indicator. Trimble uses lifecycle metrics in engineering and operations. For instance, one customer used Trimble AI tools to reduce the time it took to perform a tunnel safety analysis from 30 minutes to just three.


How IT Leaders Can Rise to a CIO or Other C-level Position

For any IT professional who aspires to become a CIO, the key is to start thinking like a business leader, not just a technologist, says Antony Marceles, a technology consultant and founder of software staffing firm Pumex. "This means taking every opportunity to understand the why behind the technology, how it impacts revenue, operations, and customer experience," he explained in an email. The most successful tech leaders aren't necessarily great technical experts, but they possess the ability to translate tech speak into business strategy, Marceles says, adding that "Volunteering for cross-functional projects and asking to sit in on executive discussions can give you that perspective." ... CIOs rarely have solo success stories; they're built up by the teams around them, Marceles says. "Colleagues can support a future CIO by giving honest feedback, nominating them for opportunities, and looping them into strategic conversations." Networking also plays a pivotal role in career advancement, not just for exposure, but for learning how other organizations approach IT leadership, he adds. Don't underestimate the power of having an executive sponsor, someone who can speak to your capabilities when you’re not there to speak for yourself, Eidem says. "The combination of delivering value and having someone champion that value -- that's what creates real upward momentum."


SLMs vs. LLMs: Efficiency and adaptability take centre stage

SLMs are becoming central to Agentic AI systems due to their inherent efficiency and adaptability. Agentic AI systems typically involve multiple autonomous agents that collaborate on complex, multi-step tasks and interact with environments. Fine-tuning methods like Reinforcement Learning (RL) effectively imbue SLMs with task-specific knowledge and external tool-use capabilities, which are crucial for agentic operations. This enables SLMs to be efficiently deployed for real-time interactions and adaptive workflow automation, overcoming the prohibitive costs and latency often associated with larger models in agentic contexts. ... Operating entirely on-premises ensures that decisions are made instantly at the data source, eliminating network delays and safeguarding sensitive information. This enables timely interpretation of equipment alerts, detection of inventory issues, and real-time workflow adjustments, supporting faster and more secure enterprise operations. SLMs also enable real-time reasoning and decision-making through advanced fine-tuning, especially Reinforcement Learning. RL allows SLMs to learn from verifiable rewards, teaching them to reason through complex problems, choose optimal paths, and effectively use external tools. 


Quantum’s quandary: racing toward reality or stuck in hyperbole?

One important reason is for researchers to demonstrate their advances and show that they are adding value. Quantum computing research requires significant expenditure, and the return on investment will be substantial if a quantum computer can solve problems previously deemed unsolvable. However, this return is not assured, nor is the timeframe for when a useful quantum computer might be achievable. To continue to receive funding and backing for what ultimately is a gamble, researchers need to show progress — to their bosses, investors, and stakeholders. ... As soon as such announcements are made, scientists and researchers scrutinize them for weaknesses and hyperbole. The benchmarks used for these tests are subject to immense debate, with many critics arguing that the computations are not practical problems or that success in one problem does not imply broader applicability. In Microsoft’s case, a lack of peer-reviewed data means there is uncertainty about whether the Majorana particle even exists beyond theory. The scientific method encourages debate and repetition, with the aim of reaching a consensus on what is true. However, in quantum computing, marketing hype and the need to demonstrate advancement take priority over the verification of claims, making it difficult to place these announcements in the context of the bigger picture.


Ethical AI for Product Owners and Product Managers

As the product and customer information steward, the PO/PM must lead the process of protecting sensitive data. The Product Backlog often contains confidential customer feedback, competitive analysis, and strategic plans that cannot be exposed. This guardrail requires establishing clear protocols for what data can be shared with AI tools. A practical first step is to lead the team in a data classification exercise, categorizing information as Public, Internal, or Restricted. Any data classified for internal use, such as direct customer quotes, must be anonymized before being used in an AI prompt. ... AI is proficient at generating text but possesses no real-world experience, empathy, or strategic insight. This guardrail involves proactively defining the unique, high-value work that AI can assist but never replace. Product leaders should clearly delineate between AI-optimal tasks, creating first drafts of technical user stories, summarizing feedback themes, or checking for consistency across Product Backlog items and PO/PM-essential areas. These human-centric responsibilities include building genuine empathy through stakeholder interviews, making difficult strategic prioritization trade-offs, negotiating scope, resolving conflicting stakeholder needs, and communicating the product vision. By modeling this partnership and using AI as an assistant to prepare for strategic work, the PO/PM reinforces that their core value lies in strategy, relationships, and empathy.
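The "anonymize before prompting" protocol above can be made concrete with a small redaction pass over Internal-classified text. The patterns below are deliberately simplistic illustrations; a real deployment would rely on a vetted PII-detection library plus human review rather than three regexes.

```python
import re

# Illustrative direct-identifier patterns: email, phone, titled name.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?\d[\d \-]{8,}\d)\b"), "[PHONE]"),
    (re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.\s+[A-Z][a-z]+\b"), "[NAME]"),
]

def anonymize(text):
    """Replace obvious direct identifiers before text reaches an AI tool."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text
```

Even a rough pass like this enforces the classification exercise's intent: customer quotes can inform a prompt without exposing who said them.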


Sharded vs. Distributed: The Math Behind Resilience and High Availability

In probability theory, independent events are events whose outcomes do not affect each other. For example, when throwing four dice, the number displayed on each die is independent of the other three. Similarly, the availability of each server in a six-node application-sharded cluster is independent of the others. This means that each server has an individual probability of being available or unavailable, and the failure of one server is not affected by the failure or otherwise of other servers in the cluster. In reality, there may be shared resources or shared infrastructure that links the availability of one server to another. In mathematical terms, this means that the events are dependent. However, we consider the probability of these types of failures to be low, and therefore, we do not take them into account in this analysis.  ... Traditional architectures are limited by single-node failure risk. Application-level sharding compounds this problem because if any node goes down, its shard, and therefore the total system, becomes unavailable. In contrast, distributed databases with quorum-based consensus (like YugabyteDB) provide fault tolerance and scalability, enabling higher resilience and improved availability.
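Under the independence assumption, the arithmetic is short. If each node is available with probability p, an application-sharded system of n nodes is up only when every node is up (p^n), while a replication-factor-3 quorum group stays up whenever at least 2 of its 3 replicas are up. The 99% figure below is illustrative, not a vendor benchmark.

```python
def sharded_availability(p, n):
    # Every shard is a single point of failure for the whole system.
    return p ** n

def quorum_group_availability(p):
    # All 3 replicas up, or exactly 2 up and 1 down (3 ways to pick it).
    return p ** 3 + 3 * (p ** 2) * (1 - p)

p = 0.99                                  # per-node availability
six_shards = sharded_availability(p, 6)   # roughly 0.941
quorum = quorum_group_availability(p)     # roughly 0.9997
```

Six independently fatal shards turn 99% nodes into a ~94% system, while the quorum group is more available than any single node — which is the whole resilience argument in two formulas.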


How FinTechs are turning GRC into a strategic enabler

The misconception that risk management and innovation exist in tension is one that modern FinTechs must move beyond. At its core, cybersecurity – when thoughtfully integrated – serves not as a brake but as an enabler of innovation. The key is to design governance structures that are both intelligent and adaptive (and resilient in themselves). The foundation lies in aligning cybersecurity risk management with the broader business objective: enablement. This means integrating security thinking early in the innovation cycle, using standardized interfaces, expectations, and frameworks that don’t obstruct, but rather channel innovation safely. For instance, when risk statements are defined consistently across teams, decisions can be made faster and with greater confidence. Critically, it starts with the threat model. A well-defined, enterprise-level threat model is the compass that guides risk assessments and controls where they matter most. Yet many companies still operate without a clear articulation of their own threat landscape, leaving their enterprise risk strategies untethered from reality. Without this grounding, risk management becomes either overly cautious or blindly permissive, or a bit of both. We place a strong emphasis on bridging the traditional silos between GRC, IT Security, Red Teaming, and Operational teams.

Daily Tech Digest - July 01, 2025


Quote for the day:

"Listen with curiosity, speak with honesty, act with integrity." -- Roy T. Bennett


CIOs rethink public cloud as AI strategies mature

Regulatory and compliance concerns are a big driver toward the private cloud or on-premises solutions, says Bastien Aerni, vice president of strategy and technology adoption at GTT. Many companies are shifting their sensitive workloads to private clouds as a piece of broader multicloud and hybrid strategies to support agentic AI and other complex AI initiatives, he adds. “Most of the time, AI is touching confidential data or business-critical data,” Aerni says. “Then the thinking about the architecture and what the workload should be public vs. private, or even on-prem, is becoming a true question.” The public cloud still provides maximum scalability for AI projects, and in recent years, CIOs have been persuaded by the number of extra capabilities available there, he says. “In some of the conversations I had with CIOs, let’s say five years ago, they were mentioning, ‘There are so many features, so many tools,’” Aerni adds. ... “The paradox is clear: AI workloads are driving both massive cloud growth and selective repatriation simultaneously, because the market is expanding so rapidly it’s accommodating multiple deployment models at once,” Kirschner says. “What we are seeing is the maturation from a naive ‘everything-to-the-cloud’ strategy toward intelligent, workload-specific decisions.”


India’s DPDP law puts HR under the microscope—Here’s why that’s a good thing

At first glance, DPDP appears to mirror other data privacy frameworks like GDPR or CCPA. There’s talk of consent, purpose limitation, secure storage, and rights of the data principal (i.e., the individual). But the Indian legislation’s implications ripple far beyond IT configurations or privacy policies. “Mention data protection, and it often gets handed off to the legal or IT teams,” says Gupta. “But that misses the point. Every team that touches personal data is responsible under this law.” For HR departments, this shift is seismic. Gupta underscores how HR sits atop a “goldmine” of personal information—addresses, Aadhaar numbers, medical history, performance reviews, family details, even biometric data in some cases. And this isn't limited to employees; applicants and former workers are also in scope. ... With India housing thousands of global capability centres and outsourcing hubs, DPDP challenges multinationals to look inward. The emphasis so far has been on protecting customer data under global laws like GDPR. But now, internal data practices—especially around employees—are under the scanner. “DPDP is turning the lens inward,” says Gupta. “If your GCC in India tightens data practices, it won’t make sense to be lax elsewhere.”


3 ways developers should rethink their data stack for GenAI success

Traditional data stacks optimized for analytics, for the most part, don’t naturally support the vector search and semantic retrieval patterns that GenAI applications require. Thus, real-time GenAI data architectures need native support for embedding generation and vector storage as first-class citizens. This could mean integrating data with vector databases like Pinecone, Weaviate, or Chroma as part of the core infrastructure. It may also mean searching for multi-modal databases that can support all of your required data types out of the box without needing a bunch of separate platforms. Regardless of the underlying infrastructure, plan for needing hybrid search capabilities that combine traditional keyword search with semantic similarity, and consider how you’ll handle embedding model updates and re-indexing. ... Maintaining data relationships and ensuring consistent access patterns across these different storage systems is the real challenge when working with these various data types. While some platforms are beginning to offer enhanced vector search capabilities that can work across different data types, most organizations still need to architect solutions that coordinate multiple storage systems. The key is to design these multi-modal capabilities into your data stack early, rather than trying to bolt them on later when your GenAI applications demand richer data integration. 
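The hybrid search capability mentioned above combines a lexical score with a semantic one. A toy sketch, using hand-made two-dimensional "embeddings" in place of a real embedding model; the 0.5/0.5 weighting is arbitrary and would be tuned in practice.

```python
import math
from collections import Counter

def keyword_score(query, doc):
    """Fraction of query terms that appear in the document (lexical signal)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values()) / max(len(query.split()), 1)

def cosine(u, v):
    """Cosine similarity between two embedding vectors (semantic signal)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def hybrid_rank(query, query_vec, docs):
    """docs: list of (text, embedding) pairs; best combined score first."""
    scored = [(0.5 * keyword_score(query, text) + 0.5 * cosine(query_vec, vec), text)
              for text, vec in docs]
    return [text for score, text in sorted(scored, reverse=True)]
```

Production systems push both signals into the database layer (e.g., BM25 plus an approximate-nearest-neighbor index), but the fusion logic is the same shape as this sketch.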


Cyber Hygiene Protecting Your Digital and Financial Health

Digital transformation has reshaped the commercial world, integrating technology into nearly every aspect of operations. That has brought incredible opportunities, but it has also opened doors to new threats. Cyber attacks are more frequent and sophisticated, with malevolent actors targeting everyone from individuals to major corporations and entire countries. It is no exaggeration to say that establishing, and maintaining, effective cyber hygiene has become indispensable. According to Microsoft’s 2023 Digital Defense Report, effective cyber hygiene could prevent 99% of cyber attacks. Yet cyber hygiene is not just about preventing attacks, it is also central to maintaining operational stability and resilience in the event of a cyber breach. In that event robust cyber hygiene can limit the operational, financial, and reputational impact of a cyber attack, thereby enhancing an entity’s overall risk profile. ... Even though it’s critical, data suggests that many organizations struggle to implement even basic cyber security measures effectively. For example, a 2024 survey by ExtraHop, a Seattle-based cyber security services provider, found that over half of the respondents admitted to using at least one unsecured network protocol, making them susceptible to attacks.


Are Data Engineers Sleepwalking Towards AI Catastrophe?

Data engineers are already overworked. Weigel cited a study that indicated 80% of data engineering teams are already overloaded. But when you add AI and unstructured data to the mix, the workload issue becomes even more acute. Agentic AI provides a potential solution. It’s natural that overworked data engineering teams will turn to AI for help. There’s a bevy of providers building copilots and swarms of AI agents that, ostensibly, can build, deploy, monitor, and fix data pipelines when they break. We are already seeing agentic AI have real impacts on data engineering teams, as well as the downstream data analysts who ultimately are the ones requesting the data in the first place. ... Once human data engineers are out of the loop, bad things can start happening, Weigel said. They potentially face a situation where the volume of data requests–which originally were served by human data engineers but now are being served by AI agents–is beyond their capability to keep up. ... “We’re now back in the dark ages, where we were 10 years ago [when we wondered] why we need data warehouses,” he said. “I know that if person A, B, and C ask a question, and previously they wrote their own queries, they got different results. Right now, we ask the same agent the same question, and because they’re non-deterministic, they will actually create different queries every time you ask it.”


How cybercriminals are weaponizing AI and what CISOs should do about it

Security teams are using AI to keep up with the pace of AI-powered cybercrime, scanning large volumes of data to surface threats earlier. AI helps scan massive amounts of threat data, surface patterns, and prioritize investigations. For example, analysts used AI to uncover a threat actor’s alternate Telegram channels, saving significant manual effort. Another use case: linking sockpuppet accounts. By analyzing slang, emojis, and writing styles, AI can help uncover connections between fake personas, even when their names and avatars are different. AI also flags when a new tactic starts gaining traction on forums or social media. ... As more defenders turn to AI to make sense of vast amounts of threat data, it’s easy to assume that LLMs can handle everything on their own. But interpreting chatter from the underground is not something AI can do well without help. “This diffuse environment, rich in vernacular and slang, poses a hurdle for LLMs that are typically trained on more generic or public internet data,” Ian Gray, VP of Cyber Threat Intelligence at Flashpoint, told Help Net Security. The problem goes deeper than just slang. Threat actors often communicate across multiple niche platforms, each with its own shorthand and tone. 


How To Keep AI From Making Us Stupid

The allure of AI is undeniable. It drafts emails, summarizes lengthy reports, generates code snippets, and even whips up images faster than you can say “neural network.” This unprecedented convenience, however, carries a subtle but potent risk. A study from MIT has highlighted concerns that overuse of AI tools might be degrading our thinking capabilities. That degradation is the digital equivalent of using a GPS so much that you forget how to read a map. Suddenly, your internal compass points vaguely toward convenience and not much else. When we offload critical cognitive tasks entirely to AI (a habit researchers call cognitive offloading), our muscles for those tasks can begin to atrophy. ... Treat AI-generated content like a highly caffeinated first draft — full of energy but possibly a little messy and prone to making things up. Your job isn’t to simply hit “generate” and walk away, unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss. Or worse, your audience. Always, always, aggressively edit, proofread, and, most critically, fact-check every single output. ... The real risk isn’t AI taking over our jobs; it’s us letting AI take over our brains. To maintain your analytical edge, continuously challenge yourself. Practice skills that AI complements but doesn’t replace, such as critical thinking, complex problem-solving, nuanced synthesis, ethical judgment, and genuine human creativity.


Governance meets innovation: Protiviti’s strategy for secure, scalable growth in BFSI and beyond

In today’s BFSI landscape, technology alone is no longer a differentiator. True competitive advantage lies in the orchestration of innovation with governance. The deployment of AI in underwriting, the migration of customer data to the cloud, or the use of IoT in insurance all bring immense opportunity—but also profound risks. Without strong guardrails, these initiatives can expose firms to cyber threats, data sovereignty violations, and regulatory scrutiny. Innovation without governance is a gamble; governance without innovation is a graveyard. ... In cloud transformation projects, for instance, we work with clients to proactively assess data localisation risks, cloud governance maturity, and third-party exposures, ensuring resilience is designed from day one. As AI adoption scales across financial services, we bring deep expertise in Responsible AI governance. From ethical frameworks and model explainability to regulatory alignment with India’s DPDP Act and the EU AI Act, our solutions ensure that automated systems remain transparent, auditable, and trustworthy. Our AI risk models integrate regulatory logic into system design, bridging the gap between innovation and accountability.


Cybercriminals take malicious AI to the next level

Cybercriminals are tailoring AI models for specific fraud schemes, including generating phishing emails tailored by sector or language, as well as writing fake job posts, invoices, or verification prompts. “Some vendors even market these tools with tiered pricing, API access, and private key licensing, mirroring the [legitimate] SaaS economy,” Flashpoint researchers found. “This specialization leads to potentially greater success rates and automated complex attack stages,” Flashpoint’s Gray tells CSO. ... Cybercrime vendors are also lowering the barrier for creating synthetic video and voice, with deepfake as a service (DaaS) offerings ... “This ‘prompt engineering as a service’ (PEaaS) lowers the barrier for entry, allowing a wider range of actors to leverage sophisticated AI capabilities through pre-packaged malicious prompts,” Gray warns. “Together, these trends create an adaptive threat: tailored models become more potent when refined with illicit data, PEaaS expands the reach of threat actors, and the continuous refinement ensures constant evolution against defenses,” he says. ... Enterprises need to balance automation with expert analysis, separating hype from reality, and continuously adapt to the rapidly evolving threat landscape. “Defenders should start by viewing AI as an augmentation of human expertise, not a replacement,” Flashpoint’s Gray says. 


“DevOps is Dead? Long Live DevOps-Powered Platforms”

If DevOps and platform engineering needed a common enemy — or ally — to bond over, AI provided it. A panel featuring Nvidia, Google, Rootly and Thoughtworks explained how large language models are automating “the last mile” of toil, from incident response bots that reason over Grafana dashboards to code-gen pipelines that spit out compliant Terraform. ... The logic is straightforward: You can’t automate what you can’t see. For DevOps practitioners, high-fidelity telemetry is now table stakes — whether you’re feeding an agentic AI, debugging an ephemeral sandbox, or proving compliance to auditors. Expect platform blueprints to ship with observability baked in, not bolted on. Look at the badges behind every coffee urn and you’ll spot familiar DevOps and DevSecOps logos — GitHub Actions, Mezmo, Teleport, Cortex, Sedai, Tailscale. Many of these vendors cut their teeth in CI/CD, IaC, or shift-left security long before “platform engineering” was a LinkedIn hashtag. ... So why the funeral garb? My guess: A tongue-in-cheek jab at hype cycles. Just as “DevOps is dead” clickbait pushed us to sharpen our message, the sash was a reminder that real value — not buzzwords — keeps a movement alive. Judging by the hallway traffic and workshop queues, platform engineering is passing that test.

Daily Tech Digest - June 30, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The first step in modernization: Ditching technical debt

At a high level, determining when it’s time to modernize is about quantifying cost, risk, and complexity. In dollar terms, it may seem as simple as comparing the expense of maintaining legacy systems versus investing in new architecture. But the true calculation includes hidden costs, like the developer hours lost to patching outdated systems, and the opportunity cost of not being able to adapt quickly to business needs. True modernization is not a lift-and-shift — it’s a full-stack transformation. That means breaking apart monolithic applications into scalable microservices, rewriting outdated application code into modern languages, and replacing rigid relational data models with flexible, cloud-native platforms that support real-time data access, global scalability, and developer agility. Many organizations have partnered with MongoDB to achieve this kind of transformation. ... But modernization projects are usually a balancing act, and replacing everything at once can be a gargantuan task. Choosing how to tackle the problem comes down to priorities, determining where pain points exist and where the biggest impacts to the business will be. The cost of doing nothing will outweigh the cost of doing something.
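The relational-to-document shift the article describes can be made concrete with a small sketch. The shapes and field names below are illustrative, not taken from any particular system: a normalized layout reassembles an order with a join-style scan, while a document layout stores the order the way the application reads it.

```python
# Relational shape: an order is scattered across normalized tables and
# reassembled at read time by matching keys (a join, in effect).
orders = [{"order_id": 1, "customer_id": 7}]
line_items = [
    {"order_id": 1, "sku": "A-100", "qty": 2},
    {"order_id": 1, "sku": "B-200", "qty": 1},
]

def total_items_relational(order_id: int) -> int:
    # Scan the child table for matching rows, as a SQL join would.
    return sum(li["qty"] for li in line_items if li["order_id"] == order_id)

# Document shape: the order and its line items live together, matching the
# access pattern -- one lookup, no join, easy to shard and evolve.
order_doc = {
    "order_id": 1,
    "customer_id": 7,
    "line_items": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-200", "qty": 1},
    ],
}

def total_items_document(doc: dict) -> int:
    return sum(li["qty"] for li in doc["line_items"])

assert total_items_relational(1) == total_items_document(order_doc) == 3
```

The trade-off is the usual one: embedding optimizes for the read path at the cost of some duplication, which is why the modeling decision follows the workload rather than a universal rule.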


Is Your CISO Ready to Flee?

“A well-funded CISO with an under-resourced security team won’t be effective. The focus should be on building organizational capability, not just boosting top salaries.” While Deepwatch CISO Chad Cragle believes any CISO just in the role for the money has “already lost sight of what really matters,” he agrees that “without the right team, tools, or board access, burnout is inevitable.” Real impact, he contends, “only happens when security is valued and you’re empowered to lead.” Perhaps that stands as evidence that SMBs that want to retain their talent or attract others should treat the CISO holistically. “True professional fulfillment and long-term happiness in the CISO role stems from the opportunities for leadership, personal and professional growth, and, most importantly, the success of the cybersecurity program itself,” says Black Duck CISO Bruce Jenkins. “When cyber leaders prioritize the development and execution of a comprehensive, efficient, and effective program that delivers demonstrable value to the business, appropriate compensation typically follows as a natural consequence.” One concern around budget constraints is that all CISOs at this point (private AND public sector) have been through zero-based budget reviews several times. If the CISO feels unsafe and unable to execute, they will be incentivized to find a safer seat with an org more prepared to invest in security programs.


AI is learning to lie, scheme, and threaten its creators

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception." The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." ... "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around." Researchers are exploring various approaches to address these challenges.


The network is indeed trying to become the computer

Think of the scale-up networks such as the NVLink ports and NVLink Switch fabrics that are part and parcel of a GPU-accelerated server node – or, these days, a rackscale system like the DGX NVL72 and its OEM and ODM clones. These memory-sharing networks are vital for ever-embiggening AI training and inference workloads. As their parameter counts and token throughput requirements both rise, they need ever-larger memory domains to do their work. Throw in mixture-of-experts models and the need for larger, fatter, and faster scale-up networks, as they are now called, is obvious even to an AI model with only 7 billion parameters. ... Then there is the scale-out network, which is used to link nodes in distributed systems to each other to share work in a less tightly coupled way than the scale-up network affords. This is the normal networking we are familiar with in distributed HPC systems, which is normally Ethernet or InfiniBand and sometimes proprietary networks like those from Cray, SGI, Fujitsu, NEC, and others from days gone by. On top of this, we have the normal north-south networking stack that allows people to connect to systems and the east-west networks that allow distributed corporate systems running databases, web infrastructure, and other front-office systems to communicate with each other.


What Can We Learn From History’s Most Bizarre Software Bugs?

“It’s never just one thing that causes failure in complex systems.” In risk management, this is known as the Swiss cheese model, where flaws that occur in one layer aren’t as dangerous as deeper flaws overlapping through multiple layers. And as the Boeing crash proves, “When all of them align, that’s what made it so deadly.” It is difficult to test for every scenario. After all, the more inputs you have, the more possible outputs — and “this is all assuming that your system is deterministic.” Today’s codebases are massive, with many different contributors and entire stacks of infrastructure. “From writing a piece of code locally to running it on a production server, there are a thousand things that could go wrong.” ... It was obviously a communication failure, “because NASA’s navigation team assumed everything was in metric.” But you also need to check the communication that’s happening between the two systems. “If two systems interact, make sure they agree on formats, units, and overall assumptions!” But there’s another even more important lesson to be learned. “The data had shown inconsistencies weeks before the failure,” Bajić says. “NASA had seen small navigation errors, but they weren’t fully investigated.”
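The "agree on formats, units, and overall assumptions" lesson from the NASA incident (a metric/imperial impulse mix-up) can be enforced in code by refusing to do arithmetic on raw floats. A minimal sketch, with an invented `Impulse` value type that normalizes units before adding:

```python
from dataclasses import dataclass

# Conversion factors to newton-seconds, the SI unit of impulse.
# 1 pound-force second = 4.4482216152605 N*s.
_TO_NS = {"N*s": 1.0, "lbf*s": 4.4482216152605}

@dataclass(frozen=True)
class Impulse:
    value: float
    unit: str

    def to_ns(self) -> float:
        """Normalize to newton-seconds; unknown units fail loudly."""
        return self.value * _TO_NS[self.unit]

    def __add__(self, other: "Impulse") -> "Impulse":
        # Addition is only defined after normalizing: no silent unit mixing.
        return Impulse(self.to_ns() + other.to_ns(), "N*s")

ground = Impulse(10.0, "lbf*s")  # one team works in imperial units
flight = Impulse(10.0, "N*s")    # the other assumes metric
total = ground + flight          # the mismatch is reconciled, not ignored
assert abs(total.to_ns() - (10.0 * 4.4482216152605 + 10.0)) < 1e-9
```

Had the interacting systems exchanged typed quantities like this instead of bare numbers, the unit assumption would have been checked at the boundary, which is exactly where the inconsistency data had been accumulating for weeks.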


Europe’s AI strategy: Smart caution or missed opportunity?

Companies in Europe are spending less on AI, cloud platforms, and data infrastructure. In high-tech sectors, productivity growth in the U.S. has far outpaced Europe. The report argues that AI could help close the gap, but only if it is used to redesign how businesses operate. Using AI to automate old processes is not enough. ... Feinberg also notes that many European companies assumed AI apps would be easier to build than traditional software, only to discover they are just as complex, if not more so. This mismatch between expectations and reality has slowed down internal projects. And the problem isn’t unique to Europe. As Oliver Rochford, CEO of Aunoo AI, points out, “AI project failure rates are generally high across the board.” He cites surveys from IBM, Gartner, and others showing that anywhere from 30 to 84 percent of AI projects fail or fall short of expectations. “The most common root causes for AI project failures are also not purely technical, but organizational: misaligned objectives, poor data governance, lack of workforce engagement, and underdeveloped change management processes. Apparently Europe has no monopoly on those.”


A Developer’s Guide to Building Scalable AI: Workflows vs Agents

Sometimes, using an agent is like replacing a microwave with a sous chef — more flexible, but also more expensive, harder to manage, and occasionally makes decisions you didn’t ask for. ... Workflows are orchestrated. You write the logic: maybe retrieve context with a vector store, call a toolchain, then use the LLM to summarize the results. Each step is explicit. It’s like a recipe. If it breaks, you know exactly where it happened — and probably how to fix it. This is what most “RAG pipelines” or prompt chains are. Controlled. Testable. Cost-predictable. The beauty? You can debug them the same way you debug any other software. Stack traces, logs, fallback logic. If the vector search fails, you catch it. If the model response is weird, you reroute it. ... Agents, on the other hand, are built around loops. The LLM gets a goal and starts reasoning about how to achieve it. It picks tools, takes actions, evaluates outcomes, and decides what to do next — all inside a recursive decision-making loop. ... You can’t just set a breakpoint and inspect the stack. The “stack” is inside the model’s context window, and the “variables” are fuzzy thoughts shaped by your prompts. When something goes wrong — and it will — you don’t get a nice red error message. 
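The workflow/agent contrast above can be sketched in a few lines. Everything below is a hypothetical stub (no real vector store or LLM is called); the point is where the control flow lives: in the workflow it is plain code you can step through, while in the agent it sits inside a loop driven by the model.

```python
# Hypothetical stubs standing in for real retrieval and LLM calls.
def vector_search(query: str) -> list[str]:
    return [f"doc about {query}"]

def llm_summarize(context: list[str], query: str) -> str:
    return f"summary of {len(context)} docs for '{query}'"

# Workflow: every step is explicit, so every failure has an address.
def rag_workflow(query: str) -> str:
    try:
        docs = vector_search(query)
    except Exception:
        docs = []                    # fallback logic lives in plain code
    if not docs:
        return "no context found"    # a testable, debuggable branch
    return llm_summarize(docs, query)

# Agent (sketch): a loop in which the model would pick the next action.
# The real "stack" is the model's context window, not your stack trace.
def agent_loop(goal: str, max_steps: int = 5) -> str:
    state = goal
    for _ in range(max_steps):
        # In a real agent, an LLM call would choose the tool and arguments here.
        state = rag_workflow(state)
        if state.startswith("summary"):
            return state             # the agent decides it has met the goal
    return state

assert rag_workflow("pricing").startswith("summary")
```

With the workflow, a bad retrieval hits the `if not docs` branch and you see it in a log line; with the agent, the same failure surfaces as the model "deciding" something odd several steps later, which is precisely the debugging gap the article describes.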


Leveraging Credentials As Unique Identifiers: A Pragmatic Approach To NHI Inventories

Most teams struggle with defining NHIs. The canonical definition is simply "anything that is not a human," which is necessarily a wide set of concerns. NHIs manifest differently across cloud providers, container orchestrators, legacy systems, and edge deployments. A Kubernetes service account tied to a pod has distinct characteristics compared to an Azure managed identity or a Windows service account. Every team has historically managed these as separate concerns. This patchwork approach makes it nearly impossible to create a consistent policy, let alone automate governance across environments. ... Most commonly, this takes the form of secrets, such as API keys, certificates, or tokens. These are all inherently unique and can act as cryptographic fingerprints across distributed systems. When used in this way, secrets used for authentication become traceable artifacts tied directly to the systems that generated them. This allows for a level of attribution and auditing that's difficult to achieve with traditional service accounts. For example, a short-lived token can be directly linked to a specific CI job, Git commit, or workload, allowing teams to answer not just what is acting, but why, where, and on whose behalf.
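The credential-as-identifier idea can be sketched concretely: mint a short-lived credential whose claims tie it to the CI job, commit, and workload that requested it, then key the NHI inventory on a fingerprint of those claims rather than the secret itself. All field names below are illustrative, not a specific product's schema:

```python
import hashlib
import json
import time

def mint_token(ci_job: str, git_commit: str, workload: str, ttl_s: int = 900) -> dict:
    """Sketch of a short-lived credential's claims, bound to its requester."""
    return {
        "sub": workload,
        "ci_job": ci_job,
        "git_commit": git_commit,
        "exp": int(time.time()) + ttl_s,
    }

def fingerprint(token: dict) -> str:
    # Stable hash over the claims: a traceable artifact that can be logged
    # and inventoried without ever exposing the secret material itself.
    canonical = json.dumps(token, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# The inventory maps fingerprints to the identities behind them.
inventory: dict[str, dict] = {}

tok = mint_token("deploy-prod-412", "9fceb02", "payments-service")
inventory[fingerprint(tok)] = tok

# Later, an audit log entry carrying only the fingerprint answers not just
# *what* acted, but which job, commit, and workload it acted on behalf of.
actor = inventory[fingerprint(tok)]
assert actor["ci_job"] == "deploy-prod-412" and actor["git_commit"] == "9fceb02"
```

In practice the claims would be signed (e.g., a JWT from a workload identity provider), but the inventory mechanics are the same: the credential's fingerprint becomes the consistent identifier that spans Kubernetes, cloud, and legacy environments.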


How Is AI Really Impacting Jobs In 2025?

Pessimists warn of potential mass unemployment leading to societal collapse. Optimists predict a new age of augmented working, making us more productive and freeing us to focus on creativity and human interactions. There are plenty of big-picture forecasts. One widely-cited WEF prediction claims AI will eliminate 92 million jobs while creating 170 million new, different opportunities. That doesn’t sound too bad. But what if you’ve worked for 30 years in one of the jobs that’s about to vanish and have no idea how to do any of the new ones? Today, we’re seeing headlines about jobs being lost to AI with increasing frequency. And, from my point of view, not much information about what’s being done to prepare society for this potentially colossal change. ... An exacerbating factor is that many of the roles that are threatened are entry-level, such as junior coders or designers, or low-skill, including call center workers and data entry clerks. This means there’s a danger that AI-driven redundancy will disproportionately hit economically disadvantaged groups. There’s little evidence so far that governments are prioritizing their response. There have been few clearly articulated strategies to manage the displacement of jobs or to protect vulnerable workers.


AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of similar applications and products, tailored and designed for specific sectors, are growing fast. ... If the search for AGI is reminiscent of a cash-rich unicorn aiming for growth at all costs, then AAI is more scrappy. Like a bootstrapped startup that requires immediate profitability, it prizes tangible impact over long-term ambitions to take over the world. The aspirations—and perhaps the algorithms themselves—may be more modest. Still, the context makes them potentially transformative: if reliable and widely adopted, such systems could reach millions of users who have until now been on the margins of the digital economy. ... All this points to a potentially unexpected scenario, one in which the lessons of AI flow not along the usual contours of global geopolitics and economic power—but percolate rather upward, from the laboratories and pilot programs of the Global South toward the boardrooms and research campuses of the North. This doesn’t mean that the quest for AGI is necessarily misguided. It’s possible that AI may yet end up redefining intelligence.