
Daily Tech Digest - September 05, 2025


Quote for the day:

"Little minds are tamed and subdued by misfortune; but great minds rise above it." -- Washington Irving


Understanding Context Engineering: Principles, Practices, and Its Distinction from Prompt Engineering

Context engineering is the strategic design, management, and delivery of relevant information—or “context”—to AI systems in order to guide, constrain, or enhance their behavior. Unlike prompt engineering, which primarily focuses on crafting effective input prompts to direct model outputs, context engineering involves curating, structuring, and governing the broader pool of information that surrounds and informs the AI’s decision-making process. In practice, context engineering requires an understanding of not only what the AI should know at a given moment but also how information should be prioritized, retrieved, and presented. It encompasses everything from assembling relevant documents and dialogue history to establishing policies for data inclusion and exclusion. ...  While there is some overlap between the two domains, context engineering and prompt engineering serve distinct purposes and employ different methodologies. Prompt engineering is concerned with the formulation of the specific text—the “prompt”—that is provided to the model as an immediate input. It is about phrasing questions, instructions, or commands in a way that elicits the desired behavior or output from the AI. Successful prompt engineering involves experimenting with wording, structure, and sometimes even formatting to maximize the performance of the language model on a given task.
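
To make the distinction concrete, here is a minimal, illustrative sketch (in Python) of what a context-engineering layer does before any prompt is sent: it prioritizes a policy, recent dialogue history, and retrieved documents under a size budget, while the prompt itself is only the final element. The class and function names are hypothetical and not taken from any particular framework.

```python
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    """Everything the model will see besides the user's immediate prompt."""
    system_policy: str                               # inclusion/exclusion rules, constraints
    retrieved_docs: list[str] = field(default_factory=list)
    dialogue_history: list[str] = field(default_factory=list)

def build_context(bundle: ContextBundle, user_prompt: str, budget_chars: int = 8000) -> str:
    """Prioritize, trim, and order context before it reaches the model."""
    # Highest priority first: policy, then the most recent history, then retrieved docs.
    parts = [bundle.system_policy]
    parts += bundle.dialogue_history[-4:]            # keep only recent turns
    for doc in bundle.retrieved_docs:
        if sum(len(p) for p in parts) + len(doc) > budget_chars:
            break                                    # enforce the context budget
        parts.append(doc)
    parts.append(user_prompt)                        # prompt engineering happens here, at the end
    return "\n\n---\n\n".join(parts)

bundle = ContextBundle(
    system_policy="Answer only from the provided documents; never include customer PII.",
    retrieved_docs=["Doc A: refund policy ...", "Doc B: shipping SLA ..."],
    dialogue_history=["User: where is my order?", "Assistant: it shipped on Monday."],
)
print(build_context(bundle, "Can I still get a refund?"))
```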


How AI and Blockchain Are Transforming Tenant Verification in India

While artificial intelligence provides both intelligence and speed, Blockchain technology provides the essential foundation of trust and security. Blockchain functions as a permanent digital record – meaning that once information is set, it can’t be changed or deleted by third parties. This feature is particularly groundbreaking for ensuring a safe and clear rental history. Picture this: the rental payments and lease contracts of your tenants could all be documented as ‘smart contracts’ using Blockchain technology. ... The combination of AI and Blockchain signifies a groundbreaking transformation, enabling tenants to create ‘self-sovereign identities’ on the Blockchain — digital wallets that hold their verified credentials, which they fully control. When searching for rental properties, tenants can conveniently provide prospective landlords with access to certain details about themselves, such as their history of timely payments and police records. AI leverages secure and authentic Blockchain data to produce an immediate risk score for landlords to assess, ensuring a quick and reliable evaluation. This cohesive approach guarantees that AI outcomes are both rapid and trustworthy, while the decentralized nature of Blockchain safeguards tenant privacy by removing the necessity for central databases that may become susceptible over time.


Adversarial AI is coming for your applications

New research from Cato Networks’ threat intelligence report revealed how threat actors can use a large language model jailbreak technique, known as an immersive world attack, to get AI to create infostealer malware for them: a threat intelligence researcher with absolutely no malware coding experience managed to jailbreak multiple large language models and get the AI to create a fully functional, highly dangerous password infostealer that compromised sensitive information from the Google Chrome web browser. The end result was malicious code that successfully extracted credentials from the Google Chrome password manager. Companies that create LLMs are trying to put up guardrails, but clearly GenAI can make malware creation that much easier. AI-generated malware, including polymorphic malware, essentially makes signature-based detection nearly obsolete. Enterprises must be prepared to protect against hundreds, if not thousands, of malware variants. ... Enterprises can increase their protection by embedding security directly into applications at the build stage: this involves investing in embedded security that is mapped to OWASP controls, such as RASP, advanced white-box cryptography, and granular threat intelligence. IDC research shows that organizations protecting mobile apps often lack a solution to test them efficiently and effectively.


Top Pitfalls to Avoid When Responding to Cyber Disaster

Moving too quickly following an attack can also prompt staff to respond to an intrusion without first fully understanding the type of ransomware that was used. Not all ransomware is created equal, and knowing whether you were a victim of locker ransomware, double extortion, ransomware-as-a-service, or another kind of attack can make all the difference in how to respond, because the attacker’s goal is different for each. ... The first couple of hours after a ransomware incident is identified are critical. In those immediate hours, work quickly to identify and isolate affected systems and disconnect compromised devices from the network to prevent the ransomware from spreading further. Don’t forget to preserve forensic evidence as you go, such as screenshots and relevant logs, anything that can inform future law enforcement investigations or legal action. Once that has been done, notify the key stakeholders and the cyber insurance provider. ... After the dust settles, analyze how the attack was able to occur and put fixes in place to keep it from happening again. Identify the initial access point and method, and map how the threat actor moved through the network. What barriers were they able to move past, and which held them back? Are there areas where more segmentation is needed to reduce the attack surface? Do any security workflows or policies need to be modified?


How to reclaim control over your online shopping data

“While companies often admit to sharing user data with third parties, it’s nearly impossible to track every recipient. That lack of control creates real vulnerabilities in data privacy management. Very few organizations thoroughly vet their third-party data-sharing practices, which raises accountability concerns and increases the risk of breaches,” said Ian Cohen, CEO of LOKKER. The criminal marketplace for stolen data has exploded in recent years. In 2024, over 6.8 million accounts were listed for sale, and by early 2025, nearly 2.5 million stolen accounts were available at one point. ... Even limited purchase information can prove valuable to criminals. A breach exposing high-value transactions, for example, may suggest a buyer’s financial status or lifestyle. When combined with leaked addresses, that data can help criminals identify and target individuals more precisely, whether for fraud, identity theft, or even physical theft. ... One key mechanism is the right to be forgotten, a legal principle allowing individuals to request the removal of their personal data from online platforms. The European Union’s GDPR is the strongest example of this principle in action. While not as comprehensive as the GDPR, the US has some privacy protections, such as the California Consumer Privacy Act (CCPA), which allow residents to access or delete their personal data.


Mind the Gap: Agentic AI and the Risks of Autonomy

The ink is barely dry on generative AI and AI agents, and now we have a new next big thing: agentic AI. Sounds impressive. By the time this article comes out, there’s a good chance that agentic AI will be in the rear-view mirror and we’ll all be chasing after the next new big thing. Anyone for autonomous generative agentic AI agent bots? ... Some things on the surface seem more irresponsible than others, but for some, handing control to agentic AI apparently doesn’t rank among them. Debugging large language models, AI agents, and agentic AI, as well as implementing guardrails, are topics for another time, but it’s important to recognize that companies are handing over those car keys. Willingly. Enthusiastically. Would you put that eighth grader in charge of your marketing department? Of autonomously creating collateral that goes out to your customers without checking it first? Of course not. ... We want AI agents and agentic AI to make decisions, but we must be intentional about the decisions they are allowed to make. What are the stakes personally, professionally, or for the organization? What is the potential liability when something goes wrong? And something will go wrong. Something that you never considered going wrong will go wrong. And maybe think about the importance of the training data. Isn’t that what we say when an actual person does something wrong? “They weren’t adequately trained.” Same thing here.


How software engineers and team leaders can excel with artificial intelligence

As long as software development and AI designers continue to fall prey to the substitution myth, we’ll continue to develop systems and tools that, instead of supposedly making humans’ lives easier or better, will require unexpected new skills and interventions from humans that weren’t factored into the system or tool design ... Software development covers a lot of ground: understanding requirements, architecting, designing, coding, writing tests, code review, debugging, building new skills and knowledge, and more. AI has now reached a point where it can automate or speed up almost every part of the process. This is an exciting time to be a builder. A lot of the routine, repetitive, and frankly boring parts of the job, the "cognitive grunt work", can now be handled by AI. Developers especially appreciate the help in areas like generating test cases, reviewing code, and writing documentation. When those tasks are off our plate, we can spend more time on the things that really add value: solving complex problems, designing great systems, thinking strategically, and growing our skills. ... The elephant in the room is the question, "Will AI take over my job one day?" Until this year, I always thought no, but the recent technological advancements and new product offerings in this space are beginning to change my mind. The reality is that we should be prepared for AI to change the software development role as we know it.


6 browser-based attacks all security teams should be ready for in 2025

Phishing tooling and infrastructure have evolved a lot in the past decade, while changes to business IT mean there are both many more vectors for phishing attack delivery and many more apps and identities to target. Attackers can deliver links over instant messenger apps, social media, SMS, malicious ads, and in-app messenger functionality, as well as sending emails directly from SaaS services to bypass email-based checks. Likewise, there are now hundreds of apps per enterprise to target, with varying levels of account security configuration. ... Like modern credential and session phishing, links to malicious pages are distributed over various delivery channels and using a variety of lures, including impersonating CAPTCHA, Cloudflare Turnstile, simulating an error loading a webpage, and many more. The variance in lures, and the differences between versions of the same lure, can make it difficult to fingerprint and detect based on visual elements alone. ... Preventing malicious OAuth grants from being authorized requires tight in-app management of user permissions and tenant security settings. This is no mean feat when considering the hundreds of apps in use across the modern enterprise, many of which are not centrally managed by IT and security teams.


JSON Config File Leaks Azure ActiveDirectory Credentials

"The critical risk lies in the fact that this file was publicly accessible over the Internet," according to the post. "This means anyone — from opportunistic bots to advanced threat actors — could harvest the credentials and immediately leverage them for cloud account compromise, data theft, or further intrusion." ... To exploit the flaw, an attacker can first use the leaked ClientId and ClientSecret to authenticate against Azure AD using the OAuth2 Client Credentials flow to acquire an access token. Once this is acquired, the attacker then can send a GET request to the Microsoft Graph API to enumerate users within the tenant. This allows them to collect usernames and emails; build a list for password spraying or phishing; and/or identify naming conventions and internal accounts, according to the post. The attacker also can query the Microsoft Graph API to enumerate OAuth2 permission grants within the tenant, revealing which applications have been authorized and what scopes, or permissions, they hold. Finally, the acquired token allows an attacker to use group information to identify privilege clusters and business-critical teams, thus exposing organizational structure and identifying key targets for compromise, according to the post. ... "What appears to be a harmless JSON configuration file can in reality act as a master key to an organization’s cloud kingdom," according to the post.


Data centers are key to decarbonizing tech’s AI-fuelled supply chain

Data center owners and operators are uniquely positioned to step up and play a larger, more proactive role in this by pushing back on tech manufacturers over the patchy emissions data they provide, while also facilitating sustainable, circular IT product lifecycle management and disposal solutions for their users and customers. ... The hard truth, however, is that any data center striving to meet its own decarbonization goals and obligations cannot do so singlehandedly. It’s largely beholden to the supply chain stakeholders upstream. At the same time, their customers/users tend to accept ever-shortening usage periods as the norm. Often, they overlook the benefits of achieving greater product longevity and optimal cost of ownership through the implementation of product maintenance, refurbishment, and reuse programmes. ... As a focal point for the enablement of the digital economy, data centers are ideally placed to take a much more active role: by lobbying manufacturers and educating users and customers about the necessity and benefits of changing conventional linear practices in favour of circular IT lifecycle management and recycling solutions. Such an approach will not only help decarbonize data centers themselves but the entire tech industry supply chain – by reducing emissions.

Daily Tech Digest - August 07, 2025


Quote for the day:

"Do the difficult things while they are easy and do the great things while they are small." -- Lao Tzu


Data neutrality: Safeguarding your AI’s competitive edge

“At the bottom there is a computational layer, such as the NVIDIA GPUs, anyone who provides the infrastructure for running AI. The next few layers are software-oriented, but also impact infrastructure as well. Then there’s security, and the data that feeds the models and the data that feeds the applications. And on top of that, there’s the operational layer, which is how you enable data operations for AI. Data being so foundational means that whoever works with that layer is essentially holding the keys to the AI asset, so it’s imperative that anything you do around data has to have a level of trust and data neutrality.” ... The risks in having common data infrastructure, particularly with those that are direct or indirect competitors, are significant. When proprietary training data is transplanted onto another platform or service of a competitor, there is always an implicit, but frequently subtle, risk that proprietary insights, unique patterns of data, or even the operational data of an enterprise will be accidentally shared. ... These trends in the market have precipitated the need for “sovereign AI platforms” – controlled spaces where companies have complete control over their data, models, and the overall AI pipeline for development without outside interference.


The problem with AI agent-to-agent communication protocols

Some will say, “Competition breeds innovation.” That’s the party line. But for anyone who’s run a large IT organization, it means increased integration work, risk, cost, and vendor lock-in—all to achieve what should be the technical equivalent of exchanging a business card. Let’s not forget history. The 90s saw the rise and fall of CORBA and DCOM, each claiming to be the last word in distributed computing. The 2000s blessed us with WS-* (the asterisk is a wildcard because the number of specs was infinite), most of which are now forgotten. ... The truth: When vendors promote their own communication protocols, they build silos instead of bridges. Agents trained on one protocol can’t interact seamlessly with those speaking another dialect. Businesses end up either locking into one vendor’s standard, writing costly translation layers, or waiting for the market to move on from this round of wheel reinvention. ... We in IT love to make simple things complicated. The urge to create a universal, infinitely extensible, plug-and-play protocol is irresistible. But the real-world lesson is that 99% of enterprise agent interaction can be handled with a handful of message types: request, response, notify, error. The rest—trust negotiation, context passing, and the inevitable “unknown unknowns”—can be managed incrementally, so long as the basic messaging is interoperable.
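
As a rough illustration of that claim, a minimal message envelope covering those four types can be sketched in a few lines; the field names and example agents below are assumptions, not any particular vendor’s protocol.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from enum import Enum

class MessageType(str, Enum):
    REQUEST = "request"
    RESPONSE = "response"
    NOTIFY = "notify"
    ERROR = "error"

@dataclass
class AgentMessage:
    type: MessageType
    sender: str
    recipient: str
    body: dict
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# One agent asks, another answers, correlated by id -- no vendor protocol required.
req = AgentMessage(MessageType.REQUEST, "planner", "pricing-agent", {"sku": "A-42"})
resp = AgentMessage(MessageType.RESPONSE, "pricing-agent", "planner",
                    {"price": 19.99}, correlation_id=req.correlation_id)
print(req.to_json())
print(resp.to_json())
```

Trust negotiation, context passing, and the other hard parts can be layered on later, so long as every agent can at least parse an envelope like this one.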


Agents or Bots? Making Sense of AI on the Open Web

The difference between automated crawling and user-driven fetching isn't just technical—it's about who gets to access information on the open web. When Google's search engine crawls to build its index, that's different from when it fetches a webpage because you asked for a preview. Google's "user-triggered fetchers" prioritize your experience over robots.txt restrictions because these requests happen on your behalf. The same applies to AI assistants. When Perplexity fetches a webpage, it's because you asked a specific question requiring current information. The content isn't stored for training—it's used immediately to answer your question. ... An AI assistant works just like a human assistant. When you ask an AI assistant a question that requires current information, they don’t already know the answer. They look it up for you in order to complete whatever task you’ve asked. On Perplexity and all other agentic AI platforms, this happens in real-time, in response to your request, and the information is used immediately to answer your question. It's not stored in massive databases for future use, and it's not used to train AI models. User-driven agents only act when users make specific requests, and they only fetch the content needed to fulfill those requests. This is the fundamental difference between a user agent and a bot.
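
A minimal sketch of that technical distinction, using only Python’s standard library: a crawler consults robots.txt before fetching and stores what it retrieves, while a user-triggered fetch is made once, on demand, to answer a specific question. The URLs and user-agent strings are purely illustrative.

```python
import urllib.robotparser
import urllib.request

URL = "https://example.com/"

# A crawler building an index is expected to honor robots.txt before fetching,
# because the fetched content is retained for the index or a training corpus.
robots = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
robots.read()
if robots.can_fetch("ExampleCrawler/1.0", URL):
    indexed_page = urllib.request.urlopen(URL).read()

# A user-triggered fetch happens because a specific person asked a question right now;
# the content is used once to compose an answer rather than being stored for training.
def answer_user_question(url: str) -> bytes:
    req = urllib.request.Request(url, headers={"User-Agent": "ExampleAssistant-UserFetch/1.0"})
    return urllib.request.urlopen(req).read()

print(len(answer_user_question(URL)), "bytes fetched on the user's behalf")
```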


The Increasing Importance of Privacy-By-Design

Today’s data landscape is evolving at breakneck speed. With the explosion of IoT devices, AI-powered systems, and big data analytics, the volume and variety of personal data collected have skyrocketed. This means more opportunities for breaches, misuse, and regulatory headaches. And let’s not forget that consumers are savvier than ever about privacy risks – they want to know how their data is handled, shared, and stored. ... Integrating Privacy-By-Design into your development process doesn’t require reinventing the wheel; it simply demands a mindset shift and a commitment to building privacy into every stage of the lifecycle. From ideation to deployment, developers and product teams need to ask: How are we collecting, storing, and using data? ... Privacy teams need to work closely with developers, legal advisors, and user experience designers to ensure that privacy features do not compromise usability or performance. This balance can be challenging to achieve, especially in fast-paced development environments where deadlines are tight and product launches are prioritized. Another common challenge is educating the entire team on what Privacy-By-Design actually means in practice. It’s not enough to have a single data protection champion in the company; the entire culture needs to shift toward valuing privacy as a key product feature.


Microsoft’s real AI challenge: Moving past the prototypes

Now, you can see that with Bing Chat, Microsoft was merely repeating an old pattern. The company invested in OpenAI early, then moved to quickly launch a consumer AI product with Bing Chat. It was the first AI search engine and the first big consumer AI experience aside from ChatGPT — which was positioned more as a research project and not a consumer tool at the time. Needless to say, things didn’t pan out. Despite using the tarnished Bing name and logo that would probably make any product seem less cool, Bing Chat and its “Sydney” persona had breakout viral success. But the company scrambled after Bing Chat behaved in unpredictable ways. Microsoft’s explanation doesn’t exactly make it better: “Microsoft did not expect people to have hours-long conversations with it that would veer into personal territory,” Yusuf Mehdi, a corporate vice president at the company, told NPR. In other words, Microsoft didn’t expect people would chat with its chatbot so much. Faced with that, Microsoft started instituting limits and generally making Bing Chat both less interesting and less useful. Under current CEO Satya Nadella, Microsoft is a different company than it was under Ballmer. The past doesn’t always predict the future. But it does look like Microsoft had an early, rough prototype — yet again — and then saw competitors surpass it.


Is confusion over tech emissions measurement stifling innovation?

If sustainability is becoming a bottleneck for innovation, then businesses need to take action. If a cloud provider cannot (or will not) disclose exact emissions per workload, that is a red flag. Procurement teams need to start asking tough questions, and when appropriate, walking away from vendors that will not answer them. Businesses also need to unite to push for the development of a global measurement standard for carbon accounting. Until regulators or consortia enforce uniform reporting standards, companies will keep struggling to compare different measurements and metrics. Finally, it is imperative that businesses rethink the way they see emissions reporting. Rather than it being a compliance burden, they need to grasp it as an opportunity. Get emissions tracking right, and companies can be upfront and authentic about their green credentials, which can reassure potential customers and ultimately generate new business opportunities. Measuring environmental impact can be messy right now, but the alternative of sticking with outdated systems because new ones feel "too risky" is far worse. The solution is more transparency, smarter tools, a collective push for accountability, and above all, working with the right partners that can deliver accurate emissions statistics.


Making sense of data sovereignty and how to regain it

Although the concept of sovereignty is subject to greater regulatory control, its practical implications are often misunderstood or oversimplified, resulting in it being frequently reduced to questions of data location or legal jurisdiction. In reality, however, sovereignty extends across technical, operational and strategic domains. In practice, these elements are difficult to separate. While policy discussions often centre on where data is stored and who can access it, true sovereignty goes further. For example, much of the current debate focuses on physical infrastructure and national data residency. While these are very important issues, they represent only one part of the overall picture. Sovereignty is not achieved simply by locating data in a particular jurisdiction or switching to a domestic provider, because without visibility into how systems are built, maintained and supported, location alone offers limited protection. ... Organisations that take it seriously tend to focus less on technical purity and more on practical control. That means understanding which systems are critical to ongoing operations, where decision-making authority sits and what options exist if a provider, platform or regulation changes. Clearly, there is no single approach that suits every organisation, but these core principles help set direction. 


Beyond PQC: Building adaptive security programs for the unknown

The lack of a timeline for a post-quantum world means that it doesn’t make sense to consider post-quantum as either a long-term or a short-term risk, but both. Practically, we can prepare for the threat of quantum technology today by deploying post-quantum cryptography to protect identities and sensitive data. This year is crucial for post-quantum preparedness, as organisations are starting to put quantum-safe infrastructure in place, and regulatory bodies are beginning to address the importance of post-quantum cryptography. ... CISOs should take steps now to understand their current cryptographic estate. Many organisations have developed a fragmented cryptographic estate without a unified approach to protecting and managing keys, certificates, and protocols. This lack of visibility opens increased exposure to cybersecurity threats. Understanding this landscape is a prerequisite for migrating safely to post-quantum cryptography. Another practical step you can take is to prepare your organisation for the impact of quantum computing on public key encryption. This has become more feasible with NIST’s release of quantum-resistant algorithms and the NCSC’s recently announced three-step plan for moving to quantum-safe encryption. Even if there is no pressing threat to your business, implementing a crypto-agile strategy will also ensure a smooth transition to quantum-resistant algorithms when they become mainstream.
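
As a starting point for that kind of inventory, the sketch below uses the pyca/cryptography library to report the public-key algorithm, key size, and expiry of certificates found in a directory. The directory path is a placeholder, and a full cryptographic inventory would also need to cover keys, protocols, and libraries beyond certificates.

```python
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

# Placeholder directory of PEM certificates gathered from servers, load balancers, etc.
CERT_DIR = Path("./certs")

for pem_file in CERT_DIR.glob("*.pem"):
    cert = x509.load_pem_x509_certificate(pem_file.read_bytes())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algo = f"RSA-{key.key_size}"          # vulnerable to Shor's algorithm
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algo = f"ECDSA-{key.curve.name}"      # vulnerable to Shor's algorithm
    else:
        algo = type(key).__name__             # flag anything else for manual review
    print(f"{pem_file.name}: subject={cert.subject.rfc4514_string()}, "
          f"key={algo}, expires={cert.not_valid_after.date()}")
```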


Critical Zero-Day Bugs Crack Open CyberArk, HashiCorp Password Vaults

"Secret management is a good thing. You just have to account for when things go badly. I think many professionals think that by vaulting a credential, their job is done. In reality, this should be just the beginning of a broader effort to build a more resilient identity infrastructure." "You want to have high fault tolerance, and failover scenarios — break-the-glass scenarios for when compromise happens. There are Gartner guides on how to do that. There's a whole market for identity and access management (IAM) integrators which sells these types of preparing for doomsday solutions," he notes. It might ring unsatisfying — a bandage for a deeper-rooted problem. It's part of the reason why, in recent years, many security experts have been asking not just how to better protect secrets, but how to move past them to other models of authorization. "I know there are going to be static secrets for a while, but they're fading away," Tal says. "We should be managing [users], rather than secrets. We should be contextualizing behaviors, evaluating the kinds of identities and machines of users that are performing actions, and then making decisions based on their behavior, not just what secrets they hold. I think that secrets are not a bad thing for now, but eventually we're going to move to the next generation of identity infrastructure."


Strategies for Robust Engineering: Automated Testing for Scalable Software

The changes happening to software development through AI and machine learning require testing to transform as well. The purpose now exceeds basic software testing because we need to create testing systems that learn and grow as autonomous entities. Software quality should be viewed through a new perspective where testing functions as an intelligent system that adapts over time instead of remaining as a collection of unchanging assertions. The future of software development will transform when engineering leaders move past traditional automated testing frameworks to create predictive AI-based test suites. The establishment of scalable engineering presents an exciting new direction that I am eager to lead. Software development teams must adopt new automated testing approaches because the time to transform their current strategies has arrived. Our testing systems should evolve from basic code verification into active improvement mechanisms. As applications become increasingly complex and dynamic, especially in distributed, cloud-native environments, test automation must keep pace. Predictive models, trained on historical failure patterns, can anticipate high-risk areas in codebases before issues emerge. Test coverage should be driven by real-time code behavior, user analytics, and system telemetry rather than static rule sets.
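
As a rough illustration of coverage driven by historical behavior rather than static rule sets, the sketch below scores tests by how often they failed when particular files changed and prioritizes them for the current change set. The data shape and names are assumptions; a production system would draw on real CI telemetry and more sophisticated models.

```python
from collections import defaultdict

# Assumed shape of historical CI data: (test_name, files_touched_in_that_run, failed?)
history = [
    ("test_checkout_total", ["cart.py", "pricing.py"], True),
    ("test_checkout_total", ["pricing.py"], True),
    ("test_login_redirect", ["auth.py"], False),
    ("test_login_redirect", ["auth.py", "session.py"], True),
]

# Count, per (test, file) pair, how often the test ran and how often it failed.
failure_counts: dict[tuple[str, str], int] = defaultdict(int)
run_counts: dict[tuple[str, str], int] = defaultdict(int)
for test, files, failed in history:
    for f in files:
        run_counts[(test, f)] += 1
        if failed:
            failure_counts[(test, f)] += 1

def prioritize(changed_files: list[str]) -> list[tuple[str, float]]:
    """Return tests ordered by historical failure rate for the files in this change."""
    scores: dict[str, float] = defaultdict(float)
    for (test, f), runs in run_counts.items():
        if f in changed_files:
            scores[test] = max(scores[test], failure_counts[(test, f)] / runs)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(prioritize(["pricing.py", "session.py"]))
```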

Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: Quantum key distribution (QKD) and post quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later they’re vulnerable, incompatible with emerging standards, or impractical at scale. This could have the opposite effect and could inadvertently increase attack surface and bring severe operational headaches, ironically becoming less secure. But delaying migration for too long also poses serious risks. Malicious actors could be already harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, intellectual property cannot afford indefinite delays.


Sovereign by Design: Data Control in a Borderless World

The regulatory framework for digital sovereignty is a national priority. The EU has set the pace with GDPR and GAIA-X. It prioritizes data residency and local infrastructure. China's cybersecurity law and personal information protection law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's federal law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's privacy act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's personal data protection law enforces localization for sensitive sectors, and Indonesia's personal data protection law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes lead to standardized dimensions by presenting the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve: the data warehouse accommodates change through the concept of slowly changing dimensions (SCD) as new business contexts emerge. ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include the star or snowflake schema around a fact. When organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how technology needs to build a data warehouse architecture or one of its components using good design and implementation.
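
A minimal sketch of a star schema in SQLite illustrates the separation described above: one fact table of measurements keyed to three dimension tables of descriptive context, plus a slice-and-dice query joining them. Table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimensions: descriptive context, kept separate from the measurements.
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_product  (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT);

-- Fact: numeric measures of the business process, one row per sale line.
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date(date_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    quantity     INTEGER,
    amount       REAL
);
""")

# Slice-and-dice: revenue by category and month, filtered by region.
query = """
SELECT p.category, d.month, SUM(f.amount) AS revenue
FROM fact_sales f
JOIN dim_product p  ON f.product_key  = p.product_key
JOIN dim_date d     ON f.date_key     = d.date_key
JOIN dim_customer c ON f.customer_key = c.customer_key
WHERE c.region = 'EMEA'
GROUP BY p.category, d.month;
"""
print(conn.execute(query).fetchall())
```

A snowflake schema would further normalize the dimensions (for example, splitting category into its own table), trading some query simplicity for less redundancy.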


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure their organizations, with another report from the World Economic Forum showing that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment.


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, because they force you to serialize your types into generic graph objects (read: JSON or XML or something similar). This implies that you can just transform your classes into a generic graph object at their interface edges and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid points for scaling out horizontally by segregating functionality onto different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (take your pick!) Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution" ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.
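
"Active Events and Slots" is the author's own terminology; as a loose, in-process interpretation of the point, the sketch below registers handlers in a registry keyed by slot name and passes a generic dict "graph object" between otherwise decoupled modules, with no broker, network hop, or serialization involved.

```python
from typing import Callable

# A registry of named "slots": callers depend only on a slot name and a generic
# dict payload (the "graph object"), not on the concrete implementation.
_slots: dict[str, Callable[[dict], dict]] = {}

def slot(name: str):
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        _slots[name] = fn
        return fn
    return register

def signal(name: str, payload: dict) -> dict:
    """Invoke a slot by name, in-process -- no network hop, broker, or serialization."""
    return _slots[name](payload)

@slot("orders.calculate-total")
def _calculate_total(payload: dict) -> dict:
    total = sum(item["price"] * item["qty"] for item in payload["items"])
    return {"total": total}

# The caller knows only the slot name and the shape of the generic payload.
print(signal("orders.calculate-total", {"items": [{"price": 10.0, "qty": 3}]}))
```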


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the device types Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. It also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States, except for Unitronics devices, which are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - June 12, 2025


Quote for the day:

"It takes a lot of courage to show your dreams to someone else." -- Erma Bombeck


Tech Burnout: CIOs Might Be Making It Worse

“CIOs often unintentionally worsen burnout by underestimating the human toll of constant context switching, unclear priorities, and always-on availability. In the rush to stay competitive with AI-driven initiatives, teams are pushed to deliver faster without enough buffer for testing, reflection, or recovery,” Marceles adds. In the end, it’s the panic surrounding AI adoption, and not the technology itself, that’s accelerating burnout. The panic is running hot and high, surpassing anything CIOs and IT members think of as normal. “The pressure to adopt AI everywhere is real, and CIOs are feeling it from every angle -- executives, investors, competitors. But when that pressure gets passed down as back-to-back initiatives with no breathing room, it fractures the team. Engineers get pulled into AI pilots without proper training. IT staff are asked to maintain legacy systems while onboarding new automation tools. And all of it happens under the expectation that this is just “the new normal,” says Cahyo Subroto, founder of MrScraper, a data scraping tool. ... “What gets lost is the human capacity behind the tech. We don’t talk enough about how context-switching and unclear priorities drain cognitive energy. When everything is labeled critical, people lose the ability to focus. Productivity drops. Morale sinks. And burnout sets in quietly, until key people start leaving,” Subroto says.


Asset sprawl, siloed data and CloudQuery’s search for unified cloud governance

“The biggest challenge with existing tools is that they’re siloed — one for security, one for cost, one for asset inventory — making it hard to get a unified view across domains,” CQ founder Yevgeny Pats told VentureBeat. “Even simple questions like ‘What EBS volume is attached to an EC2 that is turned off?’ are hard to answer without stitching together multiple tools.” ... Taking a developer-first approach is critical, said Pats, because developers are ultimately the ones building, operating and securing today’s cloud infrastructure. Still, many cloud visibility tools were built for top-down governance, not for the people actually in the trenches. “When you put developers first, with accessible data, flexible APIs and native language like SQL, you empower them to move faster, catch issues earlier and build more securely,” he said. Customers are finding ways to use CloudQuery beyond asset inventory. ... “Having a fully serverless solution was an important requirement,” Hexagon cloud governance and FinOps expert Peter Figueiredo and CloudQuery director of engineering Herman Schaaf wrote in a blog post. “This decision brought lots of benefits since there is no need for time-consuming updates and virtually zero maintenance.”
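
To give a flavor of what answering that kind of question looks like once assets are synced into one database, here is an illustrative query against a Postgres instance populated by CloudQuery. The connection string is a placeholder, and the table and column names are assumptions about the synced AWS schema rather than a documented reference, so verify them against your own database before relying on them.

```python
import psycopg2

# Placeholder DSN for the Postgres database CloudQuery syncs cloud assets into.
conn = psycopg2.connect("postgresql://user:pass@localhost:5432/cloudquery")

# Assumed table/column names: check information_schema in your own sync target first.
QUERY = """
SELECT v.volume_id, i.instance_id
FROM aws_ec2_ebs_volumes v
CROSS JOIN LATERAL jsonb_array_elements(v.attachments) AS att(attachment)
JOIN aws_ec2_instances i ON i.instance_id = att.attachment ->> 'instance_id'
WHERE i.state ->> 'name' = 'stopped';
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for volume_id, instance_id in cur.fetchall():
        print(f"{volume_id} is attached to stopped instance {instance_id}")
```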


Digital twins combine with AI to help manage complex systems

And it’s not just AI making digital twins better. The digital twins can also make for better AI. “We’re using digital twins to actually generate information for large language models,” says PwC’s Likens, adding that the synthetic data is of better quality when it comes from a digital twin. “We see opportunity to have the digital twins generate the missing pieces of data we need, and it’s more in line with the environment because it’s based on actual data.” A digital twin is a working model of a system, says Gareth Smith, GM of software test automation at Keysight Technologies, an electronics company. “It’ll respond in a way that mimics the expected response of the physical system.” ... Another potential use case for digital twins that might become more relevant this year is to help with understanding and scaling agentic AI systems. Agentic AI allows companies to automate complex business processes, such as solving customer problems, creating proposals, or designing, building, and testing software. The agentic AI system can be composed of multiple data sources, tools, and AI agents, all interacting in non-deterministic ways. That can be extremely powerful, but extremely dangerous. So a digital twin can monitor the behavior of an agentic system to ensure it doesn’t go off the rails, and test and simulate how the system will react to novel situations.


Will Quantum Computing Kill Bitcoin?

If a technological advance were to render these assets insecure, the consequences could be severe. Cryptocurrencies function by ensuring that only authorized parties can modify the blockchain ledger. In Bitcoin’s case, this means that only someone with the correct private key can spend a given amount of Bitcoin. ... Quantum computers, however, operate on different principles. Thanks to phenomena like superposition and entanglement, they can perform many calculations in parallel. In 1994, mathematician Peter Shor developed a quantum algorithm capable of factoring large numbers exponentially faster than classical methods. ... Could quantum computing kill Bitcoin? In theory, yes, if Bitcoin failed to adapt and quantum computers suddenly became powerful enough to break its encryption, its value would plummet. But this scenario assumes crypto stands still while quantum computing advances, which is highly unlikely. The cryptographic community is already preparing, and the financial incentives to preserve the integrity of Bitcoin are enormous. Moreover, if quantum computers become capable of breaking current encryption methods, the consequences would extend far beyond Bitcoin. Secure communications, financial transactions, digital identities, and national security all depend on encryption. In such a world, the collapse of Bitcoin would be just one of many crises.


Smaller organizations nearing cybersecurity breaking point

Small and medium enterprises (SMEs) that do have budget to hire specialists often struggle to attract and retain skilled professionals due to the lack of variation in the role. Burnout is also a growing issue for the understaffed, underqualified IT teams common in small businesses. “With limited resource in the business, employees are often wearing multiple hats and the pressure to manage cybersecurity on top of their regular duties can lead to fatigue, missed threats, and higher turnover,” Exelby says. ... SMEs often mistakenly believe that cyber attackers only target larger organizations, but that’s often not the case — particularly because small business partners of larger companies are often deliberately targeted as part of supply chain attacks. “Threats are becoming more advanced but their resources aren’t keeping pace,” says Kristian Torode, director and co-founder of Crystaline, a specialist in SME cybersecurity. “Many SMEs are still relying on outdated systems or don’t have dedicated security teams in place, making them an easy target.” Torode adds: “They’re also seen by cybercriminals as an exploitable link in the supply chain, since they often work with larger enterprises.” “SMEs have traditionally been low-hanging fruit — with limited resources for cybersecurity training, advanced tools, or dedicated security teams,” Adam Casey, director of cybersecurity and CISO at cloud security firm Qodea, tells CSO.


Want fewer security fires to fight? Start with threat modeling

Some CISOs begin with one critical system or pilot project. From there, they build templates, training materials, and internal champions who help scale the practice across teams. Incorporating threat modeling into an organization’s development lifecycle doesn’t have to be daunting. In fact, it shouldn’t be, according to David Kellerman, Field CTO of Cymulate. “The key is to start small and make threat modeling approachable,” Kellerman says. Rather than rolling out a heavyweight process full of complex methodologies, CISOs should look for ways to embed threat modeling into workflows that teams already use. “I advise CISOs to embed threat modeling into existing workflows, such as architecture reviews, design discussions, or sprint planning, rather than creating separate, burdensome exercises.” This lightweight, integrated approach not only reduces resistance but helps normalize secure thinking within engineering culture. “Use simple frameworks like STRIDE or basic attacker storyboarding that non-security engineers can easily grasp,” Kellerman explains. “Make it collaborative and educational, not punitive.” As teams gain familiarity and confidence, organizations can gradually evolve their threat modeling capabilities. “The goal isn’t to build a perfect threat model on day one,” Kellerman says. “It’s to establish a security mindset that grows naturally within engineering culture.”
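
As one way to make such a lightweight pass concrete, the sketch below encodes the six STRIDE categories as review questions and emits a per-data-flow checklist that a sprint team can walk through in minutes; the component name, data flows, and question wording are illustrative.

```python
# The six STRIDE categories with the kind of question a design review might ask.
STRIDE = {
    "Spoofing":               "Can someone pretend to be this component or its callers?",
    "Tampering":              "Can data in transit or at rest be modified undetected?",
    "Repudiation":            "Could an action be performed without an audit trail?",
    "Information disclosure": "Could data leak to someone who shouldn't see it?",
    "Denial of service":      "Can the component be overwhelmed or starved of resources?",
    "Elevation of privilege": "Could a low-privilege caller gain higher privileges?",
}

def review(component: str, data_flows: list[str]) -> list[str]:
    """Emit a per-flow checklist that fits inside an architecture or sprint review."""
    return [f"[{component}] {flow}: {category} -- {question}"
            for flow in data_flows
            for category, question in STRIDE.items()]

for item in review("payments-api", ["browser -> API", "API -> payments DB"]):
    print(item)
```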


Rethinking Success in Security: Why Climbing the Corporate Ladder Isn’t Always the Goal

In the security field, like in many other fields, there seems to be constant pressure to advance. For whatever reason, the choice to climb the corporate ladder seems to garner far more reverence and respect than the choice to develop expertise and skills in one particular area of specialization. In other words, the decision to go higher and broader seems to be lauded more than the decision to go deeper and more focused. Yet, both are important in their own right. There are certain times in a security professional’s career when they find themselves at a crossroads – confronted by this issue. One career path is not more “correct” than another one. Which direction is the right one is an individual choice where many factors are relevant. ... It is the sad reality of the security field that we don’t show our respect and appreciation for our colleagues enough. That being said, the respect is there. See, one important thing to keep in mind is that respect is earned – not ordained or otherwise granted. If you are a great security professional, people take notice. You shouldn’t feel compelled to attain a specific title, paygrade, or otherwise just to get some respect. The dirty secret in the industry is that just because someone is in a higher-level role, it doesn’t mean that people respect them. 


The AI data center boom: Strategies for sustainable growth and risk management

Data center developers are experiencing extended long lead times for critical equipment such as generators, switchgear, power distribution units (PDUs) and cooling systems. Global shortages in semiconductors and electrical components are still impacting timelines. Additionally, uncertainty regarding tariffs is further complicating procurement and planning processes, as potential changes in trade policies could affect the cost and availability of these essential components. ... Data center owners are increasingly trying to use low-carbon materials to decarbonize both the centers and construction operations. This approach includes concrete that permanently traps carbon dioxide and steel, which is powered using renewable energy. Microsoft is now building its first data centers made with structural mass timber to slash the use of steel and concrete, which are among the most significant sources of carbon emissions. ... Fires in data centers are typically caused by a breakdown of machinery, plant or equipment. A fire that spreads quickly can result in significant financial losses and business interruption. While the structures for data centers often have concrete frames that are not significantly impacted by fires, it’s the high-value equipment that drives losses – from cooling technology to high-end computer servers or graphic card components.


Managing software projects is a double-edged sword

Doing two platform shifts in six months was beyond challenging—it was absurd. We couldn’t have hacked together a half-baked version for even one platform in that time. It was flat-out impossible. Let’s just say I was quite unhappy with this request. It was completely unreasonable. My team of developers was being asked to work evenings and weekends on a task that was guaranteed to fail. The subtle implication that we were being rebellious and dishonest was difficult to swallow. So I set about making my position clear. I tried to stay level-headed, but I’m sure that my irritation showed through. I fought hard to protect my team from a pointless death march—my time in the Navy had taught me that taking care of the team was my top job. My protestations were met with little sympathy. My boss, who like me came from the software development tool company, certainly knew that the request was unreasonable, but he told me that while it was a challenge, we just needed to “try.” This, of course, was the seed of my demise. I knew it was an impossible task, and that “trying” would fail. How do you ask your team to embark on a task that you know will fail miserably and that they know will fail miserably? Well, I answered that question very poorly.


The CIO Has Evolved. It's Time the Board Catches Up

Across industries, CIOs have risen to meet the moment. They are at the helm of transformation strategies with business peers and drive digital revenue models. They even partner with CFOs to measure value, CMOs to reimagine customer experience and COOs to build data-driven models. ... CIOs have evolved. But if boards continue to treat them as back-room managers instead of strategic partners, they are underutilizing one of the most strategic roles in the enterprise. ... In today's times, every company is a technology company. AI, automation, cloud and digital platforms aren't just enablers. They form the foundation for competitive advantage and new revenue models. Similarly, cybersecurity is no longer just an IT challenge, it's a board-level fiduciary responsibility. Boards, however, predominantly engage with CIOs in a transactional manner. Issues such as budget approvals, risk reviews and project updates are common conversations. CIOs are rarely invited into conversations related to growth strategy, market reinvention or long-term capital allocation. This disconnect is proving to be a strategic liability. ... In industries where technology is the differentiator, CIOs should not just be in the boardroom, they should be shaping its agenda. Because if CIOs are empowered to lead, organizations don't just avoid risk, they build resilience, relevance and reinvention.

Daily Tech Digest - June 07, 2025


Quote for the day:

"Anger doesn't solve anything; it builds nothing but it can destroy everything" -- Lawrence Douglas Wilder


Software Testing Is at a Crossroads

Organizations are discovering that achieving meaningful quality improvements requires more than technological adoption; it demands fundamental changes in processes, skills, and organizational culture that many teams are still developing. ... There are numerous bottlenecks that are preventing teams from achieving their automation targets. "The test automation gap as we call it usually stems from three key challenges: limited skills, tooling constraints, and resource shortages," Crisóstomo said. He noted that smaller teams often struggle because they don't have enough experienced or specialized staff to take on complex automation work. At the same time, even well-resourced teams run into limitations with their current tools, many of which can't handle the increasing complexity of modern testing needs. "Across the board, nearly every team we surveyed cited bandwidth as a major issue," Crisóstomo said. "It's a classic catch-22: You need time to build automation so you can save time later, but competing priorities make it hard to invest that time upfront." ... "Meanwhile, AI-enhanced quality, particularly in testing and security, hasn't seen the same level of maturity or resources," he said. "That's starting to change, but many teams still see AI as more of a novelty than a business-critical tool for QA."


Empower Users and Protect Against GenAI Data Loss

When early software-as-a-service tools emerged, IT teams scrambled to control the unsanctioned use of cloud-based file storage applications. The answer wasn't to ban file sharing, though; rather, it was to offer a secure, seamless, single-sign-on alternative that matched employee expectations for convenience, usability, and speed. This time around, however, the stakes are even higher. With SaaS, data leakage often means a misplaced file. With AI, it could mean inadvertently training a public model on your intellectual property with no way to delete or retrieve that data once it's gone. ... Blocking traffic without visibility is like building a fence without knowing where the property lines are. We've solved problems like these before. Zscaler's position in the traffic flow gives us an unparalleled vantage point. We see what apps are being accessed, by whom and how often. This real-time visibility is essential for assessing risk, shaping policy and enabling smarter, safer AI adoption. Next, we've evolved how we deal with policy. Many providers simply give the black-and-white options of "allow" or "block." The better approach is context-aware, policy-driven governance aligned with zero-trust principles that assume no implicit trust and demand continuous, contextual evaluation.
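
As a rough illustration of what context-aware, policy-driven governance can look like in practice (the attribute names, risk tiers, and decision logic below are hypothetical, not Zscaler's actual policy engine), a decision function might weigh user role, data sensitivity, and the destination AI app rather than returning a blanket allow or block:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_role: str          # e.g. "engineer", "contractor"
    app_risk: str           # vendor risk tier for the AI app: "low", "medium", "high"
    data_sensitivity: str   # classification of the content being sent: "public", "internal", "restricted"

def decide(ctx: AccessContext) -> str:
    """Return 'allow', 'isolate' (e.g. redaction or browser isolation), or 'block'."""
    if ctx.data_sensitivity == "restricted":
        return "block"                      # never send restricted data to external AI apps
    if ctx.app_risk == "high":
        return "block" if ctx.data_sensitivity == "internal" else "isolate"
    if ctx.app_risk == "medium" and ctx.user_role == "contractor":
        return "isolate"                    # allow use, but strip or redact sensitive fields
    return "allow"

print(decide(AccessContext("engineer", "medium", "internal")))    # allow
print(decide(AccessContext("contractor", "high", "internal")))    # block
```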


Too many cloud security tools harming incident response times - survey

According to the data, security teams are inundated with an average of 4,080 alerts each month regarding potential cloud-based incidents. In stark contrast, respondents reported experiencing just 7 actual security incidents per year. This enormous volume of alerts - compared to the small number of real threats - creates what ARMO describes as a very low signal-to-noise ratio. The survey found that security professionals typically need to sift through approximately 7,000 alerts to find a single active threat. Excessive "tool sprawl" has been cited as a primary factor: 63% of organisations surveyed reported using more than five cloud runtime security tools, yet only 13% were able to successfully correlate alerts across these systems. ... "Over the past few years we've seen rapid growth in the adoption of cloud runtime security tools to detect and prevent active cloud attacks and yet, there's a staggering disparity between alerts and actual security incidents. Without the critical context about asset sensitivity and exploitability needed to make sense of what is happening at runtime, as well as friction between SOC and Cloud Security, teams experience major delays in incident detection and response that negatively impacts performance metrics."
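
To make the correlation gap concrete, here is a minimal sketch (field names and alert data are invented, not taken from ARMO's product) of collapsing tool-level alerts into incident candidates by grouping on asset and technique before triage:

```python
from collections import defaultdict

# Hypothetical alerts exported from several cloud runtime security tools.
alerts = [
    {"tool": "runtime_a", "asset": "payments-api", "technique": "T1059", "severity": 7},
    {"tool": "runtime_b", "asset": "payments-api", "technique": "T1059", "severity": 8},
    {"tool": "cspm",      "asset": "payments-api", "technique": "T1059", "severity": 5},
    {"tool": "runtime_a", "asset": "batch-worker", "technique": "T1105", "severity": 4},
]

# Correlate by (asset, technique): many tool-level alerts collapse into one incident candidate.
incidents = defaultdict(list)
for a in alerts:
    incidents[(a["asset"], a["technique"])].append(a)

for (asset, technique), group in incidents.items():
    score = max(a["severity"] for a in group)
    tools = sorted({a["tool"] for a in group})
    print(f"{asset} / {technique}: {len(group)} alerts from {tools}, triage score {score}")
```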


Giving People the Chance to Innovate Is Critical — ADP CDO

Recognizing that not all innovations start with a fully developed use case, Venjara shares how the team created a controlled sandbox environment. This allows internal teams to experiment securely without the risk of exposing sensitive data. This sandbox setup, developed in collaboration with security, legal, and privacy teams, provides: a controlled environment for early experimentation; technical safeguards to protect data; and a pathway from ideation to formal review and production. ... Another critical pillar in Venjara’s governance strategy is infrastructure. He highlights the development of an AI gateway that centralizes access to approved models and enables comprehensive monitoring. The gateway lets the team monitor the health and usage of AI models, track input and output data, and govern use cases effectively at scale. Reflecting on internal innovation and culture-building, Venjara shares that it all starts with people and empowering them to explore, learn, and create. A foundational part of his approach is creating space for employees to take initiative, experiment, and bring new ideas to life. This culture of experimentation is paired with a clear articulation of what success looks like and how individuals can align with the broader mission.
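
The article does not describe ADP's implementation in detail, but the general AI gateway pattern, a single choke point that exposes only approved models and records inputs and outputs for governance, can be sketched roughly as follows (the model names, allow-list, and log format are illustrative assumptions):

```python
import json
import time

APPROVED_MODELS = {"internal-llm-v2", "vendor-model-small"}  # hypothetical allow-list

def call_model(model: str, prompt: str) -> str:
    # Placeholder for the real model invocation that sits behind the gateway.
    return f"[{model}] response to: {prompt[:40]}"

def gateway(model: str, prompt: str, use_case: str) -> str:
    if model not in APPROVED_MODELS:
        raise PermissionError(f"model '{model}' is not approved for use case '{use_case}'")
    started = time.time()
    output = call_model(model, prompt)
    # Centralized audit record: which use case, which model, and input/output sizes.
    print(json.dumps({
        "ts": started,
        "use_case": use_case,
        "model": model,
        "prompt_chars": len(prompt),
        "output_chars": len(output),
    }))
    return output

gateway("internal-llm-v2", "Summarize this policy document...", use_case="hr-summaries")
```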


Fortify Your Data Defense: Balancing Data Accessibility and Privacy

Companies need our data, and they usually place it into databases or datasets they can later reference. This makes privacy tricky. Twenty years ago, the common rationale was that removing direct identifiers such as names or street addresses from a dataset made it anonymous. Unsurprisingly, we’ve since learned there is nothing anonymous about it. Data anonymization techniques like tokenization and pseudonymization, however, can minimize data exposure while still enabling these companies to perform valuable analytics such as data matching. Because the data is never seen in the clear by another human while the system associates it with a placeholder, this offers an extra layer of protection against threat actors even if they manage to exfiltrate the data. No one system or solution is perfect, but it’s important we continuously modernize our approach. Emerging technologies like homomorphic encryption, which allows mathematical functions to be performed on encrypted data, show promise for the future. Synthetic data, which generates fictional individuals with the same characteristics as real people, is another exciting development. Some companies are adding Chief Privacy Officers to their ranks, and whole countries are building better frameworks.
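
A minimal sketch of the pseudonymization idea described above (key handling and token format are simplified for illustration): a keyed hash replaces the identifier with a deterministic placeholder, so two datasets can still be matched on the same token without either side ever handling the identifier in the clear.

```python
import hashlib
import hmac

SECRET_KEY = b"store-this-in-a-key-management-service"  # illustrative only

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a deterministic, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.lower().encode(), hashlib.sha256).hexdigest()[:16]

# The same person produces the same token in both datasets, enabling data matching...
dataset_a = {"customer": pseudonymize("alice@example.com"), "purchases": 12}
dataset_b = {"customer": pseudonymize("alice@example.com"), "support_tickets": 3}
assert dataset_a["customer"] == dataset_b["customer"]

# ...but the token alone does not reveal the original email address.
print(dataset_a["customer"])
```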


Unleashing Powerful Cloud-Native Security Techniques

By leveraging non-human identity (NHI) management, organizations can take a significant stride towards ensuring the safety of their cloud data and applications. This approach creates a robust security shield, defending against potential breaches and data leaks. By evolving their cyber strategies to include these powerful techniques, companies can ensure they remain secure and compliant in an environment where cyber threats are increasingly sophisticated and relentless. To unlock the full potential of NHIs, it’s vital to work with a partner who understands their dynamics deeply. This partner should offer a solution that caters to the entire lifecycle of NHIs, not just one aspect. Overall, for a truly secure cloud environment, consider NHI management a fundamental component of your cloud-native security strategy. By embracing this paradigm shift, organizations can fortify themselves against the growing wave of cyber threats, ensuring a safer, more secure cloud journey. ... With a holistic, data-driven approach to NHI management, organizations can ensure that they are well-equipped to handle ever-evolving cyber threats. By establishing and maintaining a secure cloud, they are not only safeguarding their digital assets but also setting the stage for sustainable growth in digital transformation.


Global Digital Policy Roundup: May 2025

The roundup serves as a guide for navigating global digital policy based on the work of the Digital Policy Alert. To ensure trust, every finding links to the Digital Policy Alert entry with the official government source. The full Digital Policy Alert dataset is available for you to access, filter, and download. To stay updated, Digital Policy Alert also offers a customizable notification service that provides free updates on your areas of interest. Digital Policy Alert’s tools further allow you to navigate, compare, and chat with the legal text of AI rules across the globe. ... Content moderation, including the European Commission's DSA enforcement against adult content platforms, Australia's industry codes against age-inappropriate content, China's national network identity authentication measures, and Turkey's bill to repeal the internet regulation law. AI regulation, including the European Commission's AI Act implementation guidelines, Germany's court ruling on Meta's AI training practices, and China's deep synthesis algorithm registrations. Competition policy, including the European Commission's consultation on Microsoft Teams bundling, South Korea's enforcement actions against Meta and intermediary platform operators, China's private economy promotion law, and Brazil's digital markets regulation bill. 


The Greener Code: How real-time data is powering sustainable tech in India

As engineering leaders, we build systems that scale. But we must also ask: are they scaling sustainably? India’s data centres already consume around 2% of the country’s electricity, a number that’s only growing. If we don’t rethink our infrastructure, we risk trading digital progress for environmental cost. That’s where real-time data pipelines come in: they reduce the need for batch jobs, temporary file storage, and unnecessary duplication of compute resources. This translates to less wasted computing power, lower carbon emissions, and a greener digital footprint. But it’s not just about saving energy. It’s about designing systems that are smart from the start, architecting not just for performance but for the planet. ... India is uniquely positioned: a digital-first economy with deep tech talent, rising energy needs, and a growing commitment to sustainability. If we get it right, engineering systems that are both scalable and sustainable, we don’t just solve for India; we lead the world. From Digital India to Smart Cities to Make in India, the government is pushing for innovation. But innovation without sustainability is a short-term gain. What we need is “Sustainable Innovation” — and data streaming can and in fact will be a silent hero in that journey.
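
A toy illustration of why streaming can be leaner than batch (purely schematic, with invented event data, not any specific pipeline or vendor stack): a running aggregate does constant work per event as it arrives, instead of re-reading and reprocessing the full history on every scheduled batch run.

```python
# Batch style: reprocess the full history every run (more I/O, more compute, more storage churn).
events = [{"kwh": 0.4}, {"kwh": 0.7}, {"kwh": 0.3}]
batch_total = sum(e["kwh"] for e in events)

# Streaming style: maintain an incremental aggregate as each event arrives.
running_total = 0.0

def on_event(event: dict) -> None:
    global running_total
    running_total += event["kwh"]   # O(1) work per event, no full re-scan, no temp files

for e in events:
    on_event(e)

assert abs(batch_total - running_total) < 1e-9
print(running_total)
```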


Measuring What Matters: The True Impact of Platform Teams

By consolidating tools and infrastructure, companies reduce costs and enhance productivity through automation, leading to faster time-to-market for new products. Improved reliability and compliance reduce potential revenue losses resulting from outages or regulatory violations, while also supporting business growth. To truly gauge the impact of platform teams, it’s essential to look beyond traditional metrics and consider the broader changes they bring to an organization. ... As my professional coaching training taught me, truly listening — not just hearing — is crucial. It’s about understanding everyone’s perspective and connecting intuitively to the real message, including what’s not being said. This level of listening, often referred to as “Level 3” or intuitive listening, involves paying attention to all sensory components: the speaker’s tone of voice, energy level, feelings, and even the silences between words. By practicing this deep, empathetic listening, leaders can create a profound connection with their team members, uncovering motivations, concerns, and ideas that might otherwise remain hidden. This approach not only enhances team happiness but also unlocks the full potential of the platform team, leading to more innovative solutions and stronger collaboration.


The New Fraud Frontier: Why Businesses Must Rethink Identity Verification

Now that fraudsters can access AI tools, the fraud game has entirely changed. Bad actors can generate synthetic identities, manipulate biometric data and even create deepfake videos to pass KYC processes. Additionally, AI enables fraudsters to test security systems at scale, quickly iterating and adapting their methods based on system responses. In light of these new threats, businesses need dynamic solutions that can learn and evolve in real time. Ironically, the same technology powering sophisticated fraud can be our most potent defence. Using AI to enhance both pre-KYC and KYC processes delivers the capability to identify complex fraud patterns, adapting faster than human-driven systems ever could. ... The battle against AI-empowered fraud isn’t just about preventing financial losses. It’s about maintaining customer trust in an increasingly sceptical digital marketplace. Every fraudulent transaction erodes confidence, and that’s a cost too high to bear in today’s competitive landscape. Businesses that take a multi-layered approach, integrating pre-KYC and KYC processes in a unified fraud prevention strategy, can stay one step ahead of fraudsters. The key is ensuring that fraud prevention tools – data-rich, AI-driven and flexible – are as adaptive as the threats they are designed to stop.
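
As a highly simplified sketch of how pre-KYC and KYC signals might be folded into a single risk decision (the signal names, weights, and threshold are invented for illustration; a production system would use trained models rather than hand-set weights):

```python
def risk_score(signals: dict) -> float:
    """Combine fraud signals into a 0-1 score; higher means riskier."""
    weights = {
        "device_reputation_bad": 0.35,
        "synthetic_identity_suspected": 0.40,
        "document_liveness_failed": 0.25,
    }
    return sum(w for name, w in weights.items() if signals.get(name, False))

applicant = {
    "device_reputation_bad": True,
    "synthetic_identity_suspected": False,
    "document_liveness_failed": True,
}
score = risk_score(applicant)
decision = "manual_review" if score >= 0.5 else "proceed_to_kyc"
print(score, decision)   # 0.6 manual_review
```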

Daily Tech Digest - June 01, 2025


Quote for the day:

"You are never too old to set another goal or to dream a new dream." -- C.S. Lewis


A wake-up call for real cloud ROI

To make cloud spending work for you, the first step is to stop, assess, and plan. Do not assume the cloud will save money automatically. Establish a meticulous strategy that matches workloads to the right environments, considering both current and future needs. Take the time to analyze which applications genuinely benefit from the public cloud versus alternative options. This is essential for achieving real savings and optimal performance. ... Enterprises should rigorously review their existing usage, streamline environments, and identify optimization opportunities. Invest in cloud management platforms that can automate the discovery of inefficiencies, recommend continuous improvements, and forecast future spending patterns with greater accuracy. Optimization isn’t a one-time exercise—it must be an ongoing process, with automation and accountability as central themes. Enterprises are facing mounting pressure to justify their escalating cloud spend and recapture true business value from their investments. Without decisive action, waste will continue to erode any promised benefits. ... In the end, cloud’s potential for delivering economic and business value is real, but only for organizations willing to put in the planning, discipline, and governance that cloud demands. 
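
As a toy example of the kind of inefficiency discovery a cloud management platform automates (the instance names, utilization figures, and threshold here are invented), flagging chronically underutilized resources is often the first pass of an ongoing optimization loop:

```python
# Hypothetical 30-day average CPU utilization per instance, pulled from monitoring.
instances = {"web-prod-1": 0.62, "analytics-dev-3": 0.04, "batch-legacy-7": 0.09}

UNDERUSED = 0.10   # review anything averaging under 10% CPU

for name, cpu in instances.items():
    if cpu < UNDERUSED:
        print(f"{name}: avg CPU {cpu:.0%} -> candidate for rightsizing or decommissioning")
```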


Why IT-OT convergence is a gamechanger for cybersecurity

The combination of IT and OT is a powerful one. It promises real-time visibility into industrial systems, predictive maintenance that limits downtime and data-driven decision making that gives everything from supply chain efficiency to energy usage a boost. When IT systems communicate directly with OT devices, businesses gain a unified view of operations – leading to faster problem solving, fewer breakdowns, smarter automation and better resource planning. This convergence also supports cost reduction through more accurate forecasting, optimised maintenance and the elimination of redundant technologies. And with seamless collaboration, IT and OT teams can now innovate together, breaking down silos that once slowed progress. Cybersecurity maturity is another major win. OT systems, often built without security in mind, can benefit from established IT protections like centralised monitoring, zero-trust architectures and strong access controls. Concurrently, this integration lays the foundation for Industry 4.0 – where smart factories, autonomous systems and AI-driven insights thrive on seamless IT-OT collaboration. ... The convergence of IT and OT isn’t just a tech upgrade – it’s a transformation of how we operate, secure and grow in our interconnected world. But this new frontier demands a new playbook that combines industrial knowhow with cybersecurity discipline.


How To Measure AI Efficiency and Productivity Gains

Measuring AI efficiency is a little like a "chicken or the egg" discussion, says Tim Gaus, smart manufacturing business leader at Deloitte Consulting. "A prerequisite for AI adoption is access to quality data, but data is also needed to show the adoption’s success," he advises in an online interview. ... The challenge in measuring AI efficiency depends on the type of AI and how it's ultimately used, Gaus says. Manufacturers, for example, have long used AI for predictive maintenance and quality control. "This can be easier to measure, since you can simply look at changes in breakdown or product defect frequencies," he notes. "However, for more complex AI use cases -- including using GenAI to train workers or serve as a form of knowledge retention -- it can be harder to nail down impact metrics and how they can be obtained." ... Measuring any emerging technology's impact on efficiency and productivity often takes time, but impacts are always among the top priorities for business leaders when evaluating any new technology, says Dan Spurling, senior vice president of product management at multi-cloud data platform provider Teradata. "Businesses should continue to use proven frameworks for measurement rather than create net-new frameworks," he advises in an online interview. 
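
For the "easier to measure" cases Gaus mentions, the measurement reduces to a before-and-after comparison; a small sketch with made-up numbers shows the pattern:

```python
# Hypothetical defect counts per 1,000 units, before and after deploying an AI quality-control model.
defects_before = 18.0
defects_after = 11.5

reduction = (defects_before - defects_after) / defects_before
print(f"Defect rate reduction: {reduction:.1%}")   # ~36.1%

# The same arithmetic applies to unplanned downtime hours under predictive maintenance.
downtime_before, downtime_after = 42.0, 28.0
print(f"Downtime reduction: {(downtime_before - downtime_after) / downtime_before:.1%}")
```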


The discipline we never trained for: Why spiritual quotient is the missing link in leadership

Spiritual Quotient (SQ) is the intelligence that governs how we lead from within. Unlike IQ or EQ, SQ is not about skill—it is about state. It reflects a leader’s ability to operate from deep alignment with their values, to stay centred amid volatility and to make decisions rooted in clarity rather than compulsion. It shows up in moments when the metrics don’t tell the full story, when stakeholders pull in conflicting directions, and when the team is watching not just what you decide but who you are while deciding it. It’s not about belief systems or spirituality in a religious sense; it’s about coherence between who you are, what you value, and how you lead. At its core, SQ is composed of several interwoven capacities: deep self-awareness, alignment with purpose, the ability to remain still and present amid volatility, moral discernment when the right path isn’t obvious, and the maturity to lead beyond ego. ... The workplace in 2025 is not just hybrid—it is holographic. Layers of culture, technology, generational values and business expectations now converge in real time. AI challenges what humans should do. Global disruptions challenge why businesses exist. Employees are no longer looking for charismatic heroes. They’re looking for leaders who are real, reflective and rooted.


Microsoft Confirms Password Deletion—Now Just 8 Weeks Away

The company’s solution is to first move autofill and then any form of password management to Edge. “Your saved passwords (but not your generated password history) and addresses are securely synced to your Microsoft account, and you can continue to access them and enjoy seamless autofill functionality with Microsoft Edge.” Microsoft has added an Authenticator splash screen with a “Turn on Edge” button as its ongoing campaign to switch users to its own browser continues. It’s not just with passwords, of course; there are the endless warnings and nags within Windows and even pointers within security advisories to switch to Edge for safety and security. ... Microsoft wants users to delete passwords once that’s done, so no legacy vulnerability remains, albeit Google has not gone quite that far as yet. You do need to remove SMS 2FA, though, and use an app or key-based code at a minimum. ... Notwithstanding these Authenticator changes, Microsoft users should use this as a prompt to delete passwords and replace them with passkeys, per the Windows maker’s advice. This is especially true given increasing reports of two-factor authentication (2FA) bypasses that are rendering basic forms of 2FA redundant.


Sustainable cyber risk management emerges as industrial imperative as manufacturers face mounting threats

The ability of a business to adjust, absorb, and continue operating under pressure is becoming a performance metric in and of itself, measured not only in uptime or safety statistics. It’s not a technical checkbox; it’s a strategic commitment that is becoming the new baseline for industrial trust and continuity. At the heart of this change lies security by design. Organizations are working to integrate security into OT environments from the ground up, from system architecture to vendor procurement and lifecycle management, rather than bolting on protections after deployment. ... The path is made more difficult by the acute shortage of OT cyber skills, which can be addressed by employing specialists and establishing long-term pipelines through internal reskilling, knowledge transfer procedures, and partnerships with universities. The ISA/IEC 62443 industrial cybersecurity standards give sustainable industrial cyber risk management a more organized structure. Thanks to these widely recognized models, cyber defense is now a continuous, sustainable discipline rather than an after-the-fact response; they also allow industries to link risk mitigation to real industrial processes, guarantee system interoperability, and measure progress against common benchmarks.


Design Sprint vs Design Thinking: When to Use Each Framework for Maximum Impact

The Design Sprint is a structured five-day process created by Jake Knapp during his time at Google Ventures. It condenses months of work into a single workweek, allowing teams to rapidly solve challenges, create prototypes, and test ideas with real users to get clear data and insights before committing to a full-scale development effort. Unlike the more flexible Design Thinking approach, a Design Sprint follows a precise schedule with specific activities allocated to each day ...
The Design Sprint operates on the principle of "together alone" – team members work collaboratively during discussions and decision-making, but do individual work during ideation phases to ensure diverse thinking and prevent groupthink. ... Design Thinking is well-suited for broadly exploring problem spaces, particularly when the challenge is complex, ill-defined, or requires extensive user research. It excels at uncovering unmet needs and generating innovative solutions for "wicked problems" that don't have obvious answers. The Design Sprint works best when there's a specific, well-defined challenge that needs rapid resolution. It's particularly effective when a team needs to validate a concept quickly, align stakeholders around a direction, or break through decision paralysis.


Broadcom’s VMware Financial Model Is ‘Ethically Flawed’: European Report

Some of the biggest issues for VMware cloud partners and customers in Europe include the company increasing prices after Broadcom axed VMware’s former perpetual licenses and pay-as-you-go monthly pricing models. Another big issue was VMware cutting its product portfolio from thousands of offerings into just a few large bundles that are only available via subscription with a multi-year minimum commitment. “The current VMware licensing model appears to rely on practices that breach EU competition regulations which, in addition to imposing harm on its customers and the European cloud ecosystem, creates a material risk for the company,” said the ECCO in its report. “Their shareholders should investigate and challenge the legality of such model.” Additionally, the ECCO said Broadcom recently made changes to its partnership program that forced partners to choose between being a cloud service provider or a reseller. “It is common in Europe for CSP to play both [service provider and reseller] roles, thus these new requirements are a further harmful restriction on European cloud service providers’ ability to compete and serve European customers,” the ECCO report said.


Protecting Supply Chains from AI-Driven Risks in Manufacturing

Cybercriminals are notorious for exploiting AI and have set their sights on supply chains. Supply chain attacks are surging, with current analyses indicating a 70% likelihood of cybersecurity incidents stemming from supplier vulnerabilities. Additionally, Gartner projects that by the end of 2025, nearly half of all global organizations will have faced software supply chain attacks. Attackers manipulate data inputs to mislead algorithms, disrupt operations or steal proprietary information. Hackers targeting AI-enabled inventory systems can compromise demand forecasting, causing significant production disruptions and financial losses. ... Continuous validation of AI-generated data and forecasts ensures that AI systems remain reliable and accurate. The “black-box” nature of most AI products, where internal processes remain hidden, demands innovative auditing approaches to guarantee reliable outputs. Organizations should implement continuous data validation, scenario-based testing and expert human review to mitigate the risks of bias and inaccuracies. While black-box methods like functional testing offer some evaluation, they are inherently limited compared to audits of transparent systems, highlighting the importance of open AI development.
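
A minimal sketch of the "continuous validation" idea for an AI demand forecast (the metric choice, figures, and threshold are illustrative assumptions): compare each forecast against actuals and route the model to expert review when error drifts beyond an agreed bound.

```python
def mape(forecast: list[float], actual: list[float]) -> float:
    """Mean absolute percentage error between forecasted and actual demand."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

forecast = [120, 135, 150, 160]   # units the AI system predicted
actual   = [118, 140, 170, 210]   # units actually ordered

error = mape(forecast, actual)
THRESHOLD = 0.10   # agreed acceptable error before escalating to human review

if error > THRESHOLD:
    print(f"MAPE {error:.1%} exceeds {THRESHOLD:.0%}: route forecast to expert review")
else:
    print(f"MAPE {error:.1%} within tolerance")
```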


What's the State of AI Costs in 2025?

This year's report revealed that 44% of respondents plan to invest in improving AI explainability. Their goals are to increase accountability and transparency in AI systems as well as to clarify how decisions are made so that AI models are more understandable to users. Juxtaposed with uncertainty around ROI, this statistic signals further disparity between organizations' usage of AI and accurate understanding of it. ... Of the companies that use third-party platforms, over 90% reported high awareness of AI-driven revenue. That awareness empowers them to confidently compare revenue and cost, leading to very reliable ROI calculations. Conversely, companies that don't have a formal cost-tracking system have much less confidence that they can correctly determine the ROI of their AI initiatives. ... Even the best-planned AI projects can become unexpectedly expensive if organizations lack effective cost governance. This report highlights the need for companies to not merely track AI spend but optimize it via real-time visibility, cost attribution, and useful insights. Cloud-based AI tools account for almost two-thirds of AI budgets, so cloud cost optimization is essential if companies want to stop overspending. Cost is more than a metric; it's the most strategic measure of whether AI growth is sustainable. As companies implement better cost management practices and tools, they will be able to scale AI in a fiscally responsible way, confidently measure ROI, and prevent financial waste.
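
The report's point about cost attribution comes down to simple arithmetic once spend and revenue are actually tracked per initiative; a sketch with invented monthly figures:

```python
# Hypothetical monthly figures for one AI initiative, attributed via cost tagging.
costs = {"cloud_inference": 42_000, "training_runs": 18_000, "third_party_apis": 9_000}
attributed_revenue = 110_000   # revenue the business credits to this initiative

total_cost = sum(costs.values())
roi = (attributed_revenue - total_cost) / total_cost
print(f"Total cost: ${total_cost:,}  ROI: {roi:.1%}")   # ROI: ~59.4%
```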