Daily Tech Digest - June 12, 2025


Quote for the day:

"It takes a lot of courage to show your dreams to someone else." -- Erma Bombeck


Tech Burnout: CIOs Might Be Making It Worse

“CIOs often unintentionally worsen burnout by underestimating the human toll of constant context switching, unclear priorities, and always-on availability. In the rush to stay competitive with AI-driven initiatives, teams are pushed to deliver faster without enough buffer for testing, reflection, or recovery,” Marceles adds. In the end, it’s the panic surrounding AI adoption, and not the technology itself, that’s accelerating burnout. The panic is running hot and high, surpassing anything CIOs and IT teams consider normal. “The pressure to adopt AI everywhere is real, and CIOs are feeling it from every angle -- executives, investors, competitors. But when that pressure gets passed down as back-to-back initiatives with no breathing room, it fractures the team. Engineers get pulled into AI pilots without proper training. IT staff are asked to maintain legacy systems while onboarding new automation tools. And all of it happens under the expectation that this is just ‘the new normal,’” says Cahyo Subroto, founder of MrScraper, a data scraping tool. ... “What gets lost is the human capacity behind the tech. We don’t talk enough about how context-switching and unclear priorities drain cognitive energy. When everything is labeled critical, people lose the ability to focus. Productivity drops. Morale sinks. And burnout sets in quietly, until key people start leaving,” Subroto says.


Asset sprawl, siloed data and CloudQuery’s search for unified cloud governance

“The biggest challenge with existing tools is that they’re siloed — one for security, one for cost, one for asset inventory — making it hard to get a unified view across domains,” CQ founder Yevgeny Pats told VentureBeat. “Even simple questions like ‘What EBS volume is attached to an EC2 that is turned off?’ are hard to answer without stitching together multiple tools.” ... Taking a developer-first approach is critical, said Pats, because developers are ultimately the ones building, operating and securing today’s cloud infrastructure. Still, many cloud visibility tools were built for top-down governance, not for the people actually in the trenches. “When you put developers first, with accessible data, flexible APIs and native language like SQL, you empower them to move faster, catch issues earlier and build more securely,” he said. Customers are finding ways to use CloudQuery beyond asset inventory. ... “Having a fully serverless solution was an important requirement,” Hexagon cloud governance and FinOps expert Peter Figueiredo and CloudQuery director of engineering Herman Schaaf wrote in a blog post. “This decision brought lots of benefits since there is no need for time-consuming updates and virtually zero maintenance.”
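
To make Pats’ example concrete, here is a sketch of how that question might be answered with a single SQL query against a CloudQuery-synced Postgres database. The table and column names are assumptions for illustration; check the CloudQuery AWS plugin schema for the exact ones in your deployment.

    import psycopg2  # pip install psycopg2-binary

    # Hypothetical schema; adjust table/column names to your CloudQuery sync.
    QUERY = """
    SELECT v.volume_id, i.instance_id
    FROM aws_ec2_ebs_volumes AS v
    JOIN aws_ec2_instances AS i ON v.attached_instance_id = i.instance_id
    WHERE i.state = 'stopped';
    """

    with psycopg2.connect("dbname=cloudquery") as conn, conn.cursor() as cur:
        cur.execute(QUERY)
        for volume_id, instance_id in cur.fetchall():
            print(f"{volume_id} is attached to stopped instance {instance_id}")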


Digital twins combine with AI to help manage complex systems

And it’s not just AI making digital twins better. The digital twins can also make for better AI. “We’re using digital twins to actually generate information for large language models,” says PwC’s Likens, adding that the synthetic data is of better quality when it comes from a digital twin. “We see opportunity to have the digital twins generate the missing pieces of data we need, and it’s more in line with the environment because it’s based on actual data.” A digital twin is a working model of a system, says Gareth Smith, GM of software test automation at Keysight Technologies, an electronics company. “It’ll respond in a way that mimics the expected response of the physical system.” ... Another potential use case for digital twins that might become more relevant this year is to help with understanding and scaling agentic AI systems. Agentic AI allows companies to automate complex business processes, such as solving customer problems, creating proposals, or designing, building, and testing software. The agentic AI system can be composed of multiple data sources, tools, and AI agents, all interacting in non-deterministic ways. That can be extremely powerful, but extremely dangerous. So a digital twin can monitor the behavior of an agentic system to ensure it doesn’t go off the rails, and test and simulate how the system will react to novel situations.


Will Quantum Computing Kill Bitcoin?

If a technological advance were to render these assets insecure, the consequences could be severe. Cryptocurrencies function by ensuring that only authorized parties can modify the blockchain ledger. In Bitcoin’s case, this means that only someone with the correct private key can spend a given amount of Bitcoin. ... Quantum computers, however, operate on different principles. Thanks to phenomena like superposition and entanglement, they can perform many calculations in parallel. In 1994, mathematician Peter Shor developed a quantum algorithm capable of factoring large numbers exponentially faster than classical methods. ... Could quantum computing kill Bitcoin? In theory, yes: if Bitcoin failed to adapt and quantum computers suddenly became powerful enough to break its encryption, its value would plummet. But this scenario assumes crypto stands still while quantum computing advances, which is highly unlikely. The cryptographic community is already preparing, and the financial incentives to preserve the integrity of Bitcoin are enormous. Moreover, if quantum computers become capable of breaking current encryption methods, the consequences would extend far beyond Bitcoin. Secure communications, financial transactions, digital identities, and national security all depend on encryption. In such a world, the collapse of Bitcoin would be just one of many crises.


Smaller organizations nearing cybersecurity breaking point

Small and medium enterprises (SMEs) that do have budget to hire specialists often struggle to attract and retain skilled professionals due to the lack of variation in the role. Burnout is also a growing issue for the understaffed, underqualified IT teams common in small businesses. “With limited resource in the business, employees are often wearing multiple hats and the pressure to manage cybersecurity on top of their regular duties can lead to fatigue, missed threats, and higher turnover,” Exelby says. ... SMEs often mistakenly believe that cyber attackers only target larger organizations, but that’s often not the case — particularly because small business partners of larger companies are often deliberately targeted as part of supply chain attacks. “Threats are becoming more advanced but their resources aren’t keeping pace,” says Kristian Torode, director and co-founder of Crystaline, a specialist in SME cybersecurity. “Many SMEs are still relying on outdated systems or don’t have dedicated security teams in place, making them an easy target.” Torode adds: “They’re also seen by cybercriminals as an exploitable link in the supply chain, since they often work with larger enterprises.” “SMEs have traditionally been low-hanging fruit — with limited resources for cybersecurity training, advanced tools, or dedicated security teams,” Adam Casey, director of cybersecurity and CISO at cloud security firm Qodea, tells CSO.


Want fewer security fires to fight? Start with threat modeling

Some CISOs begin with one critical system or pilot project. From there, they build templates, training materials, and internal champions who help scale the practice across teams. Incorporating threat modeling into an organization’s development lifecycle doesn’t have to be daunting. In fact, it shouldn’t be, according to David Kellerman, Field CTO of Cymulate. “The key is to start small and make threat modeling approachable,” Kellerman says. Rather than rolling out a heavyweight process full of complex methodologies, CISOs should look for ways to embed threat modeling into workflows that teams already use. “I advise CISOs to embed threat modeling into existing workflows, such as architecture reviews, design discussions, or sprint planning, rather than creating separate, burdensome exercises.” This lightweight, integrated approach not only reduces resistance but helps normalize secure thinking within engineering culture. “Use simple frameworks like STRIDE or basic attacker storyboarding that non-security engineers can easily grasp,” Kellerman explains. “Make it collaborative and educational, not punitive.” As teams gain familiarity and confidence, organizations can gradually evolve their threat modeling capabilities. “The goal isn’t to build a perfect threat model on day one,” Kellerman says. “It’s to establish a security mindset that grows naturally within engineering culture.”
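
A lightweight STRIDE pass can literally be a checklist. Here is a sketch of what a sprint-planning-sized version might look like in Python; the component name is illustrative, and the prompts paraphrase the standard STRIDE categories:

    STRIDE = {
        "Spoofing": "Can someone pretend to be this component or its users?",
        "Tampering": "Can data or code be modified in transit or at rest?",
        "Repudiation": "Can actions be performed without an audit trail?",
        "Information disclosure": "Can data leak to unauthorized parties?",
        "Denial of service": "Can the component be made unavailable?",
        "Elevation of privilege": "Can a user gain rights they should not have?",
    }

    def review(component):
        # Prompts a non-security engineer can answer during a design review
        for threat, question in STRIDE.items():
            print(f"[{component}] {threat}: {question}")

    review("payments-api")  # illustrative component under review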


Rethinking Success in Security: Why Climbing the Corporate Ladder Isn’t Always the Goal

In the security field, like in many other fields, there seems to be constant pressure to advance. For whatever reason, the choice to climb the corporate ladder seems to garner far more reverence and respect than the choice to develop expertise and skills in one particular area of specialization. In other words, the decision to go higher and broader seems to be lauded more than the decision to go deeper and more focused. Yet, both are important in their own right. There are certain times in a security professional’s career when they find themselves at a crossroads – confronted by this issue. One career path is not more “correct” than another one. Which direction is right is an individual choice that depends on many factors. ... It is the sad reality of the security field that we don’t show our respect and appreciation for our colleagues enough. That being said, the respect is there. See, one important thing to keep in mind is that respect is earned – not ordained or otherwise granted. If you are a great security professional, people take notice. You shouldn’t feel compelled to attain a specific title, paygrade, or otherwise just to get some respect. The dirty secret in the industry is that just because someone is in a higher-level role, it doesn’t mean that people respect them.


The AI data center boom: Strategies for sustainable growth and risk management

Data center developers are experiencing extended lead times for critical equipment such as generators, switchgear, power distribution units (PDUs) and cooling systems. Global shortages in semiconductors and electrical components are still impacting timelines. Additionally, uncertainty regarding tariffs is further complicating procurement and planning processes, as potential changes in trade policies could affect the cost and availability of these essential components. ... Data center owners are increasingly trying to use low-carbon materials to decarbonize both the centers and construction operations. This approach includes concrete that permanently traps carbon dioxide and steel produced using renewable energy. Microsoft is now building its first data centers made with structural mass timber to slash the use of steel and concrete, which are among the most significant sources of carbon emissions. ... Fires in data centers are typically caused by a breakdown of machinery, plant or equipment. A fire that spreads quickly can result in significant financial losses and business interruption. While the structures for data centers often have concrete frames that are not significantly impacted by fires, it’s the high-value equipment that drives losses – from cooling technology to high-end computer servers or graphic card components.


Managing software projects is a double-edged sword

Doing two platform shifts in six months was beyond challenging—it was absurd. We couldn’t have hacked together a half-baked version for even one platform in that time. It was flat-out impossible. Let’s just say I was quite unhappy with this request. It was completely unreasonable. My team of developers was being asked to work evenings and weekends on a task that was guaranteed to fail. The subtle implication that we were being rebellious and dishonest was difficult to swallow. So I set about making my position clear. I tried to stay level-headed, but I’m sure that my irritation showed through. I fought hard to protect my team from a pointless death march—my time in the Navy had taught me that taking care of the team was my top job. My protestations were met with little sympathy. My boss, who like me came from the software development tool company, certainly knew that the request was unreasonable, but he told me that while it was a challenge, we just needed to “try.” This, of course, was the seed of my demise. I knew it was an impossible task, and that “trying” would fail. How do you ask your team to embark on a task that you know will fail miserably and that they know will fail miserably? Well, I answered that question very poorly.


The CIO Has Evolved. It's Time the Board Catches Up

Across industries, CIOs have risen to meet the moment. They are at the helm of transformation strategies with business peers and drive digital revenue models. They even partner with CFOs to measure value, CMOs to reimagine customer experience and COOs to build data-driven models. ... CIOs have evolved. But if boards continue to treat them as back-room managers instead of strategic partners, they are underutilizing one of the most strategic roles in the enterprise. ... Today, every company is a technology company. AI, automation, cloud and digital platforms aren't just enablers. They form the foundation for competitive advantage and new revenue models. Similarly, cybersecurity is no longer just an IT challenge, it's a board-level fiduciary responsibility. Boards, however, predominantly engage with CIOs in a transactional manner. Issues such as budget approvals, risk reviews and project updates are common conversations. CIOs are rarely invited into conversations related to growth strategy, market reinvention or long-term capital allocation. This disconnect is proving to be a strategic liability. ... In industries where technology is the differentiator, CIOs should not merely be in the boardroom, they should be shaping its agenda. Because if CIOs are empowered to lead, organizations don't just avoid risk, they build resilience, relevance and reinvention.

Daily Tech Digest - June 11, 2025


Quote for the day:

"The key to success is to focus on goals, not obstacles." -- Unknown



The future of RPA ties to AI agents

“Unlike RPA bots, which follow predefined rules, AI agents are learning from data, making decisions, and adapting to changing business logic,” Khan says. “AI agents are being used for more flexible tasks such as customer interactions, fraud detection, and predictive analytics.” Khan sees RPA’s role shifting in the next three to five years, as AI agents become more prevalent. Many organizations will embrace hyperautomation, which uses multiple technologies, including RPA and AI, to automate business processes. “Use cases for RPA most likely will be integrated into broader AI-powered workflows instead of functioning as standalone solutions,” he says. ... “RPA isn’t dying — it’s evolving,” he says. “We’ve tested various AI solutions for process automation, but when you need something to work the same way every single time — without exceptions, without interpretations — RPA remains unmatched.” Radich and other automation experts see AI agents eventually controlling RPA bots, with various robotic processes in a toolbox for agents to choose from. “Today, we build separate RPA workflows for different scenarios,” Radich says. “Tomorrow, with our agentic capabilities, an agent will evaluate an incoming request and determine whether it needs RPA for data processing, API calls for system integration, or human handoff for complex decisions.”
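
The routing pattern Radich describes can be sketched in a few lines of Python. The request kinds and handler names below are illustrative assumptions, not any vendor's API:

    def run_rpa_workflow(req):       # deterministic bot: same steps every time
        return f"RPA handled request {req['id']}"

    def call_api(req):               # structured system-to-system integration
        return f"API handled request {req['id']}"

    def escalate_to_human(req):      # judgment calls leave the automated path
        return f"human will handle request {req['id']}"

    def route(req):
        """Toy agent-style dispatcher: evaluate a request, pick a tool."""
        if req["kind"] == "data_processing":
            return run_rpa_workflow(req)
        if req["kind"] == "system_integration":
            return call_api(req)
        return escalate_to_human(req)

    print(route({"id": 1, "kind": "data_processing"}))   # -> RPA
    print(route({"id": 2, "kind": "complex_decision"}))  # -> human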


The path to better cybersecurity isn’t more data, it’s less noise

SOCs deal with tens of thousands of alerts every day. It’s more than any person can realistically keep up with. When too much data comes in at once, things get missed. Responses slow down and, over time, the constant pressure can lead to burnout. ... The trick is to start spotting patterns. Look at what helped in past investigations. Was it a login from an odd location? An admin running commands they normally don’t? A device suddenly reaching out to strange domains? These are the kinds of details that stand out once you understand what typical system behavior looks like. At first, you won’t. That’s okay. Spend time reading through old incident reports. Watch how the team reacts to real alerts. Learn which ones actually spark investigations and which ones get dismissed without a second glance. ... Start by removing logs and alerts that don’t add value. Many logs are never looked at because they don’t contain useful information. Logs showing every successful login might not help if those logins are normal. Some logs repeat the same information, like system status messages. ... Next, think about how long to keep different types of logs. Not all logs need to be saved for the same amount of time. Network traffic logs might only be useful for a few days because threats usually show up quickly. 
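
That pruning can be made concrete with a small sketch. Here is a minimal Python filter in the spirit of the advice above; the event types, known locations, and retention windows are illustrative assumptions:

    KNOWN_LOCATIONS = {"office-ams", "office-nyc"}             # assumed baseline
    RETENTION_DAYS = {"network": 7, "auth": 365, "status": 1}  # per-type retention

    def is_noise(event):
        # Routine successful logins from known locations rarely spark investigations
        if event["type"] == "login_success" and event.get("location") in KNOWN_LOCATIONS:
            return True
        # Repetitive status heartbeats add volume, not signal
        return event["type"] == "status"

    events = [
        {"type": "login_success", "location": "office-ams"},   # dropped
        {"type": "login_success", "location": "unknown-geo"},  # kept: odd location
        {"type": "status"},                                    # dropped
    ]
    kept = [e for e in events if not is_noise(e)]
    print(kept, "| network logs retained for", RETENTION_DAYS["network"], "days")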


The EU challenges Google and Cloudflare with its very own DNS resolver that can filter dangerous traffic

DNS4EU aims to be an alternative to major US-based public DNS services (like Google and Cloudflare) to boost the EU's digital autonomy by reducing European reliance on foreign infrastructure. This isn't only an EU-developed DNS, though. DNS4EU comes with built-in filters against malicious domains, like those hosting malware, phishing, or other cybersecurity threats. The home user version also includes the option to block ads and/or adult content. ... DNS4EU, which the EU insists "will not be forced on anyone," has been developed to meet different users' needs. The home users' version is a public and free DNS resolver that comes with the option to add filters to block ads, malware, adult content, or all of these, or none. There's also a dedicated version for government entities and telecom providers that operate within the European Union. As mentioned earlier, DNS4EU comes with a built-in filter to block dangerous traffic alongside the ability to provide regional threat intelligence. This means that a malicious threat discovered in one country could be blocked simultaneously across several regions and countries, de facto halting its spread. ... The Senior Director for European Government and Regulatory Affairs at the Internet Society, David Frautschy Heredia, also warns against potential risks related to content filtering, arguing that "safeguards should be developed to prevent abuse."
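
Resolver behavior like this is easy to probe from code. A minimal sketch using the dnspython library; the nameserver address below is a documentation placeholder, so substitute the published DNS4EU address for the filtering profile you want:

    import dns.resolver  # pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["203.0.113.1"]  # placeholder, not the real DNS4EU address

    # A domain on the malicious-domain filter should fail to resolve (typically
    # NXDOMAIN), while ordinary domains answer as usual.
    try:
        for rr in resolver.resolve("example.com", "A"):
            print("resolved:", rr.address)
    except dns.resolver.NXDOMAIN:
        print("blocked or nonexistent domain")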


AgenticOps: How Cisco is Rewiring Network Operations for the AI Age

AI Canvas is where AgenticOps comes to life. It’s the industry’s first generative UI built for cross-domain IT operations, unifying NetOps, SecOps, IT, and executives into one collaborative environment. Powered by real-time telemetry from Meraki, ThousandEyes, Splunk, and more, AI Canvas brings together data from across the stack into one intelligent, always-on view. But this isn’t just visibility. It’s AI already operating. When a service issue hits, AI Canvas pulls in the right data, connects the dots, and surfaces a live picture of what matters—before anyone even asks. Every session starts with context, whether launched by AI or by an IT engineer. Embedded into the AI Canvas is the Cisco AI Assistant, your interface to the agentic system. Ask a question in natural language. Dig into root cause. Explore options. The AI Assistant guides you through diagnostics, decisions, and actions, all grounded in live telemetry. And when you’re ready to share, just drag your findings into AI Canvas. From there, with one click you can invite collaborators—and that’s when the canvas comes fully alive. Every insight becomes part of a shared investigation with AI Canvas actively thinking, collaborating, and evolving the UI at every step. But it doesn’t stop at diagnosis—AI Canvas acts. It applies changes, monitors impact, and shares outcomes in real time.


8 things CISOs have learned from cyber incidents

Brown believes there are often important lessons that come out of breaches, whether it’s high-profile ones that end up in textbooks and university courses, or experiences that can be shared among peers through conference panels and other events. “Always look for good to come from events. How can you help the industry forward? Can you help the CISO community?” he says. ... Many incident-hardened CISOs will shift their approach and their mindset about experiencing an attack first-hand. “You’ll develop an attack-minded perspective, where you want to understand your attack surface better than your adversary, and apply your resources accordingly to insulate against risk,” says Cory Michel, VP security and IT at AppOmni, who’s been on several incident response teams. In practice, shifting from defense to offense means preparing for different types of incidents, be it platform abuse, exploitation or APTs, and tailoring responses. ... The playbook needs clear guidance on communication, during and after an incident, because this can be overlooked while dealing with the crisis, but in the end, it may come to define the lasting impact of a breach that becomes common knowledge. “Every word matters during a crisis,” says Brown. “Of what you publish, what you say, how you say it. So, it’s very important to be prepared for that.”


The five security principles driving open source security apps at scale

Open-source AI’s ability to act as an innovation catalyst is proven. What is unknown is the downside or the paradox that’s being created with the all-out focus on performance and the ubiquity of platform development and support. At the center of the paradox for every company building with open-source AI is the need to keep it open to fuel innovation, yet gain control over security vulnerabilities and the complexity of compliance. ... Regulatory compliance is becoming more complex and expensive, further fueling the paradox. Startup founders, however, tell VentureBeat that the high costs of compliance can be offset by the data their systems generate. They’re quick to point out that they do not intend to deliver governance, risk, and compliance (GRC) solutions; however, their apps and platforms are meeting the needs of enterprises in this area, especially across Europe. ... “EU AI Act, for example, is starting its enforcement in February, and the pace of enforcement and fines is much higher and aggressive than GDPR. From our perspective, we want to help organizations navigate those frameworks, ensuring they’re aware of the tools available to leverage AI safely and map them to risk levels dictated by the Act.”


What We Wish We Knew About Container Security

Each container maps to a process ID in Linux. The illusion of separation is created using kernel namespaces. These namespaces hide resources like filesystems, network interfaces and process trees. But the kernel remains shared. That shared kernel becomes the attack surface. And in the event of a container escape, that attack surface becomes a liability. Common attack vectors include exploiting filesystem mounts, abusing symbolic links or leveraging misconfigured privileges. These exploits often target the host itself. Once inside the kernel, an attacker can affect other containers or the infrastructure that supports them. This is not just theoretical. Container escapes happen, and when they do, everything on that node becomes suspect. ... Virtual machines fell out of favor because of performance overhead and slow startup times. But many of those drawbacks have since been addressed. Projects leveraging paravirtualization, for example, now offer performance comparable to containers while restoring strong workload isolation. Paravirtualization modifies the guest OS to interact efficiently with the hypervisor. It eliminates the need to emulate hardware, reducing latency and improving resource usage. Several open source projects have explored this space, demonstrating that it’s possible to run containers within lightweight virtual machines. 
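
The shared kernel is easy to observe first-hand. Here is a Linux-only Python sketch that prints a process's namespace identifiers; processes in the same container report identical IDs while the host's differ, yet one kernel serves them all:

    import os

    # Each entry under /proc/self/ns names a kernel namespace and its inode ID.
    # Matching IDs across processes mean shared namespaces; the kernel itself is
    # always shared regardless, which is exactly the attack surface at issue.
    for ns in sorted(os.listdir("/proc/self/ns")):
        print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))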


The unseen risks of cloud data sharing and how companies can safeguard intellectual property

Intellectual property lies at the core of many technology-driven sectors, particularly software development, pharmaceuticals, and design innovation. For companies in these fields, IP theft can have serious consequences. Unfortunately, cybercriminals increasingly target valuable IP because it can be sold or used to undermine the original creators. According to the Verizon 2025 Data Breach Investigations Report, nearly 97 per cent of these attacks in the Asia-Pacific region are fuelled by social engineering, system intrusion and web app attacks. This alarming trend highlights the urgent need for stronger data protection measures. ... While cloud platforms present unique challenges for securing IP, they also offer some potential solutions. One of the most effective ways to protect data is through encryption. Encrypting files before they are uploaded to the cloud ensures that even if unauthorised access is gained, the data remains unreadable without the proper decryption key. For organisations that rely on cloud platforms for collaboration, file-level encryption is crucial. This form of encryption ensures that sensitive data is protected not just at rest but throughout its entire lifecycle in the cloud. Many cloud platforms offer built-in encryption tools, but companies can also implement third-party solutions to enhance the protection of their intellectual property.
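
As a concrete illustration, here is a minimal sketch of encrypting a file client-side before upload, using the Python cryptography library. The filename is illustrative, and a real deployment would generate the key once and keep it in a KMS or HSM rather than in the script:

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in practice: store in a KMS/HSM, never with the data
    fernet = Fernet(key)

    with open("design-spec.docx", "rb") as fh:  # illustrative sensitive file
        ciphertext = fernet.encrypt(fh.read())

    with open("design-spec.docx.enc", "wb") as fh:
        fh.write(ciphertext)

    # Upload only the .enc file: the cloud provider stores ciphertext, and the
    # data stays unreadable without the key held outside the platform.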


The Critical Role of a Data Pipeline in Security

By implementing a data pipeline and prioritizing the optimization and reduction of data volume before it reaches the SIEM, organizations can stay on budget and still ensure that all necessary data can be thoroughly examined. Data pipelines also lead to tangible reductions in both storage and processing expenses. ... The decrease in the sheer volume of data that the SIEM must handle directly can significantly reduce the total cost of SIEM operations. In addition to volume reduction, data pipelines improve the quality of data delivered to SIEMs and other tools — filtering out repetitive noise and enriching logs for faster queries, increased relevance, and prioritization of the most critical security events. Data pipelines also introduce efficiency by automating the collection, processing, and routing of data. By reducing alert fatigue through intelligent anomaly detection and prioritization, data pipelines can significantly speed up incident resolution times. Beyond immediate threat detection and cost savings, data pipelines also aid in maintaining compliance with privacy regulations like GDPR, CCPA, and PCI. They help provide clear data lineage, making it easier to track the origin and transformations of data. 
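
A toy Python sketch of that filter-enrich-route stage makes the idea concrete; the severity threshold, field names, and ownership map are illustrative assumptions:

    OWNERS = {"web-01": "platform-team"}  # assumed asset-ownership lookup

    def pipeline(raw_events, siem, archive):
        for event in raw_events:
            if event["severity"] < 3:  # low-value noise never reaches the SIEM
                archive.append(event)  # cheap storage preserves lineage for compliance
                continue
            event["asset_owner"] = OWNERS.get(event["host"], "unknown")  # enrichment
            siem.append(event)

    siem, archive = [], []
    pipeline(
        [{"severity": 1, "host": "web-01"}, {"severity": 5, "host": "web-01"}],
        siem, archive,
    )
    print(len(siem), "event(s) to SIEM;", len(archive), "archived")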


Why you need diverse third-party data to deliver trusted AI solutions

Data diversity refers to the variety and representation of different attributes, groups, conditions, or contexts within a dataset. It ensures that the dataset reflects the real-world variability in the population or phenomenon being studied. The diversity of your data helps ensure that the insights, predictions, and decisions derived from it are fair, accurate, and generalizable. ... Before you start your data analysis, it’s important to understand what you want to do with your data. A keen understanding of your use cases and data applications can help identify gaps and hypotheses you need to work to solve. It also gives you a method for seeking the data that fits your specific use case. In the same way, starting with a clear question provides direction, focus, and purpose to the whole process of text data analysis. Without one, you’ll inevitably gather irrelevant data, overlook key variables, or find yourself looking at a dataset that’s irrelevant to what you actually want to know. ... When certain voices, topics, or customer segments are over- or underrepresented in the data, models trained on that data may produce skewed results: misunderstanding user needs, overlooking key issues, or favoring one group over another. This can result in poor customer experiences, ineffective personalization efforts, and biased decision-making. 
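
One way to make diversity measurable is to check how each group is represented before training or analysis. A small sketch with made-up data; the 20% threshold is an illustrative assumption, not a standard:

    from collections import Counter

    def representation(records, attr):
        counts = Counter(r[attr] for r in records)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    # Illustrative dataset: customer feedback dominated by one segment
    feedback = [{"segment": "enterprise"}] * 90 + [{"segment": "smb"}] * 10
    shares = representation(feedback, "segment")
    flagged = [g for g, share in shares.items() if share < 0.2]

    print(shares)                        # {'enterprise': 0.9, 'smb': 0.1}
    print("underrepresented:", flagged)  # ['smb']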

Daily Tech Digest - June 10, 2025


Quote for the day:

"Life is not about finding yourself. Life is about creating yourself." -- Lolly Daskal


AI Is Making Cybercrime Quieter and Quicker

The rise of AI-enabled cybercrime is no longer theoretical. Nearly 72% of organisations in India said that they have encountered AI-powered cyber threats in the past year. These threats are scaling fast, with a 2X increase reported by 70% and a 3X increase by 12% of organisations. This new class of AI-powered threats is harder to detect and often exploits weaknesses in human behaviour, misconfigurations, and identity systems. In India, the top AI-driven threats reported include AI-assisted credential stuffing and brute force attacks, deepfake impersonation in business email compromise (BEC), AI-powered (polymorphic) malware, automated reconnaissance of attack surfaces, and AI-generated phishing emails. ... The most disruptive threats are no longer the most obvious. Topping the list are unpatched and zero-day exploits, followed closely by insider threats, cloud misconfigurations, software supply chain attacks, and human error. These threats are particularly damaging because they often go undetected by traditional defences, exploiting internal weaknesses and visibility gaps. As a result, these quieter, more complex risks are now viewed as more dangerous than well-known threats like ransomware or phishing. Traditional threats such as phishing and malware are still growing at a rate of ~10%, but this is comparatively modest — likely due to mature defences like endpoint protection and awareness training.


The Evolution and Future of the Relationship Between Business and IT

IT professionals increasingly serve as translators — converting executive goals into technical requirements, and turning technical realities into actionable business decisions. This fusion of roles has also led to the rise of cross-functional “fusion teams,” where IT and business units co-own projects from ideation through execution. ... Artificial Intelligence is already influencing how decisions are made and systems are managed. From intelligent automation to predictive analytics, AI is redefining productivity. According to a PwC report, AI is expected to contribute over $15 trillion to the global economy by 2030 — and IT organizations will play a pivotal role in enabling this transformation. At the same time, the lines between IT and the business will continue to blur. Platforms like low-code development tools, AI copilots, and intelligent data fabrics will empower business users to create solutions without traditional IT support — requiring IT teams to pivot further into governance, enablement, and strategy. Security, compliance, and data privacy will become even more important as businesses operate across fragmented and federated environments. ... The business-IT relationship has evolved from one rooted in infrastructure ownership to one centered on service integration, strategic alignment, and value delivery. IT is no longer just the department that runs servers or writes code — it’s the nervous system that connects capabilities, ensures reliability, and enables growth.


Can regulators trust black-box algorithms to enforce financial fairness?

Regulators, in their attempt to maintain oversight and comparability, often opt for rules-based regulation, said DiRollo. These are prescriptive, detailed requirements intended to eliminate ambiguity. However, this approach unintentionally creates a disproportionate burden on smaller institutions, he continued: “Each bank must effectively build its own data architecture to interpret and implement regulatory requirements. For instance, calculating Risk-Weighted Assets (RWAs) requires banks to collate data across a myriad of systems, map this data into a bespoke regulatory model, apply overlays and assumptions to reflect the intent of the rule and interpret evolving guidance and submit reports accordingly.” ... The second issue is regulatory arbitrage: larger institutions with more sophisticated modelling capabilities can structure their portfolios or data in ways that reduce regulatory burdens without a corresponding reduction in actual risk. “The implication is stark: the fairness that regulators seek to enforce is undermined by the very framework designed to ensure it,” said DiRollo. While institutions pour effort into interpreting rules and submitting reports, the focus drifts from identifying and managing real risks. In practice, compliance becomes a proxy for safety – a dangerous assumption, in the words of DiRollo.


The legal questions to ask when your systems go dark

Legal should assume the worst and lean into their natural legal pessimism. There’s very little time to react, and it’s better to overreact than underreact (or not react at all). The legal context around cyber incidents is broad, but assume the worst-case scenario like a massive data breach. If that turns out to be wrong, even better! ... Even if your organization has a detailed incident response plan, chances are no one’s ever read it and that there will be people claiming “that’s not my job.” Don’t get caught up in that. Be the one who brings together management, IT, PR, and legal at the same table, and coordinate efforts from the legal perspective. ... If that means “my DPO will check the ROPA” – congrats! But if your processes are still a work in progress, you’re likely about to run a rapid, ad hoc data inventory: involving all departments, identifying data types, locations, and access controls. Yes, it will all be happening while systems are down and everyone’s panicking. But hey – serenity now, emotional damage later. You literally went to law school for this. ... You, as in-house or external legal support, really have to understand the organization and how its tech workflows actually function. I dream of a world where lawyers finally stop saying “we’ll just do the legal stuff,” because “legal stuff” remains abstract and therefore ineffective if you don’t put it in the context of a particular organization.


New Quantum Algorithm Factors Numbers With One Qubit

Ultimately, the new approach works because of how it encodes information. Classical computers use bits, which can take one of two values. Qubits, the quantum equivalent, can take on multiple values, because of the vagaries of quantum mechanics. But even qubits, once measured, can take on only one of two values, a 0 or a 1. But that’s not the only way to encode data in quantum devices, say Robert König and Lukas Brenner of the Technical University of Munich. Their work focuses on ways to encode information with continuous variables, meaning they can take on any values in a given range, instead of just certain ones. ... In the past, researchers have tried to improve on Shor’s algorithm for factoring by simulating a qubit using a continuous system, with its expanded set of possible values. But even if your system computes with continuous qubits, it will still need a lot of them to factor numbers, and it won’t necessarily go any faster. “We were wondering whether there’s a better way of using continuous variable systems,” König said. They decided to go back to basics. The secret to Shor’s algorithm is that it uses the number it’s factoring to generate what researchers call a periodic function, which has repeating values at regular intervals. Then it uses a mathematical tool called a quantum Fourier transform to identify the value of that period — how long it takes for the function to repeat.
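
The classical skeleton of that idea is small enough to run. Here is a Python sketch of period-finding-based factoring; the period search is brute force, which is precisely the step the quantum Fourier transform accelerates:

    from math import gcd

    def find_period(a, N):
        # Smallest r > 0 with a^r = 1 (mod N). Brute force classically; a
        # quantum Fourier transform extracts r exponentially faster.
        x, r = a % N, 1
        while x != 1:
            x, r = (x * a) % N, r + 1
        return r

    def factor(N, a=2):
        if gcd(a, N) != 1:
            return gcd(a, N)  # lucky guess: a already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            return gcd(pow(a, r // 2, N) - 1, N)
        return None  # this choice of a failed; retry with another

    print(factor(15))  # period of 2 mod 15 is 4, and gcd(2**2 - 1, 15) = 3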


What Are Large Action Models?

LAMs are LLMs trained on specific actions and enhanced with real connectivity to external data and systems. This makes the agents they power more robust than basic LLMs, which are limited to reasoning, retrieval and text generation. Whereas LLMs are more general-purpose, trained on a large data corpus, LAMs are more task-oriented. “LAMs fine-tune an LLM to specifically be good at recommending actions to complete a goal,” Jason Fournier, vice president of AI initiatives at the education platform Imagine Learning, told The New Stack. ... LAMs trained on internal actions could streamline industry-specific workflows as well. Imagine Learning, for instance, has developed a curriculum-informed AI framework to support teachers and students with AI-powered lesson planning. Fournier sees promise in automating administrative tasks like student registration, synthesizing data for educators and enhancing the learning experience. Or, Willson said, consider marketing: “You could tell an agentic AI platform with LAM technology, ‘Launch our new product campaign for the ACME software across all our channels with our standard messaging framework.'” Capabilities like this could save time, ensure brand consistency, and free teams to focus on high-level strategy.
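
What "recommending actions" looks like in practice is structured output an orchestrator can execute. Here is a sketch of one plausible shape; the action names and fields are illustrative assumptions, not any vendor's schema:

    from dataclasses import dataclass

    @dataclass
    class Action:
        tool: str        # which system capability to invoke
        arguments: dict  # parameters the tool needs
        rationale: str   # why the model chose this step

    # A plan a LAM-backed agent might emit for the campaign-launch request
    plan = [
        Action("cms.publish", {"campaign": "ACME launch"}, "post to web channel"),
        Action("email.send", {"list": "customers"}, "standard messaging framework"),
    ]
    for step in plan:
        print(step.tool, step.arguments, "--", step.rationale)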


Five mistakes companies make when retiring IT equipment: And how to avoid them

Outdated or unused IT assets often sit idle in storage closets, server rooms, or even employee homes for extended periods. This delay in decommissioning can create a host of problems. Unsecured, unused devices are prime targets for data breaches, theft, or accidental loss. Additionally, without a timely and consistent retirement process, organizations lose visibility into asset status, which can create confusion, non-compliance, or unnecessary costs. The best way to address this is by implementing in-house destruction solutions as an integrated part of the IT lifecycle. Rather than relying on external vendors or waiting until large volumes of devices pile up, organizations can equip themselves with high security data destruction machinery – such as hard drive shredders, degaussers, crushers, or disintegrators – designed to render data irretrievable on demand. This allows for immediate, on-site sanitization and physical destruction as soon as devices are decommissioned. Not only does this improve data control and reduce risk exposure, but it also simplifies chain-of-custody tracking by eliminating unnecessary handoffs. With in-house destruction capabilities, organizations can securely retire equipment at the pace their operations demand – no waiting, no outsourcing, and no compromise.


Event Sourcing Unpacked: The What, Why, and How

Event Sourcing offers significant benefits for systems that require persistent audit trails and rich debugging capabilities with event replay. It is especially effective in domains like finance, healthcare, e-commerce, and IoT, where every transaction or state change is critical and must be traceable. However, its complexity means that it isn’t ideal for every scenario. For applications that primarily engage in basic CRUD operations or demand immediate consistency, the overhead of managing an ever-growing event log, handling event schema evolution, and coping with eventual consistency can outweigh the benefits. In such cases, simpler persistence models may be more appropriate. When compared with related patterns, Event Sourcing naturally complements CQRS by decoupling read and write operations, and it enhances Domain-Driven Design by providing a historical record of domain events. Additionally, it underpins Event-Driven Architectures by facilitating loosely coupled, scalable communication. The decision to implement Event Sourcing should therefore balance its powerful capabilities against the operational and developmental complexities it introduces, ensuring it aligns with the project’s specific needs and long-term architectural goals.
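
A minimal event-sourced aggregate in Python, assuming a toy bank-account domain: current state is never stored as the source of truth, only derived by replaying the append-only event log:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Event:
        kind: str    # "deposited" or "withdrawn"
        amount: int

    class Account:
        def __init__(self):
            self.events = []  # append-only log: the single source of truth
            self.balance = 0  # derived state, rebuildable from events at any time

        def record(self, event):
            self.events.append(event)
            self._apply(event)

        def _apply(self, event):
            delta = event.amount if event.kind == "deposited" else -event.amount
            self.balance += delta

        @classmethod
        def replay(cls, events):
            account = cls()
            for event in events:
                account.record(event)
            return account

    acct = Account()
    acct.record(Event("deposited", 100))
    acct.record(Event("withdrawn", 30))
    print(Account.replay(acct.events).balance)  # 70: the history replays to state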


Using Traffic Mirroring to Debug and Test Microservices in Production-Like Environments

At its core, traffic mirroring duplicates incoming requests so that, while one copy is served by the primary service, the other is sent to an identical service running in a test or staging environment. The response from the mirrored service is never returned to the client; it exists solely to let engineers observe, compare, or process data from real-world usage. ... Real-world traffic is messy. Certain bugs only appear when a request contains a specific sequence of API calls or unexpected data patterns. By mirroring production traffic to a shadow service, developers can catch these hard-to-reproduce errors in a controlled environment. ... Mirroring production traffic allows teams to observe how a new service version handles the same load as its predecessor. This testing is particularly useful for identifying regressions in response time or resource utilization. Teams can compare metrics like CPU usage, memory consumption, and request latency between the primary and shadow services to determine whether code changes negatively affect performance. Before rolling out a new feature, developers must ensure it works correctly under production conditions. Traffic mirroring lets a new microservice version be deployed with feature flags while still serving requests from the stable version.
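
In production this duplication is usually done by a proxy or service mesh rather than hand-written code, but the core pattern fits in a few lines of Python; the endpoints are illustrative:

    import threading
    import requests  # pip install requests

    PRIMARY = "http://primary.internal:8080"  # serves the real response
    SHADOW = "http://shadow.internal:8080"    # gets a copy; its reply is discarded

    def handle(path, body):
        def mirror():
            try:
                requests.post(f"{SHADOW}{path}", data=body, timeout=2)
            except requests.RequestException:
                pass  # shadow failures must never be visible to the client

        # Fire-and-forget duplicate on a background thread
        threading.Thread(target=mirror, daemon=True).start()
        # Only the primary's response is returned to the caller
        return requests.post(f"{PRIMARY}{path}", data=body, timeout=2)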


Don’t be a victim of high cloud costs

The simplest reason for the rising expenses associated with cloud services is that major cloud service providers consistently increase their prices. Although competition among these providers helps keep prices stable to some extent, businesses now face inflation, the introduction of new premium services, and the complex nature of pricing models, which are often shrouded in mystery. All these factors complicate cost management. Meanwhile, many businesses have inefficient usage patterns. The typical approach to adoption involves migrating existing systems to the cloud without modifying or improving their functions for cloud environments. This “lift and shift” shortcut often leads to inefficient resource allocation and unnecessary expenses. ... First, before embracing cloud technology for its advantages, companies should develop a well-defined plan that outlines the rationale, objectives, and approach to using cloud services. Identify which tasks are suitable for cloud deployment and which are not, and assess whether a public, private, or hybrid cloud setup aligns with your business and budget objectives. Second, before transferring data, ensure that you optimize your tasks to improve efficiency and performance. Resist the urge to move existing systems to the cloud in their current state. ... Third, effectively managing cloud expenses relies on implementing strong governance practices.

Daily Tech Digest - June 09, 2025


Quote for the day:

"Motivation gets you going and habit gets you there." -- Zig Ziglar


Architecting Human-AI Relationships: Governance Frameworks for Emotional AI Integration

The talent retention implications prove equally compelling, particularly as organizations compete for digitally native workforce demographics who view AI collaboration as a natural extension of professional relationships. ... Perhaps most significantly, healthy human-AI collaboration frameworks unleash innovation potential that traditional technology deployment approaches consistently fail to achieve. When teams feel psychologically safe in their AI partnerships—confident that transitions will be managed thoughtfully and that their emotional investment in digital collaborators is acknowledged and supported—they demonstrate a remarkable willingness to explore advanced AI capabilities, experiment with novel applications, and push the boundaries of what artificial intelligence can accomplish within organizational contexts. ... The ultimate result is organizational resilience that extends far beyond technical robustness. Comprehensive governance approaches that address technical performance and psychological factors create AI ecosystems that adapt gracefully to technological change, maintain continuity through system transitions, and sustain collaborative effectiveness across the inevitable evolution of artificial intelligence capabilities.


CISOs reposition their roles for business leadership

“The CISOs of the present and the future need to get out of being just technologists and build their influence muscle as well as their communication muscle,” Kapil says. They need to be able to “relay the technology and cyber messaging in words and meanings where a non-technologist actually understands why we’re doing what we’re doing.” ... “CISOs who are enablers can have the greatest impact on the business because they understand the business objectives,” LeMaire explains. “I like to say we don’t do cybersecurity for cybersecurity’s sake. … Ultimately, we do cybersecurity to contribute to the goals, missions, and objectives of the greater organization. When you’re an enabler that’s what you’re doing.” ... The BISO role emerged to bridge the gap between business objectives and cybersecurity oversight that has existed in many companies, Petrik says. “By acting as a liaison between business, technology, and cybersecurity teams, the BISO ensures that security measures are aligned with business strategies and integrated effectively,” he says. Digital transformation, emerging technologies, and rapid innovation are business mandates, and security teams add value and manage risk better when they are involved before a platform is selected or implemented, he says.


Balancing Safety and Security in Software-Defined Vehicles

Features such as Bluetooth, Wi-Fi, and cellular networks improve user convenience but create multiple attack vectors. For example, infotainment systems, because of their connectivity, are prime targets on software-defined vehicles. The recent Nissan LEAF hack revealed exactly this vulnerability, with researchers using the vehicle’s infotainment system as an entry point to access critical vehicle controls, including the steering. Not only can attackers gain access to data and location information, they can use vulnerable infotainment systems as an on-ramp to access other critical vehicle systems, like Advanced Driver Assistance Systems (ADAS), CAN-Bus, or key engine control units. ... Real-Time Operating Systems play a key role in the functionality of software-defined vehicles, as they enable precise, time-critical operations for systems like Electronic Control Units (ECUs). ECUs are primarily programmed in C and C++ due to the need for efficiency and performance in resource-constrained environments. ... Memory-based vulnerabilities, inherent to C/C++ programming, can be exploited to enable remote code execution, potentially compromising critical safety and performance systems. This creates serious cybersecurity and reliability concerns for vehicles. As RTOS suppliers manage numerous processes, any vulnerability in their codebase can be a gateway for attackers, increasing the likelihood of malicious exploits across the interconnected vehicle ecosystem.


The agile blueprint for simplifying performance management: Rethinking reviews for real impact

Understanding performance has a psychological side to it. Recognising this effect on performance frameworks, Rashmi suggested that imposter syndrome can be mitigated by making progress visible. “When you see your results in real time, you can’t keep criticising yourself.” The panellists encouraged managers to have personal discussions with their team members, which would help them build bonds. Rashmi highlighted this aspect, which can be leveraged through AI. “If AI says that there has been no potential feedback for the employee in the last month, then let the technology help the manager remind.” She also added, “Scaling up makes the quarterly reviews an exercise; hence, spontaneous quarterly check-ins are important.” Rashmi also advocated for weekly, human-centred check-ins, features integrated in HRStop, meant not just to track project status but to understand employees as people. “Treat it like a family discussion,” Rashmi recommended. “A touch of personal conversation builds deeper rapport.” Another aspect that came up in the discussion was coaching. Vimal emphasised that coaching must happen at all levels—from CXOs to interns. “It’s this cultural consistency that builds trust, retention, and performance,” he added.


Is this the perfect use case for police facial recognition?

First, as the judge noted, “fortunately the technology available prevented physical contact going further”. Availability is important here, not just in terms of the equipment being accessible; it has a specific legal element too. Where the technological means to prevent inhumane or degrading treatment are reasonably available to the police, the law in England and Wales may not just permit the use of remote biometric technology, it may even require it. I’m unaware of anyone relying on this human rights argument yet and we won’t know if these conditions would have met that threshold. ... Second, the person was on the watchlist because he was subject to a court order. This was not the public under ‘general surveillance’: a court had been satisfied on the evidence presented that an order was necessary to protect the public from sexual harm from him. He breached that order by insinuating himself into the life of a 6-year-old girl and was found alone with her. He was accurately matched with the watchlist image. The third feature is that the technology did its job. It would be easy to celebrate this as a case of ‘thank goodness nothing happened’ but that would underestimate its significance and miss the legal areas where FRT will be challenged. 


IT leaders’ top 5 barriers to AI success

Data quality issues are a real concern and an actual barrier to AI adoption, but the problem is much larger than the traditional and typical discussion about data quality in transactional or analytical environments, says John Thompson, senior vice president and principal at AI consulting firm The Hackett Group. “With gen AI, literally 100% of an organization’s data, documents, videos, policies, procedures, and more are available for active use,” Thompson says. This is a much larger issue than data quality in systems such as enterprise resource planning (ERP) or customer relationship management (CRM), he says. ... Organizations need the infrastructure in place to educate and train its employees to understand the capabilities and limitations of AI, Ally’s Muthukrishnan says. “Without the right training, adoption and utilization will not achieve the outcome you’re hoping for,” he adds. “While I believe AI is one of the largest tech transformations of our lifetime, integrating it into day-to-day processes is a huge change management undertaking.” ... “The skills gap is only going to grow,” Hackett Group’s Thompson says. “Now is the time to start. You can start with your team. Have them work on test cases. Have them work on personal projects. Have them work on passion projects. [Taking] time for everyone to take a class is just elongating the process to close the skills gap. ...”


Google’s Cloud IDP Could Replace Platform Engineering

Much of the work behind the Google Cloud IDP comes from Anna Berenberg, an engineering fellow with Google Cloud who has been with the company for 19 years. “She is the originator of a lot of these concepts overall … many of these ideas which I did not really understand the impact of until I saw it manifest itself,” said Seroter. “She had this vision that I did not even buy into three years ago. She saw a little further ahead from there, and she has built and published things. It is impressive to have such interesting engineering thought leadership, not just applied to how Google does platforms, but now turning that into how we can change … infrastructure to make it simpler. She is a pioneer of that.” In an interview with The New Stack, Berenberg said that her ideas on the IDP came to her when she looked at how this could all work using Google’s vast compute and services resources to reimagine how platform engineering could be improved. “The way it works is you have a cloud platform, and then on top of it is this thick layer of platform engineering stuff, right?” said Berenberg. “So, platform engineering teams are building a layer on top of infrastructure cloud to do an abstraction and workflows and whatever they need” to improve processes for developers. “It shrinks down because everything shifts down to the platform and now we are providing platform engineering.”


FakeCaptcha Infrastructure HelloTDS Infects Millions of Devices With Malware

The campaign’s cunning blend of social engineering and technical subterfuge has enabled threat actors to compromise systems across a vast array of regions, targeting unsuspecting users as they consume streaming media, download shared files, or even browse legitimate-appearing websites. Gen Digital researchers first identified HelloTDS as an intricate Traffic Direction System (TDS) — a malicious decision engine that leverages device and network fingerprinting to select which visitors receive harmful payloads, ranging from infostealers like LummaC2 to fraudulent browser updates and tech support scams. Entry points for the menace include compromised or attacker-operated file-sharing portals, streaming sites, pornographic platforms, and even malvertising embedded in seemingly innocuous ad spots. The system’s filtering and redirection logic allows it to avoid obvious honeytraps such as virtual machines, VPNs, or known analyst environments, significantly complicating detection and takedown efforts. The scale of the campaign is staggering. Gen’s telemetry reported over 4.3 million attempted infections within just two months, with the highest impact in the United States, Brazil, India, Western Europe, and, proportionally, several Balkan and African countries.


Cutting-Edge ClickFix Tactics Snowball, Pushing Phishing Forward

ClickFix first came to light as an attack method last year when Proofpoint researchers observed compromised websites serving overlay error messages to visitors. The message claimed that a faulty browser update was causing problems, and asked the victim to open "Windows PowerShell (Admin)" (which will open a User Account Control (UAC) prompt) and then right-click to paste code that supposedly "fixed" the problem — hence the attack name. Instead of a fix, though, users were unwittingly installing malware — in that case, it was the Vidar stealer. ... "The goals of ClickFix campaigns vary depending on the attacker," says Nathaniel Jones, vice president of security and AI strategy at Darktrace. "The aim might be to infect as many systems as possible to build out a network of proxies to use later. Some attackers are trying to exfiltrate credentials or domain controller files and then sell to other threat actors for initial access. So there isn't one type of victim or one objective — the tactic is flexible and being used in different ways." ... The approach, and ClickFix in general, represents a significant innovation in the world of phishing, according to Jones, because unlike an email asking someone to click on a typosquatted link that can be easily checked, the entire attack takes place inside the browser.


Like humans, AI is forcing institutions to rethink their purpose

The institutions in place now were not designed for this moment. Most were forged in the Industrial Age and refined during the Digital Revolution. Their operating models reflect the logic of earlier cognitive regimes: stable processes, centralized expertise and the tacit assumption that human intelligence would remain preeminent. ... But the assumptions beneath these structures are under strain. AI systems now perform tasks once reserved for knowledge workers, including summarizing documents, analyzing data, writing legal briefs, performing research, creating lesson plans and teaching, coding applications and building and executing marketing campaigns. Beyond automation, a deeper disruption is underway: The people running these institutions are expected to defend their continued relevance in a world where knowledge itself is no longer as highly valued or even a uniquely human asset. ... This does not mean institutional collapse is inevitable. But it does suggest that the current paradigm of stable, slow-moving and authority-based structures may not endure. At a minimum, institutions are under intense pressure to change. If institutions are to remain relevant and play a vital role in the age of AI, they must become more adaptive, transparent and attuned to the values that cannot readily be encoded in algorithms: human dignity, ethical deliberation and long-term stewardship.

Daily Tech Digest - June 07, 2025


Quote for the day:

"Anger doesn't solve anything; it builds nothing but it can destroy everything" -- Lawrence Douglas Wilder


Software Testing Is at a Crossroads

Organizations are discovering that achieving meaningful quality improvements requires more than technological adoption; it demands fundamental changes in processes, skills, and organizational culture that many teams are still developing. ... There are numerous bottlenecks preventing teams from achieving their automation targets. "The test automation gap, as we call it, usually stems from three key challenges: limited skills, tooling constraints, and resource shortages," Crisóstomo said. He noted that smaller teams often struggle because they don't have enough experienced or specialized staff to take on complex automation work. At the same time, even well-resourced teams run into limitations with their current tools, many of which can't handle the increasing complexity of modern testing needs. "Across the board, nearly every team we surveyed cited bandwidth as a major issue," Crisóstomo said. "It's a classic catch-22: You need time to build automation so you can save time later, but competing priorities make it hard to invest that time upfront." ... "Meanwhile, AI-enhanced quality, particularly in testing and security, hasn't seen the same level of maturity or resources," he said. "That's starting to change, but many teams still see AI as more of a novelty than a business-critical tool for QA."


Empower Users and Protect Against GenAI Data Loss

When early software-as-a-service (SaaS) tools emerged, IT teams scrambled to control the unsanctioned use of cloud-based file storage applications. The answer wasn't to ban file sharing, though; rather, it was to offer a secure, seamless, single-sign-on alternative that matched employee expectations for convenience, usability, and speed. This time around, however, the stakes are even higher. With SaaS, data leakage often means a misplaced file. With AI, it could mean inadvertently training a public model on your intellectual property with no way to delete or retrieve that data once it's gone. ... Blocking traffic without visibility is like building a fence without knowing where the property lines are. We've solved problems like these before. Zscaler's position in the traffic flow gives us an unparalleled vantage point. We see what apps are being accessed, by whom and how often. This real-time visibility is essential for assessing risk, shaping policy and enabling smarter, safer AI adoption. Next, we've evolved how we deal with policy. Many providers simply give the black-and-white options of "allow" or "block." The better approach is context-aware, policy-driven governance aligned with zero-trust principles that assume no implicit trust and demand continuous, contextual evaluation.
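
To make the contrast concrete, here is a minimal sketch of what context-aware policy evaluation might look like, as opposed to a flat allow/block list. It is illustrative only: the fields, risk labels, and actions are hypothetical, not Zscaler's actual policy model.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_role: str         # e.g. "engineer", "contractor"
    app_risk: str          # "low" / "high", from an app risk catalog (hypothetical)
    data_sensitivity: str  # label on the content in transit, e.g. "public", "confidential"
    device_managed: bool   # is the request coming from a managed endpoint?

def evaluate(ctx: AccessContext) -> str:
    """Return an action richer than allow/block: allow, redact, isolate, or block."""
    if not ctx.device_managed:
        return "block"                       # zero trust: no implicit trust for unmanaged devices
    if ctx.data_sensitivity == "confidential":
        return "redact" if ctx.app_risk == "low" else "block"
    if ctx.app_risk == "high":
        return "isolate"                     # e.g. route through browser isolation instead of blocking
    return "allow"

print(evaluate(AccessContext("engineer", "high", "public", True)))   # -> isolate
```

The point of the richer return values is that policy can degrade gracefully, redacting or isolating a session where a binary system would have to choose between blocking productivity and accepting risk.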


Too many cloud security tools harming incident response times - survey

According to the data, security teams are inundated with an average of 4,080 alerts each month regarding potential cloud-based incidents. In stark contrast, respondents reported experiencing just 7 actual security incidents per year. This enormous volume of alerts - compared to the small number of real threats - creates what ARMO describes as a very low signal-to-noise ratio. The survey found that security professionals typically need to sift through approximately 7,000 alerts to find a single active threat. Excessive "tool sprawl" has been cited as a primary factor: 63% of organisations surveyed reported using more than five cloud runtime security tools, yet only 13% were able to successfully correlate alerts across these systems. ... "Over the past few years we've seen rapid growth in the adoption of cloud runtime security tools to detect and prevent active cloud attacks and yet, there's a staggering disparity between alerts and actual security incidents. Without the critical context about asset sensitivity and exploitability needed to make sense of what is happening at runtime, as well as friction between SOC and Cloud Security, teams experience major delays in incident detection and response that negatively impacts performance metrics."
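
The survey's headline numbers are internally consistent; a quick back-of-the-envelope check:

```python
alerts_per_month = 4080
incidents_per_year = 7

alerts_per_year = alerts_per_month * 12                    # 48,960 alerts annually
alerts_per_incident = alerts_per_year / incidents_per_year
print(round(alerts_per_incident))                          # ~6,994 -- roughly 7,000 alerts per real threat
```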


Giving People the Chance to Innovate Is Critical — ADP CDO

Recognizing that not all innovations start with a fully developed use case, Venjara shares how the team created a controlled sandbox environment that allows internal teams to experiment securely without the risk of exposing sensitive data. This sandbox setup, developed in collaboration with security, legal, and privacy teams, provides: a controlled environment for early experimentation; technical safeguards to protect data; and a pathway from ideation to formal review and production. ... Another critical pillar in Venjara’s governance strategy is infrastructure. He highlights the development of an AI gateway that centralizes access to approved models and enables comprehensive monitoring. Through it, the team can monitor the health and usage of AI models, track input and output data, and govern use cases effectively at scale. Reflecting on internal innovation and culture-building, Venjara shares that it all starts with people and empowering them to explore, learn, and create. A foundational part of his approach is creating space for employees to take initiative, experiment, and bring new ideas to life. This culture of experimentation is paired with a clear articulation of what success looks like and how individuals can align with the broader mission.
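
The gateway pattern Venjara describes is easy to picture: a single choke point through which every model call passes, so approval, logging, and usage tracking happen in one place. The sketch below is a hypothetical illustration of that pattern, not ADP's implementation; the allowlist, logger names, and dispatch stub are all assumptions.

```python
import logging
from datetime import datetime, timezone

APPROVED_MODELS = {"model-a", "model-b"}        # hypothetical allowlist of vetted models
log = logging.getLogger("ai_gateway")

def _dispatch(model: str, prompt: str) -> str:
    """Placeholder for the real provider SDK call."""
    return "stubbed response"

def call_model(use_case: str, model: str, prompt: str) -> str:
    """Single choke point: every AI call is validated and logged before dispatch."""
    if model not in APPROVED_MODELS:
        raise PermissionError(f"model {model!r} is not on the approved list")
    log.info("ai_call ts=%s use_case=%s model=%s prompt_chars=%d",
             datetime.now(timezone.utc).isoformat(), use_case, model, len(prompt))
    response = _dispatch(model, prompt)
    log.info("ai_response use_case=%s response_chars=%d", use_case, len(response))
    return response
```

Because every call funnels through `call_model`, adding a new control (rate limits, PII scrubbing, per-use-case quotas) means changing one function rather than every team's integration.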


Fortify Your Data Defense: Balancing Data Accessibility and Privacy

Companies need our data, and they usually place it into databases or datasets they can later reference. This makes privacy tricky. Twenty years ago, the common rationale was that removing direct identifiers such as names or street addresses from a dataset made that dataset anonymous. Unsurprisingly, we’ve since learned there is nothing anonymous about it. Data anonymization techniques like tokenization and pseudonymization, however, can minimize data exposure while still enabling these companies to perform valuable analytics such as data matching. By ensuring the data is never seen in the clear by another human while the system associates it with a placeholder, these techniques offer an extra layer of protection against threat actors even if they manage to exfiltrate the data. No one system or solution is perfect, but it’s important we continuously modernize our approach. Emerging technologies like homomorphic encryption, which allows mathematical functions to be performed on encrypted data, show promise for the future. Synthetic data, which generates fictional individuals with the same characteristics as real people, is another exciting development. Some companies are adding Chief Privacy Officers to their ranks, and whole countries are building better frameworks.
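
A minimal sketch of keyed pseudonymization, the kind of placeholder scheme described above, might look like this. The key handling is simplified for illustration; in practice the key would live in an HSM or secret manager, and a real deployment would also weigh salting and token rotation.

```python
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-store-in-a-vault"   # assumption: key is managed outside the code

def pseudonymize(value: str) -> str:
    """Deterministic keyed token: the same input always yields the same token,
    so datasets can still be joined without exposing the raw identifier."""
    return hmac.new(SECRET_KEY, value.strip().lower().encode(), hashlib.sha256).hexdigest()

# Two datasets can be matched on the token; the email never appears in the analytics store.
a = pseudonymize("jane.doe@example.com")
b = pseudonymize("Jane.Doe@example.com ")
assert a == b
```

Deterministic tokens are what make data matching possible, but that same property means the key must be guarded as carefully as the data itself, since anyone holding it can regenerate tokens for known identifiers.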


Unleashing Powerful Cloud-Native Security Techniques

By leveraging NHI management, organizations can take a significant stride towards ensuring the safety of their cloud data and applications. This approach creates a robust security shield, defending against potential breaches and data leaks. By evolving their cyber strategies to include these techniques, companies can remain secure and compliant in a landscape where cyber threats are increasingly sophisticated and relentless. To unlock the full potential of NHIs, it’s vital to work with a partner who understands their dynamics deeply. This partner should offer a solution that caters to the entire lifecycle of NHIs, not just one aspect. For a truly secure cloud environment, consider NHI management a fundamental component of your cloud-native security strategy. By embracing this paradigm shift, organizations can fortify themselves against the growing wave of cyber threats, ensuring a safer, more secure cloud journey. ... With a holistic, data-driven approach to NHI management, organizations can ensure that they are well-equipped to handle ever-evolving cyber threats. By establishing and maintaining a secure cloud, they are not only safeguarding their digital assets but also setting the stage for sustainable growth in digital transformation.


Global Digital Policy Roundup: May 2025

The roundup serves as a guide for navigating global digital policy based on the work of the Digital Policy Alert. To ensure trust, every finding links to the Digital Policy Alert entry with the official government source. The full Digital Policy Alert dataset is available for you to access, filter, and download. To stay updated, Digital Policy Alert also offers a customizable notification service that provides free updates on your areas of interest. Digital Policy Alert’s tools further allow you to navigate, compare, and chat with the legal text of AI rules across the globe. ... Content moderation, including the European Commission's DSA enforcement against adult content platforms, Australia's industry codes against age-inappropriate content, China's national network identity authentication measures, and Turkey's bill to repeal the internet regulation law. AI regulation, including the European Commission's AI Act implementation guidelines, Germany's court ruling on Meta's AI training practices, and China's deep synthesis algorithm registrations. Competition policy, including the European Commission's consultation on Microsoft Teams bundling, South Korea's enforcement actions against Meta and intermediary platform operators, China's private economy promotion law, and Brazil's digital markets regulation bill. 


The Greener Code: How real-time data is powering sustainable tech in India

As engineering leaders, we build systems that scale. But we must also ask: are they scaling sustainably? India’s data centres already consume around 2% of the country’s electricity, a number that’s only growing. If we don’t rethink our infrastructure, we risk trading digital progress for environmental cost. That’s where real-time data pipelines come in: they reduce the need for batch jobs, temporary file storage, and unnecessary duplication of compute resources. This translates to less wasted computing power, lower carbon emissions, and a greener digital footprint. But it’s not just about saving energy. It’s about designing systems that are smart from the start, architecting not just for performance, but for the planet. ... India is uniquely positioned: a digital-first economy with deep tech talent, rising energy needs, and a growing commitment to sustainability. If we get it right, engineering systems that are both scalable and sustainable, we don’t just solve for India, we lead the world. From Digital India to Smart Cities to Make in India, the government is pushing for innovation. But innovation without sustainability is a short-term gain. What we need is “Sustainable Innovation” — and data streaming can, and in fact will, be a silent hero in that journey.
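
The compute-saving argument comes down to touching each record once. As a hedged illustration (no particular streaming platform assumed; in production this handler would typically sit behind a Kafka or similar consumer loop), compare maintaining a running aggregate per event with re-reading a full day's data in a nightly batch:

```python
from collections import defaultdict

# Running totals are updated in place as events arrive -- no nightly batch job
# re-reading the whole day's data from temporary files.
energy_by_region: defaultdict[str, float] = defaultdict(float)

def on_event(event: dict) -> None:
    """Handle one record from the stream; a consumer loop would call this per message."""
    energy_by_region[event["region"]] += event["kwh"]

for event in [{"region": "south", "kwh": 1.2}, {"region": "north", "kwh": 0.7}]:
    on_event(event)   # each record is processed exactly once

print(dict(energy_by_region))
```

Each record updates the aggregate once and is then discarded, so there is no intermediate file to store, re-read, or reconcile later.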


Measuring What Matters: The True Impact of Platform Teams

By consolidating tools and infrastructure, companies reduce costs and enhance productivity through automation, leading to faster time-to-market for new products. Improved reliability and compliance reduce potential revenue losses resulting from outages or regulatory violations, while also supporting business growth. To truly gauge the impact of platform teams, it’s essential to look beyond traditional metrics and consider the broader changes they bring to an organization. ... As my professional coaching training taught me, truly listening — not just hearing — is crucial. It’s about understanding everyone’s perspective and connecting intuitively to the real message, including what’s not being said. This level of listening, often referred to as “Level 3” or intuitive listening, involves paying attention to all sensory components: the speaker’s tone of voice, energy level, feelings, and even the silences between words. By practicing this deep, empathetic listening, leaders can create a profound connection with their team members, uncovering motivations, concerns, and ideas that might otherwise remain hidden. This approach not only enhances team happiness but also unlocks the full potential of the platform team, leading to more innovative solutions and stronger collaboration.


The New Fraud Frontier: Why Businesses Must Rethink Identity Verification

Now that fraudsters can access AI tools, the fraud game has entirely changed. Bad actors can generate synthetic identities, manipulate biometric data and even create deepfake videos to pass KYC processes. Additionally, AI enables fraudsters to test security systems at scale, quickly iterating and adapting methods based on system responses. In light of these new threats, businesses need dynamic solutions that can learn and evolve in real time. Ironically, the same technology powering sophisticated fraud can be our most potent defence. Using AI to enhance both pre-KYC and KYC processes delivers the capability to identify complex fraud patterns, adapting faster than human-driven systems ever could. ... The battle against AI-empowered fraud isn’t just about preventing financial losses. It’s about maintaining customer trust in an increasingly sceptical digital marketplace. Every fraudulent transaction erodes confidence, and that’s a cost too high to bear in today’s competitive landscape. Businesses that take a multi-layered approach, integrating pre-KYC and KYC processes in a unified fraud prevention strategy, can stay one step ahead of fraudsters. The key is ensuring that fraud prevention tools – data-rich, AI-driven and flexible – are as adaptive as the threats they are designed to stop.

Daily Tech Digest - June 06, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley


The intersection of identity security and data privacy laws for a safe digital space

The integration of identity security with data privacy has become essential for corporations, governing bodies, and policymakers. Compliance requirements are set by frameworks such as the Digital Personal Data Protection (DPDP) Bill and the CERT-In directives – but encryption and access control alone are no longer enough. AI-driven identity security tools flag risky access combinations before they become gateways to fraud, monitor behaviour anomalies in real time, and offer deep, contextual visibility into both human and machine identities. Combined, these capabilities deliver resilient, trust-building security that goes beyond box-ticking compliance: proactive protection that self-adjusts to the challenges encountered today. By aligning intelligent identity security tools with privacy regulations, organisations gain more than just protection—they earn credibility. ... The DPDP Act tracks closely to global benchmarks such as the GDPR and data protection regulations in Singapore and Australia, which mandate that organisations implement appropriate security measures to protect personal data and step up their response to data breaches. They also assert that organisations that embrace and prioritise data privacy and identity security stand to gain reduced risk and enhanced trust from customers, partners and regulators.


Who needs real things when everything can be a hologram?

Meta founder and CEO Mark Zuckerberg said recently on Theo Von’s “This Past Weekend” podcast that everything is shifting to holograms. A hologram is a three-dimensional image that represents an object in a way that allows it to be viewed from different angles, creating the illusion of depth. Zuckerberg predicts that most of our physical objects will become obsolete and replaced by holographic versions seen through augmented reality (AR) glasses. The conversation floated the idea that books, board games, ping-pong tables, and even smartphones could all be virtualized, replacing the physical, real-world versions. Zuckerberg also expects that somewhere between one and two billion people could replace their smartphones with AR glasses within four years. One potential problem with that prediction: the public has to want to replace physical objects with holographic versions. So far, Apple’s experience with Apple Vision Pro does not imply that the public is clamoring for holographic replacements. ... I have no doubt that holograms will increasingly become ubiquitous in our lives. But I doubt that a majority will ever prefer a holographic virtual book over a physical book or even a physical e-book reader. The same goes for other objects in our lives. I also suspect both Zuckerberg’s motives and his predictive powers.


How AI Is Rewriting the CIO’s Workforce Strategy

With the mystique fading, enterprises are replacing large prompt-engineering teams with AI platform engineers, MLOps architects, and cross-trained analysts. A prompt engineer in 2023 often becomes a context architect by 2025; data scientists evolve into AI integrators; business-intelligence analysts transition into AI interaction designers; and DevOps engineers step up as MLOps platform leads. The cultural shift matters as much as the job titles: AI work is no longer about one-off magic; it is about building reliable infrastructure. CIOs generally face three choices. One is to invest in systems that make prompts reproducible and maintainable, such as RAG pipelines or proprietary context platforms. Another is to cut excessive spending on niche roles now being absorbed by automation. The third is to reskill internal talent, transforming today’s prompt writers into tomorrow’s systems thinkers who understand context flows, memory management, and AI security. A skilled prompt engineer today can become an exceptional context architect tomorrow, provided the organization invests in training. ... Prompt engineering isn’t dead, but its peak as a standalone role may already be behind us. The smartest organizations are shifting to systems that abstract prompt complexity and scale their AI capability without becoming dependent on a single human’s creativity.
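
What "reproducible and maintainable prompts" means in practice is simply treating them as versioned artifacts rather than ad hoc strings. A hypothetical sketch follows; the template registry and names are illustrative, not any specific context platform:

```python
PROMPT_TEMPLATES = {
    # Versioned templates: changes are reviewed and diffed like code, not retyped per user.
    ("summarize_ticket", "v2"): (
        "You are a support analyst. Using only the context below, summarize the ticket.\n"
        "Context:\n{context}\n\nTicket:\n{ticket}"
    ),
}

def build_prompt(name: str, version: str, **fields: str) -> str:
    """Reproducible prompt assembly: pinned template + injected context, no hand-written strings."""
    return PROMPT_TEMPLATES[(name, version)].format(**fields)

prompt = build_prompt(
    "summarize_ticket", "v2",
    context="(top-k passages from the retrieval layer would be injected here)",
    ticket="Customer reports login failures since the last deploy.",
)
```

Once prompts are assembled this way, they can be code-reviewed, versioned, and regression-tested, which is exactly the infrastructure mindset the article describes.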


Biometric privacy on trial: The constitutional stakes in United States v. Brown

The divergence between the two federal circuit courts has created a classic “circuit split,” a situation that almost inevitably calls for resolution by the U.S. Supreme Court. Legal scholars point out that this split could not be more consequential, as it directly affects how courts across the country treat compelled access to devices that contain vast troves of personal, private, and potentially incriminating information. What’s at stake in the Brown decision goes far beyond criminal law. In the digital age, smartphones are extensions of the self, containing everything from personal messages and photos to financial records, location data, and even health information. Unlocking one’s device may reveal more than a house search could have in the 18th century, and is the very kind of search the Bill of Rights was designed to restrict. If the D.C. Circuit’s reasoning prevails, biometric security methods like Apple’s Face ID, Samsung’s iris scans, and various fingerprint unlock systems could receive constitutional protection when used to lock private data. That, in turn, could significantly limit law enforcement’s ability to compel access to devices without a warrant or consent. Moreover, such a ruling would align biometric authentication with established protections for passcodes.


GenAI controls and ZTNA architecture set SSE vendors apart

“[SSE] provides a range of security capabilities, including adaptive access based on identity and context, malware protection, data security, and threat prevention, as well as the associated analytics and visibility,” Gartner writes. “It enables more direct connectivity for hybrid users by reducing latency and providing the potential for improved user experience.” Must-haves include advanced data protection capabilities – such as unified data leak protection (DLP), content-aware encryption, and label-based controls – that enable enterprises to enforce consistent data security policies across web, cloud, and private applications. Securing Software-as-a-Service (SaaS) applications is another important area, according to Gartner. SaaS security posture management (SSPM) and deep API integrations provide real-time visibility into SaaS app usage, configurations, and user behaviors, which Gartner says can help security teams remediate risks before they become incidents. Gartner defines SSPM as a category of tools that continuously assess and manage the security posture of SaaS apps. ... Other necessary capabilities for a complete SSE solution include digital experience monitoring (DEM) and AI-driven automation and coaching, according to Gartner. 


5 Risk Management Lessons OT Cybersecurity Leaders Can’t Afford to Ignore

Weak or shared passwords, outdated software, and misconfigured networks are consistently leveraged by malicious actors. Seemingly minor oversights can create significant gaps in an organization’s defenses, allowing attackers to gain unauthorized access and cause havoc. When the basics break down, particularly in converged IT/OT environments where attackers only need one foothold, consequences escalate fast. ... One common misconception in critical infrastructure is that OT systems are safe unless directly targeted. However, the reality is far more nuanced. Many incidents impacting OT environments originate as seemingly innocuous IT intrusions. Attackers enter through an overlooked endpoint or compromised credential in the enterprise network and then move laterally into the OT environment through weak segmentation or misconfigured gateways. This pattern has repeatedly emerged in the pipeline sector. ... Time and again, post-mortems reveal the same pattern: organizations lacking tested procedures, clear roles, or real-world readiness. A proactive posture begins with rigorous risk assessments, threat modeling, and vulnerability scanning—not once, but as a cycle that evolves with the threat landscape. An incident response plan should outline clear procedures for detecting, containing, and recovering from cyber incidents.


You Can Build Authentication In-House, But Should You?

Auth isn’t a static feature. It evolves — layer by layer — as your product grows, your user base diversifies, and enterprise customers introduce new requirements. Over time, the simple system you started with is forced to stretch well beyond its original architecture. Every engineering team that builds auth internally will encounter key inflection points — moments when the complexity, security risk, and maintenance burden begin to outweigh the benefits of control. ... Once you’re selling into larger businesses, SSO becomes a hard requirement. Enterprise customers want to integrate with their own identity providers like Okta, Microsoft Entra, or Google Workspace using protocols like SAML or OIDC. Implementing these protocols is non-trivial, especially when each customer has their own quirks and expectations around onboarding, metadata exchange, and user mapping. ... Once SSO is in place, the next enterprise requirement is often SCIM (System for Cross-domain Identity Management). SCIM, also known as Directory Sync, enables organizations to automatically provision and deprovision user accounts through their identity provider. Supporting it properly means syncing state between your system and theirs and handling partial failures gracefully; a sketch of the endpoint shape appears below. ... The newest wave of complexity in modern authentication comes from AI agents and LLM-powered applications.
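
For a feel of what supporting SCIM entails, here is a deliberately minimal sketch of the two core /Users operations using Flask. It is a toy under stated assumptions: an in-memory store, the userName reused as the resource id, and no auth, filtering, or PATCH support, all of which a real implementation would need.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
users: dict[str, dict] = {}   # stand-in for your real user store

@app.post("/scim/v2/Users")
def create_user():
    """The IdP pushes a new user; SCIM expects the created resource echoed back with an id."""
    payload = request.get_json()
    user_id = payload["userName"]            # real systems generate an opaque id instead
    users[user_id] = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "id": user_id,
        "userName": payload["userName"],
        "active": payload.get("active", True),
    }
    return jsonify(users[user_id]), 201

@app.delete("/scim/v2/Users/<user_id>")
def deprovision_user(user_id: str):
    # Deprovisioning is usually a soft-delete: keep the record, revoke access.
    if user_id in users:
        users[user_id]["active"] = False
    return "", 204
```

Even this toy shows the shape of the problem: the identity provider, not your app, is the source of truth, and deprovisioning is a soft-delete so audit history survives.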


Developer Joy: A Better Way to Boost Developer Productivity

Play isn’t just fluff; it’s a tool. Whether it’s trying something new in a codebase, hacking together a prototype, or taking a break to let the brain wander, joy helps developers learn faster, solve problems more creatively, and stay engaged. ... Aim to reduce friction and toil, the little frustrations that break momentum and make work feel like a slog. Long build and test times are common culprits. At Gradle, the team is particularly interested in improving the reliability of tests by giving developers the right tools to understand intermittent failures. ... When we’re stuck on a problem, we’ll often bang our head against the code until midnight without getting anywhere. Then in the morning, suddenly it takes five minutes for the solution to click into place. A good night’s sleep is the best debugging tool, but why? What happens? This is the default mode network at work. The default mode network is a set of connections in your brain that activates when you’re truly idle. This network is responsible for many vital brain functions, including creativity and complex problem-solving. Instead of filling every spare moment with busywork, take proper breaks. Go for a walk. Knit. Garden. “Dead time” in these examples isn’t slacking; it’s deep problem-solving in disguise.


Get out of the audit committee: Why CISOs need dedicated board time

The problem is that the limited time allocated to CISOs in audit committee meetings is not sufficient for comprehensive cybersecurity discussions. Increasingly, more time is needed for conversations around managing the complex risk landscape. In previous CISO roles, Gerchow had a similar cadence, with quarterly time with the security committee and quarterly time with the board. He also had closed-door sessions with only board members. “Anyone who’s an employee of the company, even the CEO, has to drop off the call or leave the room, so it’s just you with the board or the director of the board,” he tells CSO. He found these particularly important for enabling frank conversations, which might centre on budget, roadblocks to new security implementations or whether he and his team are getting enough time to implement security programs. “They may ask: ‘How are things really going? Are you getting the support you need?’ It’s a transparent conversation without the other executives of the company being present.”


Mind the Gap: AI-Driven Data and Analytics Disruption

The Holy Grail of metadata collection is extracting meaning from program code: data structures and entities, data elements, functionality, and lineage. For me, this is one of the most potentially interesting and impactful applications of AI to information management. I’ve tried it, and it works. I loaded an old C program that had no comments but reasonably descriptive variable names into ChatGPT, and it figured out what the program was doing, the purpose of each function, and gave a description for each variable. Eventually this capability will be used like other code analysis tools in the CI/CD pipeline: run one set of tools to look for code defects, run another to extract and curate metadata. Someone will still have to review the results, but this gets us a long way there. ... Large language models can be applied in analytics in a couple of different ways. The first is to generate the answer solely from the LLM: start by ingesting your corporate information into the LLM as context, then ask it a question directly and it will generate an answer. Hopefully the correct answer. But would you trust it? Associative memories are not the most reliable for database-style lookups. Imagine ingesting all of the company’s transactions and then asking for the total net revenue for a particular customer. Why would you do that? Just use a database.
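
The code-to-metadata step the author describes slots naturally into a pipeline. A hedged sketch using the OpenAI Python client as one example provider (the model name and prompt wording are assumptions; any chat-completions-style API would work the same way):

```python
from openai import OpenAI   # assumption: any chat-completions-style client would do

client = OpenAI()           # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = """Analyze the following C source file. Return, in plain text:
1. The purpose of each function.
2. A one-line description of each variable.
3. Data structures/entities and the lineage between them.

Source:
{source}
"""

def extract_metadata(source_code: str) -> str:
    """One CI/CD step among many: lint for defects elsewhere, extract metadata here."""
    response = client.chat.completions.create(
        model="gpt-4o",      # hypothetical model choice
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(source=source_code)}],
    )
    return response.choices[0].message.content   # candidate metadata -- still needs human review
```

As the author notes, the output is a set of curated metadata candidates, not ground truth: a human reviewer still signs off before anything lands in the catalog.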