
Daily Tech Digest - July 07, 2025


Quote for the day:

"To live a creative life, we must lose our fear of being wrong." -- Anonymous


Forget the hype — real AI agents solve bounded problems, not open-world fantasies

When people imagine AI agents today, they tend to picture a chat window. A user types a prompt, and the agent responds with a helpful answer (maybe even triggers a tool or two). That’s fine for demos and consumer apps, but it’s not how enterprise AI will actually work in practice. In the enterprise, most useful agents aren’t user-initiated; they’re autonomous. They don’t sit idly waiting for a human to prompt them. They’re long-running processes that react to data as it flows through the business. They make decisions, call services and produce outputs, continuously and asynchronously, without needing to be told when to start. ... The problems worth solving in most businesses are closed-world: problems with known inputs, clear rules and measurable outcomes. But the models we’re using, especially LLMs, are inherently non-deterministic. They’re probabilistic by design. The same input can yield different outputs depending on context, sampling or temperature. That’s fine when you’re answering a prompt. But when you’re running a business process? That unpredictability is a liability. ... Closed-world problems don’t require magic. They need solid engineering. And that means combining the flexibility of LLMs with the structure of good software engineering.
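The pattern described above, a long-running process that reacts to events with known inputs and clear rules, can be sketched in a few lines. This is a minimal illustration, not the article's implementation; the event names, fields, and approval limit are assumptions chosen for the example, and the deterministic rules stand in for the guardrails you would wrap around any probabilistic model call.

```python
import queue

def handle_invoice(event: dict) -> dict:
    """A closed-world decision: known inputs, clear rules, measurable outcome."""
    if event["amount"] <= 0:
        return {"action": "reject", "reason": "non-positive amount"}
    if event["amount"] > event.get("approval_limit", 10_000):
        return {"action": "escalate", "reason": "over approval limit"}
    return {"action": "approve", "reason": "within limit"}

def run_agent(events: "queue.Queue", results: list) -> None:
    """Long-running consumer: drains the event stream and acts on each item,
    with no human prompt needed to tell it when to start."""
    while not events.empty():
        results.append(handle_invoice(events.get()))
```

In a real deployment the queue would be a message broker or change-data-capture stream, and the handler might consult an LLM, but the surrounding control flow stays deterministic.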


Has CISO become the least desirable role in business?

Being a CISO today is not for the faint of heart. To paraphrase Rodney Dangerfield, CISOs (some, anyway) get no respect. You’d think in a job where perpetual stress over the threat of a cyberattack is the norm, there would be empathy for security leaders. Instead, they face the growing challenge of trying to elicit support across departments and managing security threats, according to a recent report from WatchGuard. ... It’s no secret CISOs are under tremendous pressure. “They’ve got the regulatory scrutiny, they’ve got public visibility,” along with the increasing complexity of threats, and “AI is just adding to that fire, and the mismatch between the accountability and the authority,” says Myers, who wrote “The CISO Dilemma,” which explores CISO turnover rates and how companies can change that moving forward. Often, CISOs don’t have the mandate to influence the business systems or processes that are creating that risk, she says. “I think that’s a real disconnect and that’s what’s really driving the burnout and turnover.” ... Some CISOs are stepping back from operational roles into more advisory ones. Patricia Titus, who recently took a position as a field CISO at startup Abnormal AI after 25 years as a CISO, does not think the CISO role has become less desirable. “The regulatory scrutiny has been there all along,” she says. “It’s gotten a light shined on it.”


Enforcement Gaps in India’s DPDP Act and the case for decentralized data protection boards

The DPDP Act’s centralized enforcement model suffers from structural weaknesses that hinder effective data protection. A primary concern is the lack of independence of the Data Protection Board. Because the DPB is both appointed and funded by the Union government, with its officials classified as civil servants under central rules, it does not enjoy the institutional autonomy typically expected of a watchdog agency. ... By design, the executive branch holds decisive power over who sits on the Board and can even influence its operations through service rules. This raises a conflict of interest, given that the government itself is a major collector and processor of citizens’ data. In the words of Justice B.N. Srikrishna, having a regulator under government control is problematic “since the State will be the biggest data processor” – a regulator must be “free from the clutches of the Government” to fairly oversee both private and government actors. ... Another structural limitation is the potential for executive interference in enforcement actions, which dilutes accountability. The DPDP Act contains provisions such as Section 27(3), under which the DPB “may modify or suspend” its own orders based on a reference from the Central Government.


The Good AI: Cultivating Excellence Through Data

In today’s enterprise landscape, the quality of AI systems depends fundamentally on the data that flows through them. While most organizational focus remains on AI models and algorithms, it’s the often-under-appreciated current of data flowing through these systems that truly determines whether an AI application becomes “good AI” or problematic technology. Just as ancient Egyptians developed specialized irrigation techniques to cultivate flourishing agriculture, modern organizations must develop specialized data practices to cultivate AI that is effective, ethical, and beneficial. My new column, “The Good AI,” will examine how proper data practices form the foundation for responsible and high-performing AI systems. We’ll explore how organizations can channel their data resources to create AI applications that are not just powerful, but trustworthy, inclusive, and aligned with human values. ... As organizations increasingly integrate artificial intelligence into their operations, the need for robust AI governance has never been more critical. However, establishing effective AI governance doesn’t happen in a vacuum—it must be built upon the foundation of solid data governance practices. The path to responsible AI governance varies significantly depending on your organization’s current data governance maturity level.


AI Infrastructure Inflection Point: 60% Cloud Costs Signal Time to Go Private

Perhaps the most immediate challenge facing IT teams identified in the research is the dramatic cost scaling of public cloud AI workloads. Unlike traditional applications where cloud costs scale somewhat linearly, AI workloads create exponential cost curves due to their intensive compute and storage requirements. The research identifies a specific economic threshold where cloud costs become unsustainable. When monthly cloud spending for a given AI workload reaches 60-70% of what it would cost to purchase and operate dedicated GPU-powered infrastructure, organizations hit their inflection point. At this threshold, the total cost of ownership calculation shifts decisively toward private infrastructure. IT teams can track this inflection point by monitoring data and model-hosting requirements relative to GPU transaction throughput. ... Identifying when to move from a public cloud to private cloud or some form of on-premises deployment is critical. Thomas noted that there are many flavors of hybrid FinOps tooling available in the marketplace that, when configured appropriately for an environment, will spot trend anomalies. Anomalies may be triggered by swings in GPU utilization, costs per token/inferences, idle percentages, and data-egress fees. On-premises factors include material variations in hardware, power, cooling, operations, and more over a set period of time.
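The 60-70% threshold described above reduces to a simple ratio IT teams can track each month. This is a hedged sketch of that check; the dollar figures are illustrative assumptions, not numbers from the research, and a real private-infrastructure TCO would fold in hardware amortization, power, cooling, and operations.

```python
def cloud_inflection_ratio(monthly_cloud_cost: float,
                           monthly_private_tco: float) -> float:
    """Cloud spend for an AI workload relative to the all-in monthly cost
    of owning equivalent GPU-powered infrastructure."""
    return monthly_cloud_cost / monthly_private_tco

def past_inflection(monthly_cloud_cost: float,
                    monthly_private_tco: float,
                    threshold: float = 0.60) -> bool:
    """True once the workload enters the 60-70% band where the TCO
    calculation shifts toward private infrastructure."""
    return cloud_inflection_ratio(monthly_cloud_cost, monthly_private_tco) >= threshold
```

For example, a workload costing $130k/month in the cloud against a $200k/month private TCO sits at 65%, inside the band where the article says the calculation shifts decisively.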


AI built it, but can you trust it?

AI isn’t inherently bad nor inherently good from a security perspective. It’s another tool that can accelerate and magnify both good and bad behaviors. On the good side, if models can learn to assess the vulnerability state and general trustworthiness of app components, and factor that learning into code they suggest, AI can have a positive impact on the security of the resultant output. Open source projects can already leverage AI to help find potential vulnerabilities and even submit PRs to address them, but there still needs to be significant human oversight to ensure that the results actually improve the project’s security. ... If you simply trust an AI to generate all the artifacts needed to build, deploy, and run anything sophisticated it will be very difficult to know if it’s done so well and what risks it’s mitigated. In many ways, this looks a lot like the classic “curl and pipe to bash” kinds of risks that have long existed where users put blind trust in what they’re getting from external sources. Many times that can work out fine but sometimes it doesn’t. ... AI can create impressive results quickly but it doesn’t necessarily prioritize security and may in fact make many choices that degrade it. Have good architectures and controls and human experts that really understand the recommendations it’s making and can adapt and re-prompt as necessary to provide the right balance.


How to shift left on finops, and why you need to

Building cost awareness in devops requires asking an upfront question when spinning up new cloud environments. Developers and data scientists should ask if the forecasted cloud and other costs align with the targeted business value. When cloud costs do increase because of growing utilization, it’s important to relate the cost escalation to whether there’s been a corresponding increase in business value. The FinOps Foundation recommends that SaaS and cloud-driven commercial organizations measure cloud unit economics. The basic measure calculates the difference between marginal cost and marginal revenue and determines where cloud operations break even and begin to generate a profit. Other companies can use these concepts to correlate business value and cost and make smarter cloud architecture and automation decisions. ... “Engineers especially can get tunnel vision on delivering features and the art of code, and cost modeling should happen as a part of design, at the start of a project, not at the end,” says Mason of RecordPoint. “Companies generally limit the staff with access to and knowledge of cloud cost data, which is a mistake. Companies should strive to spread awareness of costs, educating users of services with the highest cost impacts, so that more people recognize opportunities to optimize or eliminate spend.”
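The cloud unit economics measure mentioned above can be made concrete with a small calculation: compare marginal revenue per unit (per customer, per transaction) with marginal cloud cost per unit, and find where operations break even. This is an illustrative sketch of that arithmetic, not the FinOps Foundation's formal methodology; the numbers in the example are assumptions.

```python
def unit_margin(marginal_revenue: float, marginal_cloud_cost: float) -> float:
    """Difference between marginal revenue and marginal cloud cost per unit;
    positive once each additional unit generates profit."""
    return marginal_revenue - marginal_cloud_cost

def breakeven_units(fixed_monthly_cost: float,
                    marginal_revenue: float,
                    marginal_cloud_cost: float) -> float:
    """Units needed per month before revenue covers fixed costs plus
    variable cloud spend -- the point where cloud operations begin
    to generate a profit."""
    margin = unit_margin(marginal_revenue, marginal_cloud_cost)
    if margin <= 0:
        raise ValueError("no break-even: unit margin is not positive")
    return fixed_monthly_cost / margin
```

At $5.00 revenue and $3.00 cloud cost per transaction against $10k of fixed monthly spend, break-even lands at 5,000 transactions; relating growth in that cost curve to growth in business value is exactly the correlation the article recommends.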


How Cred Built Its Observability-Led Tech Stack

Third-party integrations are critical to any fintech ecosystem, and at Cred, we manage them through a rigorous, life cycle-based third-party risk management framework. This approach is designed to minimize risk and maximize reliability, with security and resilience built in from the start. Before onboarding any external partner, whether for KYC, APIs or payment rails, we conduct thorough due diligence to evaluate their security posture. Each partner is categorized as high, medium or low risk, which then informs the depth and frequency of ongoing assessments. These reviews go well beyond standard compliance checks. ... With user goals validated, our teams then move into secure architecture design. Every integration point, data exchange and system interaction are examined to preempt vulnerabilities and ensure that sensitive information is protected by default. We use ThreatShield, an internal AI-powered threat-modeling tool, to analyze documentation and architecture against the Stride framework, a threat model designed by Microsoft that is used in cybersecurity to identify potential security threats to applications and systems. This architecture-first thinking enables us to deliver powerful features, such as surfacing hidden charges in smart statements or giving credit insights without ever compromising the user's data or experience.


How To Tackle Tech Debt Without Slowing Innovation

Implement a “boy scout rule” under which developers are encouraged to make small improvements to existing code during feature work. This maintains development momentum while gradually improving code quality, and developers are more motivated to clean up code they’re already actively working with. ... Proactively analyze user engagement metrics to pinpoint friction points where users spend excessive time. Prioritize these areas for targeted debt reduction, aligning technical improvements closely with meaningful user experience enhancements. ... Pre-vacation handovers are an excellent opportunity to reduce tech debt. Planning and carrying out handovers before we take a holiday are crucial to maintaining smooth IT operations. Giving your employees the choice to hand tasks over to automation or a human colleague can help reduce tech debt and automate tasks. Critically, it utilizes time already allocated for addressing this work. ... Resolving technical debt is development. The Shangri-la of “no tech debt” does not survive contact with reality. It’s a balance of doing what’s right for the business. Making sure the product and engineering teams are on the same page is critical. You should have sprints where tech debt is the focus.


Why cybersecurity should be seen as a business enabler, not a blocker

Among the top challenges facing the IT sector today, says Jackson, is the rapid development of the tech world. “The pace of change is outpacing many organisations’ ability to adapt securely – whether due to AI, rapid cloud adoption, evolving regulatory frameworks like DORA, or the ongoing shortage of skilled cybersecurity professionals,” he says. “These challenges, combined with cost pressures and the perception that security is not always an enabler, make adaptation even harder.” AI in particular, to no surprise, is having a significant effect on the cybersecurity world – reshaping both sides of the “cybersecurity battlefield”, according to Jackson. “We’re seeing attackers utilise large language models (LLMs) like ChatGPT to scale social engineering and refine malicious code, while defenders are using the same tools (or leveraging them in some way) to enhance threat detection, streamline triage and gain broader context at much greater speed,” he says. While he doesn’t believe AI will have as great an impact as some suggest, he says it still represents an “exciting evolution”, particularly in how it can benefit organisations. “AI won’t replace individuals such as SOC analysts anytime soon, but it can augment and support their roles freeing up time to focus on higher priority tasks,” he says.

Daily Tech Digest - June 10, 2025


Quote for the day:

"Life is not about finding yourself. Life is about creating yourself." -- Lolly Daskal


AI Is Making Cybercrime Quieter and Quicker

The rise of AI-enabled cybercrime is no longer theoretical. Nearly 72% of organisations in India said that they have encountered AI-powered cyber threats in the past year. These threats are scaling fast, with 70% of organisations reporting a 2X increase and 12% a 3X increase. This new class of AI-powered threats is harder to detect and often exploits weaknesses in human behaviour, misconfigurations, and identity systems. In India, the top AI-driven threats reported include AI-assisted credential stuffing and brute force attacks, deepfake impersonation in business email compromise (BEC), AI-powered (polymorphic) malware, automated reconnaissance of attack surfaces, and AI-generated phishing emails. ... The most disruptive threats are no longer the most obvious. Topping the list are unpatched and zero-day exploits, followed closely by insider threats, cloud misconfigurations, software supply chain attacks, and human error. These threats are particularly damaging because they often go undetected by traditional defences, exploiting internal weaknesses and visibility gaps. As a result, these quieter, more complex risks are now viewed as more dangerous than well-known threats like ransomware or phishing. Traditional threats such as phishing and malware are still growing at a rate of ~10%, but this is comparatively modest, likely due to mature defences like endpoint protection and awareness training.


The Evolution and Future of the Relationship Between Business and IT

IT professionals increasingly serve as translators — converting executive goals into technical requirements, and turning technical realities into actionable business decisions. This fusion of roles has also led to the rise of cross-functional “fusion teams,” where IT and business units co-own projects from ideation through execution. ... Artificial Intelligence is already influencing how decisions are made and systems are managed. From intelligent automation to predictive analytics, AI is redefining productivity. According to a PwC report, AI is expected to contribute over $15 trillion to the global economy by 2030 — and IT organizations will play a pivotal role in enabling this transformation. At the same time, the lines between IT and the business will continue to blur. Platforms like low-code development tools, AI copilots, and intelligent data fabrics will empower business users to create solutions without traditional IT support — requiring IT teams to pivot further into governance, enablement, and strategy. Security, compliance, and data privacy will become even more important as businesses operate across fragmented and federated environments. ... The business-IT relationship has evolved from one rooted in infrastructure ownership to one centered on service integration, strategic alignment, and value delivery. IT is no longer just the department that runs servers or writes code — it’s the nervous system that connects capabilities, ensures reliability, and enables growth.


Can regulators trust black-box algorithms to enforce financial fairness?

Regulators, in their attempt to maintain oversight and comparability, often opt for rules-based regulation, said DiRollo. These are prescriptive, detailed requirements intended to eliminate ambiguity. However, this approach unintentionally creates a disproportionate burden on smaller institutions, he continued. “Each bank must effectively build its own data architecture to interpret and implement regulatory requirements. For instance, calculating Risk-Weighted Assets (RWAs) demands banks to collate data across a myriad of systems, map this data into a bespoke regulatory model, apply overlays and assumptions to reflect the intent of the rule and interpret evolving guidance and submit reports accordingly.” ... The second issue concerns regulatory arbitrage: larger institutions with more sophisticated modelling capabilities can structure their portfolios or data in ways that reduce regulatory burdens without a corresponding reduction in actual risk. “The implication is stark: the fairness that regulators seek to enforce is undermined by the very framework designed to ensure it,” said DiRollo. While institutions pour effort into interpreting rules and submitting reports, the focus drifts from identifying and managing real risks. In practice, compliance becomes a proxy for safety – a dangerous assumption, in the words of DiRollo.


The legal questions to ask when your systems go dark

Legal should assume the worst and lean into their natural legal pessimism. There’s very little time to react, and it’s better to overreact than underreact (or not react at all). The legal context around cyber incidents is broad, but assume the worst-case scenario like a massive data breach. If that turns out to be wrong, even better! ... Even if your organization has a detailed incident response plan, chances are no one’s ever read it and that there will be people claiming “that’s not my job.” Don’t get caught up in that. Be the one who brings together management, IT, PR, and legal at the same table, and coordinate efforts from the legal perspective. ... If that means “my DPO will check the ROPA” – congrats! But if your processes are still a work in progress, you’re likely about to run a rapid, ad hoc data inventory: involving all departments, identifying data types, locations, and access controls. Yes, it will all be happening while systems are down and everyone’s panicking. But hey – serenity now, emotional damage later. You literally went to law school for this. ... You, as in-house or external legal support, really have to understand the organization and how its tech workflows actually function. I dream of a world where lawyers finally stop saying “we’ll just do the legal stuff,” because “legal stuff” remains abstract and therefore ineffective if you don’t put it in the context of a particular organization.


New Quantum Algorithm Factors Numbers With One Qubit

Ultimately, the new approach works because of how it encodes information. Classical computers use bits, which can take one of two values. Qubits, the quantum equivalent, can take on multiple values, because of the vagaries of quantum mechanics. But even qubits, once measured, can take on only one of two values, a 0 or a 1. But that’s not the only way to encode data in quantum devices, say Robert König and Lukas Brenner of the Technical University of Munich. Their work focuses on ways to encode information with continuous variables, meaning they can take on any values in a given range, instead of just certain ones. ... In the past, researchers have tried to improve on Shor’s algorithm for factoring by simulating a qubit using a continuous system, with its expanded set of possible values. But even if your system computes with continuous qubits, it will still need a lot of them to factor numbers, and it won’t necessarily go any faster. “We were wondering whether there’s a better way of using continuous variable systems,” König said. They decided to go back to basics. The secret to Shor’s algorithm is that it uses the number it’s factoring to generate what researchers call a periodic function, which has repeating values at regular intervals. Then it uses a mathematical tool called a quantum Fourier transform to identify the value of that period — how long it takes for the function to repeat.
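The periodic function at the heart of Shor's algorithm can be illustrated classically for a toy example: f(x) = aˣ mod N repeats with some period r, and knowing r yields a factor of N via a greatest common divisor. This sketch finds the period by brute force; the whole point of the quantum Fourier transform is to find r without this exponential search, so this is only a demonstration of the number theory, not of the algorithm's speedup.

```python
import math

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1, found by brute force."""
    r, value = 1, a % n
    while value != 1:
        r += 1
        value = (value * a) % n
    return r

def factor_from_period(a: int, n: int) -> int:
    """Shor's classical post-processing: a nontrivial factor of n
    from the period of f(x) = a**x mod n, when the period cooperates."""
    r = find_period(a, n)
    if r % 2:
        raise ValueError("odd period; try another base a")
    candidate = math.gcd(pow(a, r // 2) - 1, n)
    if candidate in (1, n):
        raise ValueError("trivial factor; try another base a")
    return candidate
```

For N = 15 and a = 7, the powers 7, 4, 13, 1 repeat with period 4, and gcd(7² − 1, 15) = 3 recovers a factor.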


What Are Large Action Models?

LAMs are LLMs trained on specific actions and enhanced with real connectivity to external data and systems. This makes the agents they power more robust than basic LLMs, which are limited to reasoning, retrieval and text generation. Whereas LLMs are more general-purpose, trained on a large data corpus, LAMs are more task-oriented. “LAMs fine-tune an LLM to specifically be good at recommending actions to complete a goal,” Jason Fournier, vice president of AI initiatives at the education platform Imagine Learning, told The New Stack. ... LAMs trained on internal actions could streamline industry-specific workflows as well. Imagine Learning, for instance, has developed a curriculum-informed AI framework to support teachers and students with AI-powered lesson planning. Fournier sees promise in automating administrative tasks like student registration, synthesizing data for educators and enhancing the learning experience. Or, Willson said, consider marketing: “You could tell an agentic AI platform with LAM technology, ‘Launch our new product campaign for the ACME software across all our channels with our standard messaging framework.'” Capabilities like this could save time, ensure brand consistency, and free teams to focus on high-level strategy.


Five mistakes companies make when retiring IT equipment: And how to avoid them

Outdated or unused IT assets often sit idle in storage closets, server rooms, or even employee homes for extended periods. This delay in decommissioning can create a host of problems. Unsecured, unused devices are prime targets for data breaches, theft, or accidental loss. Additionally, without a timely and consistent retirement process, organizations lose visibility into asset status, which can create confusion, non-compliance, or unnecessary costs. The best way to address this is by implementing in-house destruction solutions as an integrated part of the IT lifecycle. Rather than relying on external vendors or waiting until large volumes of devices pile up, organizations can equip themselves with high security data destruction machinery – such as hard drive shredders, degaussers, crushers, or disintegrators – designed to render data irretrievable on demand. This allows for immediate, on-site sanitization and physical destruction as soon as devices are decommissioned. Not only does this improve data control and reduce risk exposure, but it also simplifies chain-of-custody tracking by eliminating unnecessary handoffs. With in-house destruction capabilities, organizations can securely retire equipment at the pace their operations demand – no waiting, no outsourcing, and no compromise.


Event Sourcing Unpacked: The What, Why, and How

Event Sourcing offers significant benefits for systems that require persistent audit trails and rich debugging capabilities through event replay. It is especially effective in domains like finance, healthcare, e-commerce, and IoT, where every transaction or state change is critical and must be traceable. However, its complexity means that it isn’t ideal for every scenario. For applications that primarily engage in basic CRUD operations or demand immediate consistency, the overhead of managing an ever-growing event log, handling event schema evolution, and coping with eventual consistency can outweigh the benefits. In such cases, simpler persistence models may be more appropriate. When compared with related patterns, Event Sourcing naturally complements CQRS by decoupling read and write operations, and it enhances Domain-Driven Design by providing a historical record of domain events. Additionally, it underpins Event-Driven Architectures by facilitating loosely coupled, scalable communication. The decision to implement Event Sourcing should therefore balance its powerful capabilities against the operational and developmental complexities it introduces, ensuring it aligns with the project’s specific needs and long-term architectural goals.
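The core mechanic is easy to show in miniature: state is never stored directly, only derived by replaying an append-only log of events. This is a minimal sketch under assumed names (a bank-account aggregate with "deposited"/"withdrawn" events), not a production event store, which would add persistence, snapshots, and schema versioning.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    kind: str      # "deposited" or "withdrawn"
    amount: int

def apply(balance: int, event: Event) -> int:
    """Pure state-transition function: one event at a time."""
    if event.kind == "deposited":
        return balance + event.amount
    if event.kind == "withdrawn":
        return balance - event.amount
    raise ValueError(f"unknown event kind: {event.kind}")

def replay(log: list, initial: int = 0) -> int:
    """Current state is a fold over the full event history -- the same
    replay that gives event-sourced systems their audit trail and
    point-in-time debugging."""
    balance = initial
    for event in log:
        balance = apply(balance, event)
    return balance
```

Replaying a prefix of the log reconstructs the state at any earlier point, which is what makes the audit and debugging benefits described above fall out of the pattern for free.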


Using Traffic Mirroring to Debug and Test Microservices in Production-Like Environments

At its core, traffic mirroring duplicates incoming requests so that, while one copy is served by the primary service, the other is sent to an identical service running in a test or staging environment. The response from the mirrored service is never returned to the client; it exists solely to let engineers observe, compare, or process data from real-world usage. ... Real-world traffic is messy. Certain bugs only appear when a request contains a specific sequence of API calls or unexpected data patterns. By mirroring production traffic to a shadow service, developers can catch these hard-to-reproduce errors in a controlled environment. ... Mirroring production traffic allows teams to observe how a new service version handles the same load as its predecessor. This testing is particularly useful for identifying regressions in response time or resource utilization. Teams can compare metrics like CPU usage, memory consumption, and request latency between the primary and shadow services to determine whether code changes negatively affect performance. Before rolling out a new feature, developers must ensure it works correctly under production conditions. Traffic mirroring lets a new microservice version be deployed with feature flags while still serving requests from the stable version.
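The core contract of mirroring, the shadow's response is observed but never returned to the client, can be sketched at the application layer. This is an illustrative wrapper, not how a service mesh implements it (meshes like Istio mirror at the proxy layer); the handler names and in-memory shadow log are assumptions for the example.

```python
def mirror(primary, shadow, shadow_log: list):
    """Wrap two handlers so every request is duplicated to `shadow`,
    while only `primary`'s response reaches the caller."""
    def handler(request: dict):
        try:
            # Send the shadow a copy; in practice this is asynchronous, and
            # errors in the shadow path must never affect the client response.
            shadow_log.append(shadow(dict(request)))
        except Exception as exc:
            shadow_log.append({"error": repr(exc)})
        return primary(request)   # only this is returned to the client
    return handler
```

Comparing `shadow_log` entries against the primary's responses, along with latency and resource metrics for each version, is how teams spot the regressions described above before a rollout.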


Don’t be a victim of high cloud costs

The simplest reason for the rising expenses associated with cloud services is that major cloud service providers consistently increase their prices. Although competition among these providers helps keep prices stable to some extent, businesses now face inflation, the introduction of new premium services, and the complex nature of pricing models, which are often shrouded in mystery. All these factors complicate cost management. Meanwhile, many businesses have inefficient usage patterns. The typical approach to adoption involves migrating existing systems to the cloud without modifying or improving their functions for cloud environments. This “lift and shift” shortcut often leads to inefficient resource allocation and unnecessary expenses. ... First, before embracing cloud technology for its advantages, companies should develop a well-defined plan that outlines the rationale, objectives, and approach to using cloud services. Identify which tasks are suitable for cloud deployment and which are not, and assess whether a public, private, or hybrid cloud setup aligns with your business and budget objectives. Second, before transferring data, ensure that you optimize your tasks to improve efficiency and performance. Resist the urge to move existing systems to the cloud in their current state. ... Third, effectively managing cloud expenses relies on implementing strong governance practices.

Daily Tech Digest - June 01, 2025


Quote for the day:

"You are never too old to set another goal or to dream a new dream." -- C.S. Lewis


A wake-up call for real cloud ROI

To make cloud spending work for you, the first step is to stop, assess, and plan. Do not assume the cloud will save money automatically. Establish a meticulous strategy that matches workloads to the right environments, considering both current and future needs. Take the time to analyze which applications genuinely benefit from the public cloud versus alternative options. This is essential for achieving real savings and optimal performance. ... Enterprises should rigorously review their existing usage, streamline environments, and identify optimization opportunities. Invest in cloud management platforms that can automate the discovery of inefficiencies, recommend continuous improvements, and forecast future spending patterns with greater accuracy. Optimization isn’t a one-time exercise—it must be an ongoing process, with automation and accountability as central themes. Enterprises are facing mounting pressure to justify their escalating cloud spend and recapture true business value from their investments. Without decisive action, waste will continue to erode any promised benefits. ... In the end, cloud’s potential for delivering economic and business value is real, but only for organizations willing to put in the planning, discipline, and governance that cloud demands. 


Why IT-OT convergence is a gamechanger for cybersecurity

The combination of IT and OT is a powerful one. It promises real-time visibility into industrial systems, predictive maintenance that limits downtime and data-driven decision making that gives everything from supply chain efficiency to energy usage a boost. When IT systems communicate directly with OT devices, businesses gain a unified view of operations – leading to faster problem solving, fewer breakdowns, smarter automation and better resource planning. This convergence also supports cost reduction through more accurate forecasting, optimised maintenance and the elimination of redundant technologies. And with seamless collaboration, IT and OT teams can now innovate together, breaking down silos that once slowed progress. Cybersecurity maturity is another major win. OT systems, often built without security in mind, can benefit from established IT protections like centralised monitoring, zero-trust architectures and strong access controls. Concurrently, this integration lays the foundation for Industry 4.0 – where smart factories, autonomous systems and AI-driven insights thrive on seamless IT-OT collaboration. ... The convergence of IT and OT isn’t just a tech upgrade – it’s a transformation of how we operate, secure and grow in our interconnected world. But this new frontier demands a new playbook that combines industrial knowhow with cybersecurity discipline.


How To Measure AI Efficiency and Productivity Gains

Measuring AI efficiency is a little like a "chicken or the egg" discussion, says Tim Gaus, smart manufacturing business leader at Deloitte Consulting. "A prerequisite for AI adoption is access to quality data, but data is also needed to show the adoption’s success," he advises in an online interview. ... The challenge in measuring AI efficiency depends on the type of AI and how it's ultimately used, Gaus says. Manufacturers, for example, have long used AI for predictive maintenance and quality control. "This can be easier to measure, since you can simply look at changes in breakdown or product defect frequencies," he notes. "However, for more complex AI use cases -- including using GenAI to train workers or serve as a form of knowledge retention -- it can be harder to nail down impact metrics and how they can be obtained." ... Measuring any emerging technology's impact on efficiency and productivity often takes time, but impacts are always among the top priorities for business leaders when evaluating any new technology, says Dan Spurling, senior vice president of product management at multi-cloud data platform provider Teradata. "Businesses should continue to use proven frameworks for measurement rather than create net-new frameworks," he advises in an online interview. 


The discipline we never trained for: Why spiritual quotient is the missing link in leadership

Spiritual Quotient (SQ) is the intelligence that governs how we lead from within. Unlike IQ or EQ, SQ is not about skill—it is about state. It reflects a leader’s ability to operate from deep alignment with their values, to stay centred amid volatility and to make decisions rooted in clarity rather than compulsion. It shows up in moments when the metrics don’t tell the full story, when stakeholders pull in conflicting directions. When the team is watching not just what you decide, but who you are while deciding it. It’s not about belief systems or spirituality in a religious sense; it’s about coherence between who you are, what you value, and how you lead. At its core, SQ is composed of several interwoven capacities: deep self-awareness, alignment with purpose, the ability to remain still and present amid volatility, moral discernment when the right path isn’t obvious, and the maturity to lead beyond ego. ... The workplace in 2025 is not just hybrid—it is holographic. Layers of culture, technology, generational values and business expectations now converge in real time. AI challenges what humans should do. Global disruptions challenge why businesses exist. Employees are no longer looking for charismatic heroes. They’re looking for leaders who are real, reflective and rooted.


Microsoft Confirms Password Deletion—Now Just 8 Weeks Away

The company’s solution is to first move autofill and then any form of password management to Edge. “Your saved passwords (but not your generated password history) and addresses are securely synced to your Microsoft account, and you can continue to access them and enjoy seamless autofill functionality with Microsoft Edge.” Microsoft has added an Authenticator splash screen with a “Turn on Edge” button as its ongoing campaign to switch users to its own browser continues. It’s not just with passwords, of course; there are endless warnings and nags within Windows, and even pointers within security advisories to switch to Edge for safety and security. ... Microsoft wants users to delete passwords once that’s done, so no legacy vulnerability remains, although Google has not yet gone quite that far. You do need to remove SMS 2FA, though, and use an app or key-based code at a minimum. ... Notwithstanding these Authenticator changes, Microsoft users should use this as a prompt to delete passwords and replace them with passkeys, per the Windows-maker’s advice. This is especially true given increasing reports of two-factor authentication (2FA) bypasses that render basic forms of 2FA redundant.


Sustainable cyber risk management emerges as industrial imperative as manufacturers face mounting threats

The ability of a business to adjust, absorb, and continue operating under pressure is becoming a performance metric in and of itself, measured not only in uptime or safety statistics. It’s not a technical checkbox; it’s a strategic commitment that is becoming the new baseline for industrial trust and continuity. At the heart of this change lies security by design. Organizations are working to integrate security into OT environments, working their way up from system architecture to vendor procurement and lifecycle management, rather than bolting on protections after deployment. ... The path is made more difficult by the acute shortage of OT cyber skills, which can be addressed by employing specialists and building long-term pipelines through internal reskilling, knowledge-transfer procedures, and partnerships with universities. The ISA/IEC 62443 industrial cybersecurity standards can lend structure to sustainable industrial cyber risk management. These widely recognized models make cyber defense a continuous, sustainable discipline rather than an after-the-fact response; they also allow industries to link risk mitigation to real industrial processes, guarantee system interoperability, and measure progress against common benchmarks.


Design Sprint vs Design Thinking: When to Use Each Framework for Maximum Impact

The Design Sprint is a structured five-day process created by Jake Knapp during his time at Google Ventures. It condenses months of work into a single workweek, allowing teams to rapidly solve challenges, create prototypes, and test ideas with real users to get clear data and insights before committing to a full-scale development effort. Unlike the more flexible Design Thinking approach, a Design Sprint follows a precise schedule with specific activities allocated to each day ...
The Design Sprint operates on the principle of "together alone" – team members work collaboratively during discussions and decision-making, but do individual work during ideation phases to ensure diverse thinking and prevent groupthink. ... Design Thinking is well-suited for broadly exploring problem spaces, particularly when the challenge is complex, ill-defined, or requires extensive user research. It excels at uncovering unmet needs and generating innovative solutions for "wicked problems" that don't have obvious answers. The Design Sprint works best when there's a specific, well-defined challenge that needs rapid resolution. It's particularly effective when a team needs to validate a concept quickly, align stakeholders around a direction, or break through decision paralysis.


Broadcom’s VMware Financial Model Is ‘Ethically Flawed’: European Report

Some of the biggest issues facing VMware cloud partners and customers in Europe include the company increasing prices after Broadcom axed VMware’s former perpetual licenses and pay-as-you-go monthly pricing models. Another big issue was VMware cutting its product portfolio from thousands of offerings to just a few large bundles that are only available via subscription with a multi-year minimum commitment. “The current VMware licensing model appears to rely on practices that breach EU competition regulations which, in addition to imposing harm on its customers and the European cloud ecosystem, creates a material risk for the company,” said the ECCO in its report. “Their shareholders should investigate and challenge the legality of such model.” Additionally, the ECCO said Broadcom recently made changes to its partnership program that forced partners to choose between being a cloud service provider or a reseller. “It is common in Europe for CSP to play both [service provider and reseller] roles, thus these new requirements are a further harmful restriction on European cloud service providers’ ability to compete and serve European customers,” the ECCO report said.


Protecting Supply Chains from AI-Driven Risks in Manufacturing

Cybercriminals are notorious for exploiting AI and have set their sights on supply chains. Supply chain attacks are surging, with current analyses indicating a 70% likelihood of cybersecurity incidents stemming from supplier vulnerabilities. Additionally, Gartner projects that by the end of 2025, nearly half of all global organizations will have faced software supply chain attacks. Attackers manipulate data inputs to mislead algorithms, disrupt operations or steal proprietary information. Hackers targeting AI-enabled inventory systems can compromise demand forecasting, causing significant production disruptions and financial losses. ... Continuous validation of AI-generated data and forecasts ensures that AI systems remain reliable and accurate. The “black-box” nature of most AI products, where internal processes remain hidden, demands innovative auditing approaches to guarantee reliable outputs. Organizations should implement continuous data validation, scenario-based testing and expert human review to mitigate the risks of bias and inaccuracies. While black-box methods like functional testing offer some evaluation, they are inherently limited compared to audits of transparent systems, highlighting the importance of open AI development.
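The continuous validation of AI-generated forecasts described above can be as simple as comparing each forecast against the actual outcome and escalating to human review when the rolling error exceeds a tolerance. A minimal sketch, with hypothetical figures and an illustrative threshold:

```python
# Sketch of continuous validation for AI demand forecasts: compare each
# forecast against the actual value and flag the model for human review
# when the mean error exceeds a tolerance. Threshold is illustrative.

from statistics import mean

def validate_forecasts(forecasts, actuals, tolerance=0.15):
    """Return (mean absolute percentage error, needs_review flag)."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    mape = mean(errors)
    return mape, mape > tolerance

# Hypothetical weekly demand: the model drifts in the last two weeks,
# the kind of manipulation or degradation the article warns about.
forecasts = [100, 110, 105, 140, 150]
actuals   = [ 98, 112, 107, 101,  99]

mape, needs_review = validate_forecasts(forecasts, actuals)
print(f"MAPE={mape:.1%}, escalate to human review: {needs_review}")
```

Scenario-based testing extends the same idea: replay known-bad input patterns and assert the model's outputs stay within expected bounds.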


What's the State of AI Costs in 2025?

This year's report revealed that 44% of respondents plan to invest in improving AI explainability. Their goals are to increase accountability and transparency in AI systems as well as to clarify how decisions are made so that AI models are more understandable to users. Juxtaposed with uncertainty around ROI, this statistic signals further disparity between organizations' usage of AI and accurate understanding of it. ... Of the companies that use third-party platforms, over 90% reported high awareness of AI-driven revenue. That awareness empowers them to confidently compare revenue and cost, leading to very reliable ROI calculations. Conversely, companies that don't have a formal cost-tracking system have much less confidence that they can correctly determine the ROI of their AI initiatives. ... Even the best-planned AI projects can become unexpectedly expensive if organizations lack effective cost governance. This report highlights the need for companies to not merely track AI spend but optimize it via real-time visibility, cost attribution, and useful insights. Cloud-based AI tools account for almost two-thirds of AI budgets, so cloud cost optimization is essential if companies want to stop overspending. Cost is more than a metric; it's the most strategic measure of whether AI growth is sustainable. As companies implement better cost management practices and tools, they will be able to scale AI in a fiscally responsible way, confidently measure ROI, and prevent financial waste.
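The cost attribution the report links to ROI confidence starts with tagging spend by initiative so revenue and cost can be compared line by line. A minimal sketch, with hypothetical figures and tag names:

```python
# Sketch: attribute AI spend to initiatives via billing tags and compute
# per-initiative ROI. All figures and tag names are hypothetical.

from collections import defaultdict

def attribute_costs(line_items):
    """Sum tagged cloud spend per initiative."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item.get("initiative", "untagged")] += item["cost"]
    return dict(totals)

billing = [
    {"initiative": "support-bot", "cost": 4200.0},
    {"initiative": "support-bot", "cost": 1300.0},
    {"initiative": "forecasting", "cost": 2500.0},
    {"cost": 800.0},  # untagged spend -- the visibility gap the report warns about
]

costs = attribute_costs(billing)
revenue = {"support-bot": 9000.0, "forecasting": 2000.0}

for name, cost in costs.items():
    roi = (revenue.get(name, 0.0) - cost) / cost
    print(f"{name}: cost ${cost:,.0f}, ROI {roi:+.0%}")
```

Untagged spend surfaces as a bucket with undefined ROI, which is precisely the blind spot companies without formal cost tracking report.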

Daily Tech Digest - May 22, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report. These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use. ... Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place. But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause. It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.


Putting agentic AI to work in Firebase Studio

An AI assistant is like power steering. The programmer, the driver, remains in control, and the tool magnifies that control. The developer types some code, and the assistant completes the function, speeding up the process. The next logical step is to empower the assistant to take action—to run tests, debug code, mock up a UI, or perform some other task on its own. In Firebase Studio, we get a seat in a hosted environment that lets us enter prompts that direct the agent to take meaningful action. ... Obviously, we are a long way off from a non-programmer frolicking around in Firebase Studio, or any similar AI-powered development environment, and building complex applications. Google Cloud Platform, Gemini, and Firebase Studio are best-in-class tools. These kinds of limits apply to all agentic AI development systems. Still, I would in no wise want to give up my Gemini assistant when coding. It takes a huge amount of busy work off my shoulders and brings much more possibility into scope by letting me focus on the larger picture. I wonder how the path will look, how long it will take for Firebase Studio and similar tools to mature. It seems clear that something along these lines, where the AI is framed in a tool that lets it take action, is part of the future. It may take longer than AI enthusiasts predict. It may never really, fully come to fruition in the way we envision.


Edge AI + Intelligence Hub: A Match in the Making

The shop floor looks nothing like a data lake. There is telemetry data from machines, historical data, MES data in SQL, some random CSV files, and most of it lacks context. Companies that realize this—or already have an Industrial DataOps strategy—move quickly beyond these issues. Companies that don’t, end up creating a solution that works with only telemetry data (for example) and then find out they need other data. Or worse, when they get something working in the first factory, they find out factories 2, 3, and 4 have different technology stacks. ... In comes DataOps (again). Cloud AI and Edge AI have the same problems with industrial data. They need access to contextualized information across many systems. The only difference is there is no data lake in the factory—but that’s OK. DataOps can leave the data in the source systems and expose it over APIs, allowing edge AI to access the data needed for specific tasks. But just like IT, what happens if OT doesn’t use DataOps? It’s the same set of issues. If you try to integrate AI directly with data from your SCADA, historian, or even UNS/MQTT, you’ll limit the data and context to which the agent has access. SCADA/historians only have telemetry data. UNS/MQTT is report-by-exception, and AI is request/response-based (i.e., it can’t integrate). But again, I digress. Use DataOps.
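The "leave data in the source systems, expose it over APIs" pattern amounts to a request/response facade that merges telemetry with business context on demand. A minimal sketch, with the source connectors stubbed out (in practice they would query the historian, the MES database, and so on):

```python
# Sketch of an Industrial DataOps facade: data stays in the source systems,
# and an AI agent gets one contextualized request/response call.
# The connectors below are stubs with hypothetical values.

def fetch_telemetry(machine_id):
    return {"temp_c": 81.5, "vibration_mm_s": 4.2}    # stub for SCADA/historian

def fetch_mes_context(machine_id):
    return {"work_order": "WO-1042", "product": "bracket-7"}  # stub for MES SQL

def machine_snapshot(machine_id):
    """Single request/response call merging telemetry with MES context."""
    return {"machine": machine_id,
            **fetch_telemetry(machine_id),
            **fetch_mes_context(machine_id)}

print(machine_snapshot("press-3"))
```

Because the facade, not the agent, knows where each field lives, factories 2, 3, and 4 can swap in different connectors behind the same call.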


AI-driven threats prompt IT leaders to rethink hybrid cloud security

Public cloud security risks are also undergoing renewed assessment. While the public cloud was widely adopted during the post-pandemic shift to digital operations, it is increasingly seen as a source of risk. According to the survey, 70 percent of Security and IT leaders now see the public cloud as a greater risk than any other environment. As a result, an equivalent proportion are actively considering moving data back from public to private cloud due to security concerns, and 54 percent are reluctant to use AI solutions in the public cloud citing apprehensions about intellectual property protection. The need for improved visibility is emphasised in the findings. Rising sophistication in cyberattacks has exposed the limitations of existing security tools—more than half (55 percent) of Security and IT leaders reported lacking confidence in their current toolsets' ability to detect breaches, mainly due to insufficient visibility. Accordingly, 64 percent say their primary objective for the next year is to achieve real-time threat monitoring through comprehensive real-time visibility into all data in motion. David Land, Vice President, APAC at Gigamon, commented: "Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity of and vulnerability of public cloud environments. 


Taming the Hacker Storm: Why Millions in Cybersecurity Spending Isn’t Enough

The key to taming the hacker storm is founded on the core principle of trust: that the individual or company you are dealing with is who or what they claim to be and behaves accordingly. Establishing a high-trust environment can largely hinder hacker success. ... For a pervasive selective trusted ecosystem, an organization requires something beyond trusted user IDs. A hacker can compromise a user’s device and steal the trusted user ID, making identity-based trust inadequate. A trust-verified device assures that the device is secure and can be trusted. But then again, a hacker stealing a user’s identity and password can also fake the user’s device. Confirming the device’s identity—whether it is or it isn’t the same device—hence becomes necessary. The best way to ensure the device is secure and trustworthy is to employ the device identity that is designed by its manufacturer and programmed into its TPM or Secure Enclave chip. ... Trusted actions are critical in ensuring a secure and pervasive trust environment. Different actions require different levels of authentication, generating different levels of trust, which the application vendor or the service provider has already defined. An action considered high risk would require stronger authentication, also known as dynamic authentication.
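The dynamic-authentication idea above can be sketched as a mapping from action risk to a minimum authentication strength. The risk tiers and factor rankings here are illustrative assumptions, not a standard:

```python
# Sketch of dynamic authentication: each action carries a risk level that
# must be met by the strength of the presented factor. Tiers are illustrative.

FACTOR_STRENGTH = {"password": 1, "totp_app": 2, "hardware_key": 3}

ACTION_RISK = {
    "view_balance": 1,      # low risk: any authenticated session
    "change_email": 2,      # medium: app-based second factor
    "wire_transfer": 3,     # high: hardware-backed authentication
}

def is_authorized(action: str, factor: str) -> bool:
    """Allow the action only if the presented factor meets its risk tier."""
    return FACTOR_STRENGTH[factor] >= ACTION_RISK[action]

print(is_authorized("view_balance", "password"))    # True
print(is_authorized("wire_transfer", "totp_app"))   # False
```

In a full trusted ecosystem, the factor-strength side would also incorporate the TPM- or Secure Enclave-backed device identity the article describes, so that a stolen credential on an unknown device never reaches the highest tier.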


AWS clamping down on cloud capacity swapping; here’s what IT buyers need to know

For enterprises that sourced discounted cloud resources through a broker or value-added reseller (VAR), the arbitrage window shuts, Brunkard noted. Enterprises should expect a “modest price bump” on steady‑state workloads and a “brief scramble” to unwind pooled commitments. ... On the other hand, companies that buy their own RIs or SPs, or negotiate volume deals through AWS’s Enterprise Discount Program (EDP), shouldn’t be impacted, he said. Nothing changes except that pricing is now baselined. To get ahead of the change, organizations should audit their exposure and ask their managed service providers (MSPs) what commitments are pooled and when they renew, Brunkard advised. ... Ultimately, enterprises that have relied on vendor flexibility to manage overcommitment could face hits to gross margins, budget overruns, and a spike in “finance-engineering misalignment,” Barrow said. Those whose vendor models are based on RI and SP reallocation tactics will see their risk profile “changed overnight,” he said. New commitments will now essentially be non-cancellable financial obligations, and if cloud usage dips or pivots, they will be exposed. Many vendors won’t be able to offer protection as they have in the past.
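The exposure Barrow describes is easy to quantify: once a commitment is effectively non-cancellable, the cost per *used* hour climbs as usage dips below the committed level, and can exceed the on-demand rate entirely. A small worked sketch with hypothetical rates:

```python
# Sketch: effective hourly rate of a reserved commitment when usage dips.
# All rates are hypothetical, not any provider's actual pricing.

def effective_rate(committed_hours, hourly_commit_rate, used_hours):
    """Cost per *used* hour when the full commitment must be paid regardless."""
    total_cost = committed_hours * hourly_commit_rate
    return total_cost / used_hours

commit_rate = 0.60   # discounted committed rate, $/hour
on_demand   = 1.00   # $/hour without any commitment

full = effective_rate(720, commit_rate, 720)   # commitment fully used
dip  = effective_rate(720, commit_rate, 400)   # usage dipped mid-month

print(f"Fully used: ${full:.2f}/hr, after dip: ${dip:.2f}/hr "
      f"(on-demand was ${on_demand:.2f}/hr)")
```

In this hypothetical, the dip pushes the effective rate to $1.08/hr, above the $1.00 on-demand price the discount was meant to beat, which is exactly why pooled reallocation mattered to MSP customers.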


The new C-Suite ally: Generative AI

While traditional GenAI applications focus on structured datasets, a significant frontier remains largely untapped — the vast swathes of unstructured "dark data" sitting in contracts, credit memos, regulatory reports, and risk assessments. Aashish Mehta, Founder and CEO of nRoad, emphasizes this critical gap.
"Most strategic decisions rely on data, but the reality is that a lot of that data sits in unstructured formats," he explained. nRoad’s platform, CONVUS, addresses this by transforming unstructured content into structured, contextual insights. ... Beyond risk management, OpsGPT automates time-intensive compliance tasks, offers multilingual capabilities, and eliminates the need for coding through intuitive design. Importantly, Broadridge has embedded a robust governance framework around all AI initiatives, ensuring security, regulatory compliance, and transparency. Trustworthiness is central to Broadridge’s approach. "We adopt a multi-layered governance framework grounded in data protection, informed consent, model accuracy, and regulatory compliance," Seshagiri explained. ... Despite the enthusiasm, CxOs remain cautious about overreliance on GenAI outputs. Concerns around model bias, data hallucination, and explainability persist. Many leaders are putting guardrails in place: enforcing human-in-the-loop systems, regular model audits, and ethical AI use policies.


Building a Proactive Defence Through Industry Collaboration

Trusted collaboration, whether through Information Sharing and Analysis Centres (ISACs), government agencies, or private-sector partnerships, is a highly effective way to enhance the defensive posture of all participating organisations. For this to work, however, organisations will need to establish operationally secure real-time communication channels that support the rapid sharing of threat and defence intelligence. In parallel, the community will also need to establish processes to enable them to efficiently disseminate indicators of compromise (IoCs) and tactics, techniques and procedures (TTPs), backed up with best practice information and incident reports. These collective defence communities can also leverage the centralised cyber fusion centre model that brings together all relevant security functions – threat intelligence, security automation, threat response, security orchestration and incident response – in a truly cohesive way. Providing an integrated sharing platform for exchanging information among multiple security functions, today’s next-generation cyber fusion centres enable organisations to leverage threat intelligence, identify threats in real-time, and take advantage of automated intelligence sharing within and beyond organisational boundaries. 
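Efficient IoC dissemination starts with normalising indicators into a consistent record before publishing them to the community channel. The schema below is an illustrative assumption, not a standard; real exchanges typically use formats such as STIX over TAXII:

```python
# Sketch: normalise an indicator of compromise into a simple shared record.
# Field names are illustrative, not a real exchange schema.

import json
from datetime import datetime, timezone

def make_ioc_record(ioc_type, value, source, confidence):
    return {
        "type": ioc_type,              # e.g. "ip", "domain", "sha256"
        "value": value,
        "source": source,              # reporting organisation
        "confidence": confidence,      # 0.0 - 1.0
        "shared_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_ioc_record("domain", "malicious.example", "acme-soc", 0.9)
print(json.dumps(record, indent=2))
```

Attaching source and confidence up front is what lets receiving organisations automate ingestion rather than triage every shared indicator by hand.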


3 Powerful Ways AI is Supercharging Cloud Threat Detection

AI’s strength lies in pattern recognition across vast datasets. By analysing historical and real-time data, AI can differentiate between benign anomalies and true threats, improving the signal-to-noise ratio for security teams. This means fewer false positives and more confidence when an alert does sound. ... When a security incident strikes, every second counts. Historically, responding to an incident involves significant human effort – analysts must comb through alerts, correlate logs, identify the root cause, and manually contain the threat. This approach is slow, prone to errors, and doesn’t scale well. It’s not uncommon for incident investigations to stretch hours or days when done manually. Meanwhile, the damage (data theft, service disruption) continues to accrue. Human responders also face cognitive overload during crises, juggling tasks like notifying stakeholders, documenting events, and actually fixing the problem. ... It’s important to note that AI isn’t about eliminating the need for human experts but rather augmenting their capabilities. By taking over initial investigation steps and mundane tasks, AI frees up human analysts to focus on strategic decision-making and complex threats. Security teams can then spend time on thorough analysis of significant incidents, threat hunting, and improving security posture, instead of constant firefighting.
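The signal-to-noise idea can be illustrated with the simplest possible baseline model: score new events by their distance from learned normal behaviour and alert only on strong outliers. A z-score stands in here for whatever model a real detection pipeline would use; all figures are hypothetical:

```python
# Sketch: score events against a learned baseline and alert only on strong
# outliers, improving signal-to-noise. A z-score stands in for a real model.

from statistics import mean, stdev

def anomaly_score(baseline, value):
    """Standard deviations from the baseline mean."""
    return abs(value - mean(baseline)) / stdev(baseline)

# Hypothetical baseline: login attempts per minute over a quiet week.
baseline = [12, 9, 11, 10, 13, 12, 10, 11]

for attempts in (14, 55):
    score = anomaly_score(baseline, attempts)
    verdict = "ALERT" if score > 3 else "benign anomaly"
    print(f"{attempts} logins/min -> score {score:.1f} ({verdict})")
```

A slightly elevated minute scores below the threshold and stays quiet; the genuine spike clears it by an order of magnitude. That gap is what lets analysts trust the alerts that do fire.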


The hidden gaps in your asset inventory, and how to close them

The biggest blind spot isn’t a specific asset. It is trusting that what’s on paper is actually live and in production. Many organizations often solely focus on known assets within their documented environments, but this can create a false sense of security. Blind spots are not always the result of malicious intent, but rather of decentralized decision-making, forgotten infrastructure, or evolving technology that hasn’t been brought under central control. External applications, legacy technologies and abandoned cloud infrastructure, such as temporary test environments, may remain vulnerable long after their intended use. These assets pose a risk, particularly when they are unintentionally exposed due to misconfiguration or overly broad permissions. Third-party and supply chain integrations present another layer of complexity.  ... Traditional discovery often misses anything that doesn’t leave a clear, traceable footprint inside the network perimeter. That includes subdomains spun up during campaigns or product launches; public-facing APIs without formal registration or change control; third-party login portals or assets tied to your brand and code repositories, or misconfigured services exposed via DNS. These assets live on the edge, connected to the organization but not owned in a traditional sense. 
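Closing the gap between "what's on paper" and "what's live" reduces, at its core, to a set difference between the documented inventory and what external discovery actually observes. A minimal sketch with hypothetical hostnames:

```python
# Sketch: diff the documented asset inventory against externally observed
# assets to surface blind spots. Hostnames are hypothetical.

documented = {"www.example.com", "api.example.com", "vpn.example.com"}

# What external discovery (DNS enumeration, certificate transparency logs,
# cloud account scans) actually found:
observed = {"www.example.com", "api.example.com",
            "staging-2023.example.com",   # forgotten test environment
            "promo.example.com"}          # campaign subdomain, never registered

shadow_assets = observed - documented      # live but untracked: real blind spots
stale_records = documented - observed      # on paper but not seen: verify or retire

print("Shadow assets:", sorted(shadow_assets))
print("Stale records:", sorted(stale_records))
```

Both directions of the diff matter: shadow assets are the exposed edge the article describes, while stale records are the false sense of security that comes from trusting documentation.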

Daily Tech Digest - May 07, 2025


Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad


Real-world use cases for agentic AI

There’s a wealth of public code bases on which models can be trained. And larger companies typically have their own code repositories, with detailed change logs, bug fixes, and other information that can be used to train or fine-tune an AI system on a company’s internal coding methods. As AI model context windows get larger, these tools can look through more and more code at once to identify problems or suggest fixes. And the usefulness of AI coding tools is only increasing as developers adopt agentic AI. According to Gartner, AI agents enable developers to fully automate and offload more tasks, transforming how software development is done — a change that will force 80% of the engineering workforce to upskill by 2027. Today, there are several very popular agentic AI systems and coding assistants built right into integrated development environments, as well as several startups trying to break into the market with an AI focus out of the gate. ... Not every use case requires a full agentic system, he notes. For example, the company uses ChatGPT and reasoning models for architecture and design. “I’m consistently impressed by these models,” Shiebler says. For software development, however, using ChatGPT or Claude and cutting-and-pasting the code is an inefficient option, he says.


Rethinking AppSec: How DevOps, containers, and serverless are changing the rules

Application security and developers have not always been on friendly terms, but practice shows that innovative security solutions are bridging the gaps, bringing developers and security closer together in a seamless fashion, with security no longer being a hurdle in developers’ daily work. Quite the contrary – security is nested in CI/CD pipelines, it’s accessible, non-obstructive, and it’s gone beyond scanning for waves and waves of false-positive vulnerabilities. It’s become, and is poised to remain, about empowering developers to fix issues early, in context, and without affecting delivery and its velocity. ... Another significant battleground is identity. With reliance on distributed microservices, each component acts as both client and server, so misconfigured identity providers or weak token-validation logic make room for lateral movement and exponentially increased attack opportunities. Without naming names, there are plenty of cases illustrating how breaches can occur from token forgery or authorization-header manipulation. Additional headaches are exposed APIs and shadow services. Developers create new endpoints, and because of the fast pace of the process, those endpoints can easily escape scrutiny, further emphasizing the importance of continuous discovery and dynamic testing to “catch” them and ensure they’re covered in the secure development process.
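The token-forgery risks above usually trace back to a service checking only part of a token. A validating service must verify the signature, the expiry, and the audience; skipping any one of the three enables lateral movement. The hand-rolled HMAC token below is illustrative only, to make the three checks visible; production systems should use a vetted JWT library rather than this sketch:

```python
# Sketch: the three checks every token-validating service must perform.
# Hand-rolled HMAC tokens for illustration; use a vetted JWT library in practice.

import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"  # hypothetical shared key

def sign(claims: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{tag}"

def validate(token: str, expected_audience: str) -> bool:
    body, tag = token.rsplit(".", 1)
    # 1. Signature first, with a constant-time comparison.
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    # 2. Expiry and 3. audience -- weak logic here is the lateral-movement vector.
    return claims["exp"] > time.time() and claims["aud"] == expected_audience

token = sign({"sub": "svc-billing", "aud": "orders-api", "exp": time.time() + 60})
print(validate(token, "orders-api"))   # True
print(validate(token, "admin-api"))    # False: wrong audience
```

The audience check is the one most often dropped in microservice meshes: a token minted for one service should not be replayable against another, even though both trust the same identity provider.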


The Hidden Cost of Complexity: Managing Technical Debt Without Losing Momentum

Outdated, fragmented, or overly complex systems become the digital equivalent of cognitive noise. They consume bandwidth, blur clarity, and slow down both decision-making and delivery. What should be a smooth flow from idea to outcome becomes a slog. ... In short, technical debt introduces a constant low-grade drag on agility. It limits responsiveness. It multiplies cost. And like visual clutter, it contributes to fatigue—especially for architects, engineers, and teams tasked with keeping transformation moving. So what can we do?Assess System Health: Inventory your landscape and identify outdated systems, high-maintenance assets, and unnecessary complexity. Use KPIs like total cost of ownership, incident rates, and integration overhead. Prioritize for Renewal or Retirement: Not everything needs to be modernized. Some systems need replacement. Others, thoughtful containment. The key is intentionality. ... Technical debt is a measure of how much operational risk and complexity is lurking beneath the surface. It’s not just code that’s held together by duct tape or documentation gaps—it’s how those issues accumulate and impact business outcomes. But not all technical debt is created equal. In fact, some debt is strategic. It enables agility, unlocks short-term wins, and helps organizations experiment quickly. 


The Cost Conundrum of Cloud Computing

When exploring cloud pricing structures, the initial costs may seem quite attractive but after delving deeper to examine the details, certain aspects may become cloudy. The pricing tiers add a layer of complexity which means there isn’t a single recurring cost to add to the balance sheet. Rather, cloud fees vary depending on the provider, features, and several usage factors such as on-demand use, data transfer volumes, technical support, bandwidth, disk performance, and other core metrics, which can influence the overall solution’s price. However, the good news is there are ways to gain control of and manage these costs. ... Whilst understanding the costs associated with using a public cloud solution is critical, it is important to emphasise that modern cloud platforms provide robust, comprehensive and cutting-edge technologies and solutions to help drive businesses forward. Cloud platforms provide a strong foundation of physical infrastructure, robust platform-level services, and a wide array of resilient connectivity and data solutions. In addition, cloud providers continually invest in the security of their solutions to physically and logically secure the hardware and software layers with access control, monitoring tools, and stringent data security measures to keep the data safe.
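The usage factors listed above mean a monthly bill is a sum of variable components rather than one recurring fee, which is why a quiet change in one factor can move the total. A toy estimator, with every rate a hypothetical placeholder rather than any provider's actual pricing:

```python
# Sketch: a cloud bill as a sum of the usage factors the article lists.
# All rates are hypothetical placeholders, not real provider pricing.

def monthly_cost(compute_hours, egress_gb, support_tier_fee,
                 rate_per_hour=0.10, rate_per_gb=0.09):
    return (compute_hours * rate_per_hour
            + egress_gb * rate_per_gb
            + support_tier_fee)

base  = monthly_cost(720, 50, 29.0)      # steady month
spike = monthly_cost(720, 2000, 29.0)    # same compute, a data-transfer spike

print(f"Baseline: ${base:.2f}, after egress spike: ${spike:.2f}")
```

In this hypothetical, identical compute usage yields a bill more than 2.5x higher purely from data transfer, the kind of variance that makes proactive cost monitoring worthwhile.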



Operating in the light, and in the dark (net)

While the takedown of sites hosting CSA cannot be directly described in the same light, the issue is ramping up. The Internet continues to expand - like the universe - and attempting to monitor it is a never-ending challenge. As IWF’s Sexton puts it: “Right now, the Internet is so big that it’s sort of anonymity with obscurity.” While some emerging (and already emerged) technologies such as AI can play a role in assisting those working on the side of the light - for example, the IWF has tested using AI for triage when assessing websites with thousands of images, and AI can be trained for content moderation by industry and others - the proliferation of AI has also added to the problem. AI-generated content has now also entered the scene. From a legality standpoint, it remains the same as CSA content. Just because an AI created it does not mean that it’s permitted - at least in the UK, where the IWF primarily operates. “The legislation in the UK is robust enough to cover both real material, photo-realistic synthetic content, or sheerly synthetic content. The problem it does create is one of quantity. Previously, to create CSA, it would require someone to have access to a child and conduct abuse. Then, with the rise of the Internet, we also saw an increase in self-generated content. Now, AI has the ability to create it without any contact with a child at all. People now have effectively an infinite ability to generate this content.”


Why LLM applications need better memory management

Developers assume generative AI-powered tools are improving dynamically—learning from mistakes, refining their knowledge, adapting. But that’s not how it works. Large language models (LLMs) are stateless by design. Each request is processed in isolation unless an external system supplies prior context. That means “memory” isn’t actually built into the model—it’s layered on top, often imperfectly. ... Some LLM applications have the opposite problem—not forgetting too much, but remembering the wrong things. Have you ever told ChatGPT to “ignore that last part,” only for it to bring it up later anyway? That’s what I call “traumatic memory”—when an LLM stubbornly holds onto outdated or irrelevant details, actively degrading its usefulness. ... To build better LLM memory, applications need: Contextual working memory: Actively managed session context with message summarization and selective recall to prevent token overflow. Persistent memory systems: Long-term storage that retrieves based on relevance, not raw transcripts. Many teams use vector-based search (e.g., semantic similarity on past messages), but relevance filtering is still weak. Attentional memory controls: A system that prioritizes useful information while fading outdated details. Without this, models will either cling to old data or forget essential corrections.
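The "contextual working memory" requirement above can be sketched as a buffer that keeps recent turns verbatim and, when the token budget is exceeded, folds the oldest turns into a summary instead of silently dropping them. The summariser here is a stub; a real system would call an LLM for it, and the crude word-count token estimate is an assumption:

```python
# Sketch of contextual working memory: recent turns stay verbatim, older
# turns are folded into a summary when the token budget overflows.
# summarize() is a stub standing in for an LLM call.

def summarize(messages):
    return "Summary of earlier turns: " + "; ".join(m[:30] for m in messages)

class WorkingMemory:
    def __init__(self, budget_tokens=50):
        self.budget = budget_tokens
        self.summary = ""
        self.recent = []

    def _tokens(self, text):
        return len(text.split())          # crude token estimate

    def add(self, message):
        self.recent.append(message)
        while sum(self._tokens(m) for m in self.recent) > self.budget:
            oldest = self.recent.pop(0)   # fold oldest turn into the summary
            self.summary = summarize([self.summary, oldest] if self.summary
                                     else [oldest])

    def context(self):
        return ([self.summary] if self.summary else []) + self.recent

mem = WorkingMemory(budget_tokens=12)
for turn in ["user asks about pricing tiers and discounts",
             "assistant explains the enterprise tier in detail",
             "user says ignore the enterprise tier"]:
    mem.add(turn)
print(mem.context())
```

Note what this sketch does not solve: the summary keeps whatever the stub retains, so a correction like "ignore that last part" can still be lost or, worse, the retracted detail preserved. That is exactly the relevance-filtering weakness and the "traumatic memory" failure mode the article describes.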


DARPA’s Quantum Benchmarking Initiative: A Make-or-Break for Quantum Computing

While the hype around quantum computing is certainly warranted, it is often blown out of proportion. This arises occasionally due to a lack of fundamental understanding of the field. However, more often, this is a consequence of corporations obfuscating or misrepresenting facts to influence the stock market and raise capital. ... If it becomes practically applicable, quantum computing will bring a seismic shift in society, completely transforming areas such as medicine, finance, agriculture, energy, and the military, to name a few. Nonetheless, this enormous potential has resulted in rampant hype around it, while concomitantly resulting in the proliferation of bad actors seeking to take advantage of a technology not necessarily well understood by the general public. On the other hand, negativity around the technology can also cause the pendulum to swing in the other direction. ... Quantum computing is at a critical juncture. Whether it reaches its promised potential or disappears into the annals of history, much like many technologies that preceded it, will be decided in the coming years. As such, a transparent and sincere approach in quantum computing research leading to practically useful applications will inspire confidence among the masses, while false and half-baked claims will deter investments in the field, eventually leading to its demise.


The reality check every CIO needs before seeking a board seat

“CIOs think technology will get them to the boardroom,” says Shurts, who has served on multiple public- and private-company boards. “Yes, more boards want tech expertise, but you have to provide the right knowledge, breadth, and depth on topics that matter to their businesses.” ... Herein lies another conundrum for CIOs seeking spots on boards. Many see those findings and think they can help with that. But the context is more important. “In your operational role as a CIO, you’re very much involved in the details, solving problems every day,” Zarmi says. “On the board, you don’t solve the problems. You help, coach, mentor, ask questions, make suggestions, and impart wisdom, but you’re not responsible for execution.” That’s another change IT leaders need to make to position themselves for board seats. Luckily, there are tools that can help them make the leap. Quinlan, for example, got a certification from the National Association of Corporate Directors (NACD), which offers a variety of resources for aspiring board members. And he took it a few steps further by attaining a financial certification. Sure, he’d been involved in P&L management, but the certification helped him understand finance at the board’s altitude. He also added a cybersecurity certification even though he runs multi-hundred-million-dollar cyber programs. “Right, but I haven’t run it at the board, and I wanted to do that,” he says.


Applying the OODA Loop to Solve the Shadow AI Problem

Organizations should have complete visibility of their AI model inventory. Inconsistent network visibility arising from siloed networks, a lack of communication between security and IT teams, and point solutions encourages shadow AI. Complete network visibility must therefore become the priority for organizations to clearly see the extent and nature of shadow AI in their systems, thus promoting compliance, reducing risk, and promoting responsible AI use without hindering innovation. ... Organizations need to identify the effect of shadow AI once it has been discovered. This includes identifying the risks and advantages of such shadow software. ... Organizations must set clearly defined yet flexible policies regarding the acceptable use of AI to enable employees to use AI responsibly. Such policies need to allow granular control from binary approval to more sophisticated levels like providing access based on users’ role and responsibility, limiting or enabling certain functionalities within an AI tool, or specifying data-level approvals where sensitive data can be processed only in approved environments. ... Organizations must evaluate and formally incorporate shadow AI tools offering substantial value to ensure their use in secure and compliant environments. Access controls need to be tightened to avoid unapproved installations; zero trust and privilege management policies can assist in this regard. 
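The graduated policy levels described above — from binary approval through role-based access, functionality limits, and data-level restrictions — can be expressed as a layered check. This is an illustrative sketch only; the tool names, roles, and sensitivity scale are invented for the example.

```python
# Hypothetical sketch of a graduated AI-use policy: each layer narrows what the
# previous one allows. An empty role or feature set means "no restriction".
from dataclasses import dataclass, field

@dataclass
class AIToolPolicy:
    approved: bool                                        # binary approval
    allowed_roles: set = field(default_factory=set)       # role-based access
    allowed_features: set = field(default_factory=set)    # functionality limits
    max_data_sensitivity: int = 0                         # 0=public .. 3=restricted

def is_allowed(policy: AIToolPolicy, role: str, feature: str, sensitivity: int) -> bool:
    """Evaluate a request against each policy layer in turn."""
    if not policy.approved:
        return False
    if policy.allowed_roles and role not in policy.allowed_roles:
        return False
    if policy.allowed_features and feature not in policy.allowed_features:
        return False
    return sensitivity <= policy.max_data_sensitivity
```

For example, a policy might approve a chatbot for analysts to summarize internal documents (sensitivity 1) while denying code generation or any processing of restricted data — the same tool, governed at different granularities rather than a blanket yes/no.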


Cisco Pulls Together A Quantum Network Architecture

It will take a quantum network infrastructure to make a distributed quantum computing environment possible and to allow it to scale more quickly beyond the relatively small number of qubits found in current and near-future systems, Cisco scientists wrote in a research paper. Such quantum datacenters involve “multiple QPUs [quantum processing units] … networked together, enabling a distributed architecture that can scale to meet the demands of large-scale quantum computing,” they wrote. “Ultimately, these quantum data centers will form the backbone of a global quantum network, or quantum internet, facilitating seamless interconnectivity on a planetary scale.” ... The entanglement chip will be central to an entire quantum datacenter the vendor is working toward, with new versions of what is found in current classical networks, including switches and NICs. “A quantum network requires fundamentally new components that work at the quantum mechanics level,” they wrote. “When building a quantum network, we can’t digitize information as in classical networks – we must preserve quantum properties throughout the entire transmission path. This requires specialized hardware, software, and protocols unlike anything in classical networking.” 

Daily Tech Digest - April 29, 2025


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



AI and Analytics in 2025 — 6 Trends Driving the Future

As AI becomes deeply embedded in enterprise operations and agentic capabilities are unlocked, concerns around data privacy, security and governance will take center stage. With emerging technologies evolving at speed, a mindset of continuous adaptation will be required to ensure requisite data privacy, combat cyber risks and successfully achieve digital resilience. As organizations expand their global footprint, understanding the implications of evolving AI regulations across regions will be crucial. While unifying data is essential for maximizing value, ensuring compliance with diverse regulatory frameworks is mandatory. A nuanced approach to regional regulations will be key for organizations navigating this dynamic landscape. ... As the technology landscape evolves, continuous learning becomes essential. Professionals must stay updated on the latest technologies while letting go of outdated practices. Tech talent responsible for building AI systems must be upskilled in evolving AI technologies. At the same time, employees across the organization need training to collaborate effectively with AI, ensuring seamless integration and success. Whether through internal upskilling or embarking on skills-focused partnerships, investment in talent management will prove crucial to winning the tech-talent gold rush and thriving in 2025 and beyond.


Generative AI is not replacing jobs or hurting wages at all, say economists

The researchers looked at the extent to which company investment in AI has contributed to worker adoption of AI tools, and also how chatbot adoption affected workplace processes. While firm-led investment in AI boosted the adoption of AI tools — saving time for 64 to 90 percent of users across the studied occupations — chatbots had a mixed impact on work quality and satisfaction. The economists found, for example, that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves." In other words, AI is creating new work that cancels out some potential time savings from using AI in the first place. "One very stark example that is close to home for me is there are a lot of teachers who now say they spend time trying to detect whether their students are using ChatGPT to cheat on their homework," explained Humlum. He also observed that a lot of workers now say they're spending time reviewing the quality of AI output or writing prompts. Humlum argues that can be spun negatively, as a subtraction from potential productivity gains, or more positively, in the sense that automation tools historically have tended to generate more demand for workers in other tasks. "These new job tasks create new demand for workers, which may boost their wages, if these are more high value added tasks," he said.


Advancing Digital Systems for Inclusive Public Services

Uganda adopted the Modular Open Source Identity Platform (MOSIP) two years ago. A small team of 12, with limited technical expertise, began adapting the MOSIP platform to align with Uganda's Registration of Persons Act, gradually building internal capacity. By the time the system integrator was brought in, Uganda had incorporated the digital public good (DPG) into its legal framework, providing the integrator with a foundation to build upon. This early customization helped shape the legal and technical framework needed to scale the platform. But improvements are needed, particularly in the documentation of the DPG. "Standardization, information security and inclusion were central to our work with MOSIP," Kisembo said. "Consent became a critical focus and is now embedded across the platform, raising awareness about privacy and data protection." ... Nigeria, with a population of approximately 250 million, is taking steps to coordinate its previously fragmented digital systems through a national DPI framework. The country deployed multiple digital solutions over the last 10 to 15 years, which were often developed in silos by different ministries and private sector agencies. In 2023 and 2024, Nigeria developed a strategic framework to unify these systems and guide its DPI adoption. 


Eyes, ears, and now arms: IoT is alive

In just a few years, devices at home and work started including cameras to see and microphones to hear. Now, with new lines of vacuums and emerging humanoid robots, devices have appendages to manipulate the world around them. They’re not only able to collect information about their environment but can touch, “feel”, and move it. ... But, knowing the history of smart devices getting hacked, there’s cause for concern. From compromised baby monitors to open video doorbell feeds, bad actors have exploited default passwords and unencrypted communications for years. And now, beyond seeing and hearing, we’re on the verge of letting devices roam around our homes and offices with literal arms. What’s stopping a hacked robot vacuum from tampering with security systems? Or your humanoid helper from opening the front door? ... If developers want robots to become a reality, they need to create confidence in these systems immediately. This means following best practice cybersecurity by enabling peer-to-peer connectivity, outlawing generic credentials, and supporting software throughout the device lifecycle. Likewise, users can more safely participate in the robot revolution by segmenting their home networks, implementing multi-factor authentication, and regularly reviewing device permissions.


How to Launch a Freelance Software Development Career

Finding freelance work can be challenging in many fields, but it tends to be especially difficult for software developers. One reason is that many software development projects do not lend themselves well to a freelancing model because they require a lot of ongoing communication and maintenance. This means that, to freelance successfully as a developer, you'll need to seek out gigs that are sufficiently well-defined and finite in scope that you can complete them within a predictable period of time. ... Specifically, you need to envision yourself also as a project manager, a finance director, and an accountant. When you can do these things, it becomes easier not just to freelance profitably, but also to convince prospective clients that you know what you're doing and that they can trust you to complete projects with quality and on time. ... While creating a portfolio may seem obvious enough, one pitfall that new freelancers sometimes run into is being unable to share work due to nondisclosure agreements they sign with clients. When negotiating contracts, avoid this risk by ensuring that you'll retain the right to share any key aspects of a project for the purpose of promoting your own services. Even if clients won't agree to letting you share source code, they'll often at least allow you to show off the end product and discuss at a high level how you approached and completed a project.


Digital twins critical for digital transformation to fly in aerospace

Among the key conclusions were that there was a critical need to examine the standards that currently support the development of digital twins, identify gaps in the governance landscape, and establish expectations for the future. ... The net result will be that stakeholder needs and objectives become more achievable, resulting in affordable solutions that shorten test, demonstration, certification and verification, thereby decreasing lifecycle cost while increasing product performance and availability. Yet the DTC cautioned that cyber security considerations within a digital twin and across its external interfaces must be customisable to suit the environment and risk tolerance of digital twin owners. ... First, the DTC said that evidence suggests a necessity to examine the standards that currently support digital twins, identify gaps in the governance landscape, and set expectations for future standard development. In addition, the research team identified that standardisation challenges exist when developing, integrating and maintaining digital twins during design, production and sustainment. There was also a critical need to identify and manage requirements that support interoperability between digital twins throughout the lifecycle. This recommendation also applied to the more complex system-of-systems (SoS) digital twin development initiatives. Digital twin model calibration needs to be an automated process and should be applicable to dynamically varying model parameters.


Quality begins with planning: Building software with the right mindset

Too often, quality is seen as the responsibility of QA engineers. Developers write the code, QA tests it, and ops teams deploy it. But in high-performing teams, that model no longer works. Quality isn’t one team’s job; it’s everyone’s job. Architects defining system components, developers writing code, product managers defining features, and release managers planning deployments all contribute to delivering a reliable product. When quality is owned by the entire team, testing becomes a collaborative effort. Developers write testable code and contribute to test plans. Product managers clarify edge cases during requirements gathering. Ops engineers prepare for rollback scenarios. This collective approach ensures that no aspect of quality is left to chance. ... One of the biggest causes of software failure isn’t building the wrong way, it’s building the wrong thing. You can write perfectly clean, well-tested code that works exactly as intended and still fail your users if the feature doesn’t solve the right problem. That’s why testing must start with validating the requirements themselves. Do they align with business goals? Are they technically feasible? Have we considered the downstream impact on other systems or components? Have we defined what success looks like?


What Makes You a Unicorn in Your Industry? Start by Mastering These 4 Pillars

First, you have to have the capacity, the skill, to excel in that area. Additionally, you have to learn how to leverage that standout aspect to make it work for you in the marketplace - incorporating it into your branding, spotlighting it in your messaging, maybe even including it in your name. Concise as the notion is, there's actually a lot of breadth and flexibility in it, for when it comes to selecting what you want to do better than anyone else is doing it, your choices are boundless. ... Consumers have gotten quite savvy at sniffing out false sincerity, so when they come across the real thing, they're much more prone to give you their business. Basically, when your client base believes you prioritize your vision, your team and creating an incredible product or service over financial gain, they want to work with you. ... Building and maintaining a remarkable "company culture" can just be a buzzword to you, or you can bring it to life. I can't think of any single factor that makes my company more valuable to my clients than the value I place on my people and the experience I endeavor to provide them by working for me. When my staff feels openly recognized, wholly supported and vitally important to achieving our shared outcomes, we're truly unstoppable. So keep in mind that your unicorn focus can be internal, not necessarily client-facing.



Conquering the costs and complexity of cloud, Kubernetes, and AI

While IT leaders clearly see the value in platform teams—nine in 10 organizations have a defined platform engineering team—there’s a clear disconnect between recognizing their importance and enabling their success. This gap signals major stumbling blocks ahead that risk derailing platform team initiatives if not addressed early and strategically. For example, platform teams find themselves burdened by constant manual monitoring, limited visibility into expenses, and a lack of standardization across environments. These challenges are only amplified by the introduction of new and complex AI projects. ... Platform teams that manually juggle cost monitoring across cloud, Kubernetes, and AI initiatives find themselves stretched thin and trapped in a tactical loop of managing complex multi-cluster Kubernetes environments. This prevents them from driving strategic initiatives that could actually transform their organizations’ capabilities. These challenges reflect the overall complexity of modern cloud, Kubernetes, and AI environments. While platform teams are chartered with providing infrastructure and tools necessary to empower efficient development, many resort to short-term patchwork solutions without a cohesive strategy. 
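One way out of the tactical loop of manual cost monitoring is to automate the aggregation itself: pull cost line items from cloud, Kubernetes, and AI workloads into one place and flag teams over budget. The sketch below assumes a simplified record shape (team, category, USD amount) invented for illustration; real billing exports differ per provider.

```python
# Hypothetical sketch: aggregate per-team spend across cloud, Kubernetes, and
# AI cost records, then flag teams whose combined spend exceeds their budget.
from collections import defaultdict

def aggregate_spend(records):
    """Sum spend per (team, category) from raw cost records."""
    totals = defaultdict(float)
    for r in records:
        totals[(r["team"], r["category"])] += r["usd"]
    return dict(totals)

def over_budget(totals, budgets):
    """Return the teams whose combined spend exceeds their budget."""
    per_team = defaultdict(float)
    for (team, _category), usd in totals.items():
        per_team[team] += usd
    return sorted(t for t, usd in per_team.items()
                  if usd > budgets.get(t, float("inf")))
```

Even this much — a single consolidated ledger with per-team budget checks, run on a schedule — replaces the constant manual monitoring the report describes, and it only works if teams first standardize cost attribution (consistent team labels) across environments.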


Reporting lines: Could separating from IT help CISOs?

CFOs may be primarily concerned with the financial performance of the business, but they also play a key role in managing organizational risk. This is where CISOs can learn the tradecraft of translating technical measures into business risk management. ... “A CFO comes through the finance ranks without a lot of exposure to IT and I can see how they’re incentivized to hit targets and forecasts, rather than thinking: if I spend another two million on cyber risk mitigation, I may save 20 million in three years’ time because an incident was prevented,” says Schat. Budgeting and forecasting cycles can be a mystery to CISOs, who may engage with the CFO infrequently, and interactions are mostly transactional around budget sign-off on cybersecurity initiatives, according to Gartner. ... It’s not uncommon for CISOs to find security seen as a barrier, where the benefits aren’t always obvious, and are actually at odds with the metrics that drive the CIO. “Security might slow down a project, introduce a layer of complexity that we need from a security perspective, but it doesn’t obviously help the customer,” says Bennett. Reporting to CFOs can relieve potential conflicts of interest. It can allow CISOs to broaden their involvement across all areas of the organization, beyond input in technology, because security and managing risk is a whole-of-business mission.