Daily Tech Digest - June 30, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The first step in modernization: Ditching technical debt

At a high level, determining when it’s time to modernize is about quantifying cost, risk, and complexity. In dollar terms, it may seem as simple as comparing the expense of maintaining legacy systems versus investing in new architecture. But the true calculation includes hidden costs, like the developer hours lost to patching outdated systems, and the opportunity cost of not being able to adapt quickly to business needs. True modernization is not a lift-and-shift — it’s a full-stack transformation. That means breaking apart monolithic applications into scalable microservices, rewriting outdated application code into modern languages, and replacing rigid relational data models with flexible, cloud-native platforms that support real-time data access, global scalability, and developer agility. Many organizations have partnered with MongoDB to achieve this kind of transformation. ... But modernization projects are usually a balancing act, and replacing everything at once can be a gargantuan task. Choosing how to tackle the problem comes down to priorities, determining where pain points exist and where the biggest impacts to the business will be. The cost of doing nothing will outrank the cost of doing something.


Is Your CISO Ready to Flee?

“A well-funded CISO with an under-resourced security team won’t be effective. The focus should be on building organizational capability, not just boosting top salaries.” While Deepwatch CISO Chad Cragle believes any CISO just in the role for the money has “already lost sight of what really matters,” he agrees that “without the right team, tools, or board access, burnout is inevitable.” Real impact, he contends, “only happens when security is valued and you’re empowered to lead.” Perhaps that stands as evidence that SMBs that want to retain their talent or attract others should treat the CISO holistically. “True professional fulfillment and long-term happiness in the CISO role stems from the opportunities for leadership, personal and professional growth, and, most importantly, the success of the cybersecurity program itself,” says Black Duck CISO Bruce Jenkins. “When cyber leaders prioritize the development and execution of a comprehensive, efficient, and effective program that delivers demonstrable value to the business, appropriate compensation typically follows as a natural consequence.” Concerns around budget constraints is that all CISOs at this point (private AND public sector) have been through zero-based budget reviews several times. If the CISO feels unsafe and unable to execute, they will be incentivized to find a safer seat with an org more prepared to invest in security programs.


AI is learning to lie, scheme, and threaten its creators

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception." The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." ... "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around." Researchers are exploring various approaches to address these challenges.


The network is indeed trying to become the computer

Think of the scale-up networks such as the NVLink ports and NVLink Switch fabrics that are part and parcel of an GPU accelerated server node – or, these days, a rackscale system like the DGX NVL72 and its OEM and ODM clones. These memory sharing networks are vital for ever-embiggening AI training and inference workloads. As their parameter counts and token throughput requirements both rise, they need ever-larger memory domains to do their work. Throw in a mixture of expert models and the need for larger, fatter and faster scale-up networks, as they are now called, is obvious even to an AI model with only 7 billion parameters. ... Then there is the scale-out network, which is used to link nodes in distributed systems to each other to share work in a less tightly coupled way than the scale-up network affords. This is the normal networking we are familiar with in distributed HPC systems, which is normally Ethernet or InfiniBand and sometimes proprietary networks like those from Cray, SGI, Fujitsu, NEC, and others from days gone by. On top of this, we have the normal north-south networking stack that allows people to connect to systems and the east-west networks that allow distributed corporate systems running databases, web infrastructure, and other front-office systems to communicate with each other. 


What Can We Learn From History’s Most Bizarre Software Bugs?

“It’s never just one thing that causes failure in complex systems.” In risk management, this is known as the Swiss cheese model, where flaws that occur in one layer aren’t as dangerous as deeper flaws overlapping through multiple layers. And as the Boeing crash proves, “When all of them align, that’s what made it so deadly.” It is difficult to test for every scenario. After all, the more inputs you have, the more possible outputs — and “this is all assuming that your system is deterministic.” Today’s codebases are massive, with many different contributors and entire stacks of infrastructure. “From writing a piece of code locally to running it on a production server, there are a thousand things that could go wrong.” ... It was obviously a communication failure, “because NASA’s navigation team assumed everything was in metric.” But you also need to check the communication that’s happening between the two systems. “If two systems interact, make sure they agree on formats, units, and overall assumptions!” But there’s another even more important lesson to be learned. “The data had shown inconsistencies weeks before the failure,” Bajić says. “NASA had seen small navigation errors, but they weren’t fully investigated.”


Europe’s AI strategy: Smart caution or missed opportunity?

Companies in Europe are spending less on AI, cloud platforms, and data infrastructure. In high-tech sectors, productivity growth in the U.S. has far outpaced Europe. The report argues that AI could help close the gap, but only if it is used to redesign how businesses operate. Using AI to automate old processes is not enough. ... Feinberg also notes that many European companies assumed AI apps would be easier to build than traditional software, only to discover they are just as complex, if not more so. This mismatch between expectations and reality has slowed down internal projects. And the problem isn’t unique to Europe. As Oliver Rochford, CEO of Aunoo AI, points out, “AI project failure rates are generally high across the board.” He cites surveys from IBM, Gartner, and others showing that anywhere from 30 to 84 percent of AI projects fail or fall short of expectations. “The most common root causes for AI project failures are also not purely technical, but organizational, misaligned objectives, poor data governance, lack of workforce engagement, and underdeveloped change management processes. Apparently Europe has no monopoly on those.”


A Developer’s Guide to Building Scalable AI: Workflows vs Agents

Sometimes, using an agent is like replacing a microwave with a sous chef — more flexible, but also more expensive, harder to manage, and occasionally makes decisions you didn’t ask for. ... Workflows are orchestrated. You write the logic: maybe retrieve context with a vector store, call a toolchain, then use the LLM to summarize the results. Each step is explicit. It’s like a recipe. If it breaks, you know exactly where it happened — and probably how to fix it. This is what most “RAG pipelines” or prompt chains are. Controlled. Testable. Cost-predictable. The beauty? You can debug them the same way you debug any other software. Stack traces, logs, fallback logic. If the vector search fails, you catch it. If the model response is weird, you reroute it. ... Agents, on the other hand, are built around loops. The LLM gets a goal and starts reasoning about how to achieve it. It picks tools, takes actions, evaluates outcomes, and decides what to do next — all inside a recursive decision-making loop. ... You can’t just set a breakpoint and inspect the stack. The “stack” is inside the model’s context window, and the “variables” are fuzzy thoughts shaped by your prompts. When something goes wrong — and it will — you don’t get a nice red error message. 


Leveraging Credentials As Unique Identifiers: A Pragmatic Approach To NHI Inventories

Most teams struggle with defining NHIs. The canonical definition is simply "anything that is not a human," which is necessarily a wide set of concerns. NHIs manifest differently across cloud providers, container orchestrators, legacy systems, and edge deployments. A Kubernetes service account tied to a pod has distinct characteristics compared to an Azure managed identity or a Windows service account. Every team has historically managed these as separate concerns. This patchwork approach makes it nearly impossible to create a consistent policy, let alone automate governance across environments. ... Most commonly, this takes the form of secrets, which look like API keys, certificates, or tokens. These are all inherently unique and can act as cryptographic fingerprints across distributed systems. When used in this way, secrets used for authentication become traceable artifacts tied directly to the systems that generated them. This allows for a level of attribution and auditing that's difficult to achieve with traditional service accounts. For example, a short-lived token can be directly linked to a specific CI job, Git commit, or workload, allowing teams to answer not just what is acting, but why, where, and on whose behalf.


How Is AI Really Impacting Jobs In 2025?

Pessimists warn of potential mass unemployment leading to societal collapse. Optimists predict a new age of augmented working, making us more productive and freeing us to focus on creativity and human interactions. There are plenty of big-picture forecasts. One widely-cited WEF prediction claims AI will eliminate 92 million jobs while creating 170 million new, different opportunities. That doesn’t sound too bad. But what if you’ve worked for 30 years in one of the jobs that’s about to vanish and have no idea how to do any of the new ones? Today, we’re seeing headlines about jobs being lost to AI with increasing frequency. And, from my point of view, not much information about what’s being done to prepare society for this potentially colossal change. ... An exacerbating factor is that many of the roles that are threatened are entry-level, such as junior coders or designers, or low-skill, including call center workers and data entry clerks. This means there’s a danger that AI-driven redundancy will disproportionately hit economically disadvantaged groups. There’s little evidence so far that governments are prioritizing their response. There have been few clearly articulated strategies to manage the displacement of jobs or to protect vulnerable workers.


AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of similar applications and products, tailored and designed for specific sectors, are growing fast. ... If the search for AGI is reminiscent of a cash-rich unicorn aiming for growth at all costs, then AAI is more scrappy. Like a bootstrapped startup that requires immediate profitability, it prizes tangible impact over long-term ambitions to take over the world. The aspirations—and perhaps the algorithms themselves—may be more modest. Still, the context makes them potentially transformative: if reliable and widely adopted, such systems could reach millions of users who have until now been on the margins of the digital economy. ... All this points to a potentially unexpected scenario, one in which the lessons of AI flow not along the usual contours of global geopolitics and economic power—but percolate rather upward, from the laboratories and pilot programs of the Global South toward the boardrooms and research campuses of the North. This doesn’t mean that the quest for AGI is necessarily misguided. It’s possible that AI may yet end up redefining intelligence.

Daily Tech Digest - June 29, 2025


Quote for the day:

“Great minds discuss ideas; average minds discuss events; small minds discuss people.” -- Eleanor Roosevelt


Who Owns End-of-Life Data?

Enterprises have never been more focused on data. What happens at the end of that data's life? Who is responsible when it's no longer needed? Environmental concerns are mounting as well. A Nature study warns that AI alone could generate up to 5 million metric tons of e-waste by 2030. A study from researchers at Cambridge University and the Chinese Academy of Sciences said top reason enterprises dispose of e-waste rather than recycling computers is the cost. E-waste can contain metals, including copper, gold, silver aluminum and rare earth elements, but proper handling is expensive. Data security is a concern as well as breach proofing doesn't get better than destroying equipment. ... End-of-life data management may sit squarely in the realm of IT, but it increasingly pulls in compliance, risk and ESG teams, the report said. Driven by rising global regulations and escalating concerns over data leaks and breaches, C-level involvement at every stage signals that end-of-life data decisions are being treated as strategically vital - not simply handed off. Consistent IT participation also suggests organizations are well-positioned to select and deploy solutions that work with their existing tech stack. That said, shared responsibility doesn't guarantee seamless execution. Multiple stakeholders can lead to gaps unless underpinned by strong, well-communicated policies, the report said.


How AI is Disrupting the Data Center Software Stack

Over the years, there have been many major shifts in IT infrastructure – from the mainframe to the minicomputer to distributed Windows boxes to virtualization, the cloud, containers, and now AI and GenAI workloads. Each time, the software stack seems to get torn apart. What can we expect with GenAI? ... Galabov expects severe disruption in the years ahead on a couple of fronts. Take coding, for example. In the past, anyone wanting a new industry-specific application for their business might pay five figures for development, even if they went to a low-cost region like Turkey. For homegrown software development, the price tag would be much higher. Now, an LLM can be used to develop such an application for you. GenAI tools have been designed explicitly to enhance and automate several elements of the software development process. ... Many enterprises will be forced to face the reality that their systems are fundamentally legacy platforms that are unable to keep pace with modern AI demands. Their only course is to commit to modernization efforts. Their speed and degree of investment are likely to determine their relevance and competitive positioning in a rapidly evolving market. Kleyman believes that the most immediate pressure will fall on data-intensive, analytics-driven platforms such as CRM and business intelligence (BI). 


AI Improves at Improving Itself Using an Evolutionary Trick

The best SWE-bench agent was not as good as the best agent designed by expert humans, which currently scores about 70 percent, but it was generated automatically, and maybe with enough time and computation an agent could evolve beyond human expertise. The study is a “big step forward” as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement. Jiang, who was not involved in the study, said the approach could made further progress if it modified the underlying LLM, or even the chip architecture. DGMs can theoretically score agents simultaneously on coding benchmarks and also specific applications, such as drug design, so they’d get better at getting better at designing drugs. Zhang said she’d like to combine a DGM with AlphaEvolve. ... One concern with both evolutionary search and self-improving systems—and especially their combination, as in DGM—is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned.


Data center costs surge up to 18% as enterprises face two-year capacity drought

Smart enterprises are adapting with creative strategies. CBRE’s Magazine emphasizes “aggressive and long-term planning,” suggesting enterprises extend capacity forecasts to five or 10 years, and initiate discussions with providers much earlier than before. Geographic diversification has become essential. While major hubs price out enterprises, smaller markets such as São Paulo saw pricing drops of as much as 20.8%, while prices in Santiago fell 13.7% due to shifting supply dynamics. Magazine recommended “flexibility in location as key, exploring less-constrained Tier 2 or Tier 3 markets or diversifying workloads across multiple regions.” For Gogia, “Tier-2 markets like Des Moines, Columbus, and Richmond are now more than overflow zones, they’re strategic growth anchors.” Three shifts have elevated these markets: maturing fiber grids, direct renewable power access, and hyperscaler-led cluster formation. “AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.


Fintech’s AI Obsession Is Useless Without Culture, Clarity and Control

what does responsible AI actually mean in a fintech context? According to PwC’s 2024 Responsible AI Survey, it encompasses practices that ensure fairness, transparency, accountability and governance throughout the AI lifecycle. It’s not just about reducing model bias — it’s about embedding human oversight, securing data, ensuring explainability and aligning outputs with brand and compliance standards. In financial services, these aren’t "nice-to-haves" — they’re essential for scaling AI safely and effectively. Financial marketing is governed by strict regulations and AI-generated content can create brand and legal risks. ... To move AI adoption forward responsibly, start small. Low-risk, high-reward use cases let teams build confidence and earn trust from compliance and legal stakeholders. Deloitte’s 2024 AI outlook recommends beginning with internal applications that use non-critical data — avoiding sensitive inputs like PII — and maintaining human oversight throughout. ... As BCG highlights, AI leaders devote 70% of their effort to people and process — not just technology. Create a cross-functional AI working group with stakeholders from compliance, legal, IT and data science. This group should define what data AI tools can access, how outputs are reviewed and how risks are assessed.


Is Microsoft’s new Mu for you?

Mu uses a transformer encoder-decoder design, which means it splits the work into two parts. The encoder takes your words and turns them into a compressed form. The decoder takes that form and produces the correct command or answer. This design is more efficient than older models, especially for tasks such as changing settings. Mu has 32 encoder layers and 12 decoder layers, a setup chosen to fit the NPU’s memory and speed limits. The model utilizes rotary positional embeddings to maintain word order, dual-layer normalization to maintain stability, and grouped-query attention to use memory more efficiently. ... Mu is truly groundbreaking because it is the first SLM built to let users control system settings using natural language, running entirely on a mainstream shipping device. Apple’s iPhones, iPads, and Macs all have a Neural Engine NPU and run on-device AI for features like Siri and Apple Intelligence. But Apple does not have a small language model as deeply integrated with system settings as Mu. Siri and Apple Intelligence can change some settings, but not with the same range or flexibility. ... By processing data directly on the device, Mu keeps personal information private and responds instantly. This shift also makes it easier to comply with privacy laws in places like Europe and the US since no data leaves your computer.


Is It a Good Time to Be a Software Engineer?

AI may be rewriting the rules of software development, but it hasn’t erased the thrill of being a programmer. If anything, the machines have revitalised the joy of coding. New tools make it possible to code in natural language, ship prototypes in hours, and bypass tedious setup work. From solo developers to students, the process may feel more immediate or rewarding. Yet, this sense of optimism exists alongside an undercurrent of anxiety. As large language models (LLMs) begin to automate vast swathes of development, some have begun to wonder if software engineering is still a career worth betting on. ... Meanwhile, Logan Thorneloe, a software engineer at Google, sees this as a golden era for developers. “Right now is the absolute best time to be a software engineer,” he wrote on LinkedIn. He points out “development velocity” as the reason. Thorneleo believes AI is accelerating workflows, shrinking prototype cycles from months to days, and giving developers unprecedented speed. Companies that adapt to this shift will win, not by eliminating engineers, but by empowering them. More than speed, there’s also a rediscovered sense of fun. Programmers who once wrestled with broken documentation and endless boilerplate are rediscovering the creative satisfaction that first drew them to the field. 


Dumping mainframes for cloud can be a costly mistake

Despite industry hype, mainframes are not going anywhere. They quietly support the backbone of our largest banks, governments, and insurance companies. Their reliability, security, and capacity for massive transactions give mainframes an advantage that most public cloud platforms simply can’t match for certain workloads. ... At the core of this conversation is culture. An innovative IT organization doesn’t pursue technology for its own sake. Instead, it encourages teams to be open-minded, pragmatic, and collaborative. Mainframe engineers have a seat at the architecture table alongside cloud architects, data scientists, and developers. When there’s mutual respect, great ideas flourish. When legacy teams are sidelined, valuable institutional knowledge and operational stability are jeopardized. A cloud-first mantra must be replaced by a philosophy of “we choose the right tool for the job.” The financial institution in our opening story learned this the hard way. They had to overcome their bias and reconnect with their mainframe experts to avoid further costly missteps. It’s time to retire the “legacy versus modern” conflict and recognize that any technology’s true value lies in how effectively it serves business goals. Mainframes are part of a hybrid future, evolving alongside the cloud rather than being replaced by it. 


Why Modern Data Archiving Is Key to a Scalable Data Strategy

Organizations are quickly learning they can’t simply throw all data, new and old, at an AI strategy; instead, it needs to be accurate, accessible, and, of course, cost-effective. Without these requirements in place, it’s far from certain AI-powered tools can deliver the kind of insight and reliability businesses need. As part of the various data management processes involved, archiving has taken on a new level of importance. ... For organizations that need to migrate data, for example, archiving is used to identify which essential datasets, while enabling users to offload inactive data in the most cost-effective way. This kind of win-win can also be applied to cloud resources, where moving data to the most appropriate service can potentially deliver significant savings. Again, this contrasts with tiering systems and NAS gateways, which rely on global file systems to provide cloud-based access to local files. The challenge here is that access is dependent on the gateway remaining available throughout the data lifecycle because, without it, data recall can be interrupted or cease entirely. ... It then becomes practical to strike a much better balance across the typical enterprise storage technology stack, including long-term data preservation and compliance, where data doesn’t need to be accessed so often, but where reliability and security are crucial.


The Impact of Regular Training and Timely Security Policy Changes on Dev Teams

Constructive refresher training drives continuous improvement by reinforcing existing knowledge while introducing new concepts like AI-powered code generation, automated debugging and cross-browser testing in manageable increments. Teams that implement consistent training programs see significant productivity benefits as developers spend less time struggling with unfamiliar tools and more time automating tasks to focus on delivering higher value. ... Security policies that remain static as teams grow create dangerous blind spots, compromising both the team’s performance and the organization’s security posture. Outdated policies fail to address emerging threats like malware infections and often become irrelevant to the team’s current workflow, leading to workarounds and system vulnerabilities. ... Proactive security integration into development workflows represents a fundamental shift from reactive security measures to preventative strategies. This approach enables growing teams to identify and address security concerns early in the development process, reducing the cost and complexity of remediation. Cultivating a security-first culture becomes increasingly important as teams grow. This involves embedding security considerations into various stages of the development life cycle. Early risk identification in cloud infrastructure reduces costly breaches and improves overall team productivity.

Daily Tech Digest - June 25, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein



Why data observability is the missing layer of modern networking

You might hear people use these terms interchangeably, but they’re not the same thing. Visibility is about what you can see – dashboard statistics, logs, uptime numbers, bandwidth figures, the raw data that tells you what’s happening across your network. Observability, on the other hand, is about what that data actually means. It’s the ability to interpret, analyse, and act on those insights. It’s not just about seeing a traffic spike but instead understanding why it happened. It’s not just spotting a latency issue, but knowing which apps are affected and where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and scalable. It’s about building infrastructure that supports cloud, remote work, and everything in between. Whether you’re adding a new site, onboarding a remote team, or launching a cloud-hosted app, your network should be able to scale and respond at speed. Then there’s security, a non-negotiable layer that protects your entire ecosystem. Great security isn’t about throwing up walls, it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, without making users lives harder. ... Finally, we come to observability. Arguably the most unappreciated of the three but quickly becoming essential. 


6 Key Security Risks in LLMs: A Platform Engineer’s Guide

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive data. These attacks range from simple jailbreak prompts that override safety rules to more advanced exploits that influence backend systems. ... Model extraction attacks allow adversaries to systematically query an LLM to reconstruct its knowledge base or training data, essentially cloning its capabilities. These attacks often rely on automated scripts submitting millions of queries to map the model’s responses. One common technique, model inversion, involves strategically structured inputs that extract sensitive or proprietary information embedded in the model. Attackers may also use repeated, incremental queries with slight variations to amass a dataset that mimics the original training data. ... On the output side, an LLM might inadvertently reveal private information embedded in its dataset or previously entered user data. A common risk scenario involves users unknowingly submitting financial records or passwords into an AI-powered chatbot, which could then store, retrieve or expose this data unpredictably. With cloud-based LLMs, the risk extends further. Data from one organization could surface in another’s responses.


Adopting Agentic AI: Ethical Governance, Business Impact, Talent Demand, and Data Security

Agentic AI introduces a spectrum of ethical challenges that demand proactive governance. Given its capacity for independent decision-making, there is a heightened need for transparent, accountable, and ethically driven AI models. Ethical governance in Agentic AI revolves around establishing robust policies that govern decision logic, bias mitigation, and accountability. Organizations leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory compliance to avoid unintended consequences. ... The integration of Agentic AI into business ecosystems promises not just automation but strategic enhancement of decision-making. These AI agents are designed to process real-time data, predict market shifts, and autonomously execute decisions that would traditionally require human intervention. In sectors such as finance, healthcare, and manufacturing, Agentic AI is optimizing supply chains, enhancing predictive analytics, and streamlining operations with unparalleled accuracy. ... One of the major concerns surrounding Agentic AI is data security. Autonomous decision-making systems require vast amounts of real-time data to function effectively, raising questions about data privacy, ownership, and cybersecurity. Cyber threats aimed at exploiting autonomous decision-making could have severe consequences, especially in sectors like finance and healthcare.


Unveiling Supply Chain Transformation: IIoT and Digital Twins

Digital twins and IIoTs are evolving technologies that are transforming the digital landscape of supply chain transformation. The IIoT aims to connect to actual physical sensors and actuators. On the other hand, DTs are replica copies that virtually represent the physical components. The DTs are invaluable for testing and simulating design parameters instead of disrupting production elements. ... Contrary to generic IoT, which is more oriented towards consumers, the IIoT enables the communication and interconnection between different machines, industrial devices, and sensors within a supply chain management ecosystem with the aim of business optimization and efficiency. The incubation of IIoT in supply chain management systems aims to enable real-time monitoring and analysis of industrial environments, including manufacturing, logistics management, and supply chain. It boosts efforts to increase productivity, cut downtime, and facilitate information and accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in boosting real-time monitoring and enabling informed decision-making. Every stage of the supply chain ecosystem will have the impact of IIoT, like automated inventory management, health monitoring of goods and their tracking, analytics, and real-time response to meet the current marketplace. 


The state of cloud security

An important complicating factor in all this is that customers don’t always know what’s happening in cloud data centers. At the same time, De Jong acknowledges that on-premises environments have the same problem. “There’s a spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé agrees with: “You have to align many things between on-prem and cloud.” Andre Honders points to a specific aspect of the cloud: “You can be in a shared environment with ten other customers. This means you have to deal with different visions and techniques that do not exist on-premises.” This is certainly the case. There are plenty of worst case scenarios to consider in the public cloud. ... However, a major bottleneck remains the lack of qualified personnel. We hear this all the time when it comes to security. And in other IT fields too, as it happens, meaning one could draw a society-wide conclusion. Nevertheless, staff shortages are perhaps more acute in this sector. Erik de Jong sees society as a whole having similar problems, at any rate. “This is not an IT problem. Just ask painters. In every company, a small proportion of the workforce does most of the work.” Wesley Swartelé agrees it is a challenge for organizations in this industry to find the right people. “Finding a good IT professional with the right mindset is difficult.


As AI reshapes the enterprise, security architecture can’t afford to lag behind

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners. What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. ... As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread. But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. 


Identity Is the New Perimeter: Why Proofing and Verification Are Business Imperatives

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss. Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions. Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors. Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping. Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments. 


Why should companies or organizations convert to FIDO security keys?

FIDO security keys significantly reduce the risk of phishing, credential theft, and brute-force attacks. Because they don’t rely on shared secrets like passwords, they can’t be reused or intercepted. Their phishing-resistant protocol ensures authentication is only completed with the correct web origin. FIDO security keys also address insider threats and endpoint vulnerabilities by requiring physical presence, further enhancing protection, especially in high-security environments such as healthcare or public administration. ... In principle, any organization that prioritizes a secure IT infrastructure stands to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a small business protecting customer data or a global enterprise managing complex access structures, FIDO security keys provide a robust, phishing-resistant alternative to passwords. That said, sectors with heightened regulatory requirements, such as healthcare, finance, public administration, and critical infrastructure, have particularly strong incentives to adopt strong authentication. In these fields, the risk of breaches is not only costly but can also have legal and operational consequences. FIDO security keys are also ideal for restricted environments, such as manufacturing floors or emergency rooms, where smartphones may not be permitted. 


Data Warehouse vs. Data Lakehouse

Data warehouses and data lakehouses have emerged as two prominent adversaries in the data storage and analytics markets, each with advantages and disadvantages. The primary difference between these two data storage platforms is that while the data warehouse is capable of handling only structured and semi-structured data, the data lakehouse can store unlimited amounts of both structured and unstructured data – and without any limitations. ... Traditional data warehouses have long supported all types of business professionals in their data storage and analytics endeavors. This approach involves ingesting structured data into a centralized repository, with a focus on warehouse integration and business intelligence reporting. Enter the data lakehouse approach, which is vastly superior for deep-dive data analysis. The lakehouse has successfully blended characteristics of the data warehouse and the data lake to create a scalable and unrestricted solution. The key benefit of this approach is that it enables data scientists to quickly extract insights from raw data with advanced AI tools. ... Although a data warehouse supports BI use cases and provides a “single source of truth” for analytics and reporting purposes, it can also become difficult to manage as new data sources emerge. The data lakehouse has redefined how global businesses store and process data. 


AI or Data Governance? Gartner Says You Need Both

Data and analytics leaders, such as chief data officers, or CDOs, and chief data and analytics officers, or CDAOs, play a significant role in driving their organizations' data and analytics, D&A, successes, which are necessary to show business value from AI projects. Gartner predicts that by 2028, 80% of gen AI business apps will be developed on existing data management platforms. Their analysts say, "This is the best time to be in data and analytics," and CDAOs need to embrace the AI opportunity eyed by others in the C-suite, or they will be absorbed into other technical functions. With high D&A ambitions and AI pilots becoming increasingly ubiquitous, focus is shifting toward consistent execution and scaling. But D&A leaders are overwhelmed with their routine data management tasks and need a new AI strategy. ... "We've never been good at governance, and now AI demands that we be even faster, which means you have to take more risks and be prepared to fail. We have to accept two things: Data will never be fully governed. Secondly, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. He said trust models provide a trust rating for data assets by examining their value, lineage and risk. They offer up-to-date information on data trustworthiness and are crucial for fostering confidence. 

Daily Tech Digest - June 24, 2025


Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal


Why Agentic AI Is a Developer's New Ally, Not Adversary

Because agentic AI can complete complex workflows rather than simply generating content, it opens the door to a variety of AI-assisted use cases in software development that extend far beyond writing code — which, to date, has been the main way that software developers have leveraged AI. ... But agentic AI eliminates the need to spell out instructions or carry out manual actions entirely. With just a sentence or two, developers can prompt AI to perform complex, multi-step tasks. It's important to note that, for the most part, agentic AI use cases like those described above remain theoretical. Agentic AI remains a fairly new and quickly evolving field. The technology to do the sorts of things mentioned here theoretically exists, but existing tool sets for enabling specific agentic AI use cases are limited. ... It's also important to note that agentic AI poses new challenges for software developers. One is the risk that AI will make the wrong decisions. Like any LLM-based technology, AI agents can hallucinate, causing them to perform in undesirable ways. For this reason, it's tough to imagine entrusting high-stakes tasks to AI agents without requiring a human to supervise and validate them. Agentic AI also poses security risks. If agentic AI systems are compromised by threat actors, any tools or data that AI agents can access (such as source code) could also be exposed.


Modernizing Identity Security Beyond MFA

The next phase of identity security must focus on phishing-resistant authentication, seamless access, and decentralized identity management. The key principle guiding this transformation is a principle of phishing resistance by design. The adoption of FIDO2 and WebAuthn standards enables passwordless authentication using cryptographic key pairs. Because the private key never leaves the user’s device, attackers cannot intercept it. These methods eliminate the weakest link — human error — by ensuring that authentication remains secure even if users unknowingly interact with malicious links or phishing campaigns. ... By leveraging blockchain-based verified credentials — digitally signed, tamper-evident credentials issued by a trusted entity — wallets enable users to securely authenticate to multiple resources without exposing their personal data to third parties. These credentials can include identity proofs, such as government-issued IDs, employment verification, or certifications, which enable strong authentication. Using them for authentication reduces the risk of identity theft while improving privacy. Modern authentication must allow users to register once and reuse their credentials seamlessly across services. This concept reduces redundant onboarding processes and minimizes the need for multiple authentication methods. 


The Pros and Cons of Becoming a Government CIO

Seeking a job as a government CIO offers a chance to make a real impact on the lives of citizens, says Aparna Achanta, security architect and leader at IBM Consulting -- Federal. CIOs typically lead a wide range of projects, such as upgrading systems in education, public safety, healthcare, and other areas that provide critical public services. "They [government CIOs] work on large-scale projects that benefit communities beyond profits, which can be very rewarding and impactful," Achanta observed in an online interview. "The job also gives you an opportunity for leadership growth and the chance to work with a wide range of departments and people." ... "Being a government CIO might mean dealing with slow processes and bureaucracy," Achanta says. "Most of the time, decisions take longer because they have to go through several layers of approval, which can delay projects.” Government CIOs face unique challenges, including budget constraints, a constantly evolving mission, and increased scrutiny from government leaders and the public. "Public servants must be adept at change management in order to be able to pivot and implement the priorities of their administration to the best of their ability," Tamburrino says. Government CIOs are often frustrated by a hierarchy that runs at a far slower pace than their enterprise counterparts.


Why work-life balance in cybersecurity must start with executive support

Watching your mental and physical health is critical. Setting boundaries is something that helps the entire team, not just as a cyber leader. One rule we have in my team is that we do not use work chat after business hours unless there are critical events. Everyone needs a break and sometimes hearing a text or chat notification can create undue stress. Another critical aspect of being a cybersecurity professional is to hold to your integrity. People often do not like the fact that we have to monitor, report, and investigate systems and human behavior. When we get pushback for this with unprofessional behavior or defensiveness, it can often cause great personal stress. ... Executive leadership plays one of the most critical roles in supporting the CISO. Without executive level support, we would be crushed by the demands and the frequent conflicts of interest we experience. For example, project managers, CIOs, and other IT leadership roles might prioritize budget, cost, timelines, or other needs above security. A security professional prioritizes people (safety) and security above cost or timelines. The nature of our roles requires executive leadership support to balance the security and privacy risk (and what is acceptable to an executive). I think in several instances the executive board and CEOs understand this, but we are still a growing profession and there needs to be more education in this area.


Building Trust in Synthetic Media Through Responsible AI Governance

Relying solely on labeling tools faces multiple operational challenges. First, labeling tools often lack accuracy. This creates a paradox: inaccurate labels may legitimize harmful media, while unlabelled content may appear trustworthy. Moreover, users may not view basic AI edits, such as color correction, as manipulation, while opinions differ on changes like facial adjustments or filters. It remains unclear whether simple colour changes require a label, or if labeling should only occur when media is substantively altered or generated using AI. Similarly, many synthetic media artifacts may not fit the standard definition of pornography, such as images showing white substances on a person’s face; however, they can often be humiliating. ... Second, synthetic media use cases exist on a spectrum, and the presence of mixed AI- and human-generated content adds complexity and uncertainty in moderation strategies. For example, when moderating human-generated media, social media platforms only need to identify and remove harmful material. In the case of synthetic media, it is often necessary to first determine whether the content is AI-generated and then assess its potential harm. This added complexity may lead platforms to adopt overly cautious approaches to avoid liability. These challenges can undermine the effectiveness of labeling.


How future-ready leadership can power business value

Leadership in 2025 requires more than expertise; it demands adaptability, compassion, and tech fluency. “Leadership today isn’t about having all the answers; it’s about creating an environment where teams can sense, interpret, and act with speed, autonomy, and purpose,” said Govind. As the learning journey of Conduent pivots from stabilization to growth, he shared that the leaders need to do two key things in the current scenario: be human-centric and be digitally fluent. Similarly, Srilatha highlighted a fundamental shift happening among the leaders: “Leaders today must lead with both compassion and courage while taking tough decisions with kindness.” She also underlined the rising importance of the three Rs in modern leadership: Reskilling, resilience, and rethinking. ... Govind pointed to something deceptively simple: acting on feedback. “We didn’t just collect feedback, we analyzed sentiment, made changes, and closed the loop. That made stakeholders feel heard.” This approach led Conduent to experiment with program duration, where they went from 12 to 8 to 6 months.’ “Learning is a continuum, not a one-off event,” Govind added. ... Leadership development is no longer optional or one-size-fits-all. It’s a business imperative—designed around human needs and powered by digital fluency.


The CISO’s 5-step guide to securing AI operations

As AI applications extend to third parties, CISOs will need tailored audits of third-party data, AI security controls, supply chain security, and so on. Security leaders must also pay attention to emerging and often changing AI regulations. The EU AI Act is the most comprehensive to date, emphasizing safety, transparency, non-discrimination, and environmental friendliness. Others, such as the Colorado Artificial Intelligence Act (CAIA), may change rapidly as consumer reaction, enterprise experience, and legal case law evolves. CISOs should anticipate other state, federal, regional, and industry regulations. ... Established secure software development lifecycles should be amended to cover things such as AI threat modeling, data handling, API security, etc. ... End user training should include acceptable use, data handling, misinformation, and deepfake training. Human risk management (HRM) solutions from vendors such as Mimecast may be necessary to keep up with AI threats and customize training to different individuals and roles. ... Simultaneously, security leaders should schedule roadmap meetings with leading security technology partners. Come to these meetings prepared to discuss specific needs rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should also ask vendors directly about how AI will be used for existing technology tuning and optimization. 


State of Open Source Report Reveals Low Confidence in Big Data Management

"Many organizations know what data they are looking for and how they want to process it but lack the in-house expertise to manage the platform itself," said Matthew Weier O'Phinney, Principal Product Manager at Perforce OpenLogic. "This leads to some moving to commercial Big Data solutions, but those that can't afford that option may be forced to rely on less-experienced engineers. In which case, issues with data privacy, inability to scale, and cost overruns could materialize." ... EOL operating system, CentOS Linux, showed surprisingly high usage, with 40% of large enterprises still using it in production. While CentOS usage declined in Europe and North America in the past year, it is still the third most used Linux distribution overall (behind Ubuntu and Debian), and the top distribution in Asia. For teams deploying EOL CentOS, 83% cited security and compliance as their biggest concern around their deployments. ... "Open source is the engine driving innovation in Big Data, AI, and beyond—but adoption alone isn't enough," said Gael Blondelle, Chief Membership Officer of the Eclipse Foundation. "To unlock its full potential, organizations need to invest in their people, establish the right processes, and actively contribute to the long-term sustainability and growth of the technologies they depend on."


Cybercrime goes corporate: A trillion-dollar industry undermining global security

The CaaS market is a booming economy in the shadows, driving annual revenues into billions. While precise figures are elusive due to its illicit nature, reports suggest it's a substantial and growing market. CaaS contributes significantly, and the broader cybersecurity services market is projected to reach hundreds of billions of dollars in the coming years. If measured as a country, cybercrime would already be the world's third-largest economy, with projected annual damages reaching USD 10.5 trillion by 2025, as per some cybersecurity ventures. This growth is fueled by the same principles that drive legitimate businesses: specialisation, efficiency, and accessibility. CaaS platforms function much like dark online marketplaces. They offer pre-made hacking kits, phishing templates, and even access to already compromised computer networks. These services significantly lower the entry barrier for aspiring criminals. ... Enterprises must recognise that attackers often hit multiple systems simultaneously—computers, user identities, and cloud environments. This creates significant "noise" if security tools operate in isolation. Relying on many disparate security products makes it difficult to gain a holistic view and understand that seemingly separate incidents are often part of a single, coordinated attack.


Modern apps broke observability. Here’s how we fix it.

For developers, figuring out where things went wrong is difficult. In a survey looking at the biggest challenges to observability, 58% of developers said that identifying blind spots is a top concern. Stack traces may help, but they rarely provide enough context to diagnose issues quickly; developers chase down screenshots, reproduce problems, and piece together clues manually using the metric and log data from APM tools; a bug that could take 30 minutes to fix ends up consuming days or weeks. Meanwhile, telemetry data accumulates in massive volumes—expensive to store and hard to interpret. Without tools to turn data into insight, you’re left with three problems: high bills, burnout, and time wasted fixing bugs—bugs that don’t have a major impact on core business functions or drive revenue when increasing developer efficiency is a top strategic goal at organizations. ... More than anything, we need a cultural change. Observability must be built into products from the start. That means thinking early about how we’ll track adoption, usage, and outcomes—not just deliver features. Too often, teams ship functionality only to find no one is using it. Observability should show whether users ever saw the feature, where they dropped off, or what got in the way. That kind of visibility doesn’t come from backend logs alone.

Daily Tech Digest - June 23, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The 10 biggest issues IT faces today

“The AI explosion and how quickly it has come upon us is the top issue for me,” says Mark Sherwood, executive vice president and CIO of Wolters Kluwer, a global professional services and software firm. “In my experience, AI has changed and progressed faster than anything I’ve ever seen.” To keep up with that rapid evolution, Sherwood says he is focused on making innovation part of everyday work for his engineering team. ... “Modern digital platforms generate staggering volumes of telemetry, logs, and metrics across an increasingly complex and distributed architecture. Without intelligent systems, IT teams drown in alert fatigue or miss critical signals amid the noise,” he explains. “What was once a manageable rules-based monitoring challenge has evolved into a big data and machine learning problem.” He continues, saying, “This shift requires IT organizations to rethink how they ingest, manage, and act upon operational data. It’s not just about observability; it’s about interpretability and actionability at scale. ... CIOs today are also paying closer attention to geopolitical news and determining what it means for them, their IT departments, and their organizations. “These are uncertain times geopolitically, and CIOs are asking how that will affect IT portfolios and budgets and initiatives,” Squeo says.


Clouded judgement: Resilience, risk and the rise of repatriation

While the findings reflect growing concern, they also highlight a strategic shift, with 78% of leaders now considering digital sovereignty when selecting tech partners, and 68% saying they will only adopt AI services where they have full certainty over data ownership. For some, the answer is to take back control. Cloud repatriation is gaining some traction, at least in terms of mindset, but as yet, this is not translating into a mass exodus from the hyperscalers. And yet, calls for digital sovereignty are getting louder. In Europe, the Euro-Stack open letter has reignited the debate, urging policymakers to champion a competitive, sovereign digital infrastructure. But while politics might be a trigger, the key question is not whether businesses are abandoning cloud (most aren’t) but whether the balance of cloud usage is changing, driven as much by cost as performance needs and rising regulatory risks. ... “Despite access to cloud cost-optimisation teams, there was limited room to reduce expenses,” says Jonny Huxtable, CEO of LinkPool. After assessing bare-metal and colocation options, LinkPool decided to move fully to Pulsant’s colocation service. The company claims the move achieved a 90% to 95% cost reduction alongside major performance improvements and enhanced disaster recovery capabilities.


Cookie management under the Digital Personal Data Protection Act, 2023

Effective cookie management under the DPDP Act, as detailed in the BRDCMS, requires real time updates to user preferences. Users must have access to a dedicated cookie preferences interface that allows them to modify or revoke their consent without undue complexity or delay. This interface should be easily accessible, typically through privacy settings or a dedicated cookie management dashboard. The real-time nature of these updates is crucial for maintaining compliance with the principles of consent as enshrined under the DPDP Act. When a user withdraws consent for specific cookie categories, the system must immediately cease the collection and processing of data through those cookies, ensuring that the user’s privacy preferences are respected without delay. Transparency is one of the fundamental pillars of the DPDP Act and extends to cookie usage disclosure. While the DPDP Act itself remains silent on specific cookie policies, the BRDCMS mandates the provision of a clear and accessible cookie policy. Organisations must provide clear and accessible cookie policies which outline the purposes of cookie usage, the data sharing practices and the implications of different consent choices. The cookie policy serves as a comprehensive resource enabling users to make informed decisions of their consent preferences. 


AI agents win over professionals - but only to do their grunt work, Stanford study finds

According to the report, the majority of workers are ready to embrace agents for the automation of low-stakes and repetitive tasks, "even after reflecting on potential job loss concerns and work enjoyment." Respondents said they hoped to focus on more engaging and important tasks, mirroring what's become something of a marketing mantra among big tech companies pushing AI agents: that these systems will free workers and businesses from drudgery, so they can focus on more meaningful work. The authors also noted "critical mismatches" between the tasks that AI agents are being deployed to handle -- such as software development and business analysis -- and the tasks that workers are actually looking to automate. ... The study could have big implications for the future of human-AI collaboration in the workplace. Using a metric that they call the Human Agency Scale (HAS), the authors found "that workers generally prefer higher levels of human agency than what experts deem technologically necessary." ... The report further showed that the rise of AI automation is causing a shift in the human skills that are most valued in the workplace: information-processing and analysis skills, the authors said, are becoming less valuable as machines become increasingly competent in these domains, while interpersonal skills -- including "assisting and caring for others" -- is more important than ever.


New OLTP: Postgres With Separate Compute and Storage

The traditional methods for integrating databases are complex and not suited to AI, Xin said. The challenge lies in integrating analytics and AI with transactional workloads. Consider what developers would do when adding a feature to a code base, Xin said in his keynote address at the Data + AI Summit. They’d create a new branch of the codebase and make changes to the new branch. They’d use that branch to check bugs, perform testing and so on. Xin said creating a new branch is an instant operation. What’s the equivalent for databases? You only clone your production databases. It might take days. How do you set up secure networking? How do you create ETL pipelines and log data from one to another? ... Streaming is now a first-class citizen in the enterprise, Mohan told me. The separation of compute and storage makes a difference. We are approaching an era when applications will scale infinitely, both in terms of the number of instances and their scale-out capabilities. And that leads us to new questions about how we start to think about evaluation, observability and semantics. Accuracy matters. ... ADP may have the world’s best payroll data, Mohan said, but then that data has to be processed through ETL into an analytics solution like Databricks. Then comes the analytics and the data science work. The customer has to perform a significant amount of data engineering work and preparation.


Can AI Save Us from AI? The High-Stakes Race in Cybersecurity

Reluctant executives and budget hawks can shoulder some of the responsibility for slow AI adoption, but they’re hardly the only barriers. Increasingly, employees are voicing legitimate concerns about surveillance, privacy and the long-term impact of automation on job security. At the same time, enterprises may face structural issues when it comes to integration: fragmented systems, a lack of data inventory and access controls, and other legacy architectures can also hinder the secure integration and scalability of AI-driven security solutions. Meanwhile, bad actors face none of these considerations. They have immediate, unfettered access to open-source AI tools, which can enhance the speed and force of an attack. They operate without AI tool guardrails, governance, oversight or ethical constraints. ... Insider threat detection is also maturing. AI models can detect suspicious behavior, such as unusual access to data, privilege changes or timing inconsistencies, that may indicate a compromised account or insider threat. Early adopters, such as financial institutions, are using behavioral AI to flag synthetic identities by spotting subtle deviations that traditional tools often lack. They can also monitor behavioral intent signals, such as a worker researching resignation policies before initiating mass file downloads, providing early warnings of potential data exfiltration.


The complexities of satellite compute

“In cellular communications on the ground, this was solved a few decades ago. But doing it in space, you have to have the computing horsepower to do those handoffs as well as the throughput capability.” This additional compute needs to be in "a radiation tolerant form, and in such a way that they don't consume too much power and generate too much heat to cause massive thermal problems on the satellites." In LEO, satellites face a barrage of radiation. "It's an environment that's very rich in protons," O'Neill says. "And protons can cause upsets in configuration registers, they can even cause latch-ups in certain integrated circuits." The need to be more radiation tolerant has also pushed the industry towards newer hardware as, the smaller the process node, the lower the operating voltage. "Reducing operating voltage makes you less susceptible to destructive effects," O'Neill explains. One issue, a single event latch up, sees the satellite conduct a lot of current from power to ground through the integrated circuit, potentially frying it. ... Modern integrated circuits are a lot less susceptible to these single-event latch-ups, but are not completely immune. "While the core of the circuit may be operating at a very low voltage, 0.7 or 0.8 volts, you still have I/O circuits in the integrated circuit that may be required to interoperate with other ICs at 3.3 volts or 2.5 volts," O'Neill adds.


How CISOs can justify security investments in financial terms

A common challenge we see is the absence of a formal ERM program, or the fragmentation of risk functions, where enterprise, cybersecurity, and third-party risks are evaluated using different impact criteria. This lack of alignment makes it difficult for CISOs to communicate effectively with the C-suite and board. Standardizing risk programs and using consistent impact criteria enables clearer risk comparisons, shared understanding, and more strategic decision-making. This challenge is further exacerbated by the rise of AI-specific regulations and frameworks, including the NIST AI Risk Management Framework, the EU AI Act, the NYC Bias Audit Law, and the Colorado Artificial Intelligence Act. ... Communicating security investments in clear, business-aligned risk terms—such as High, Medium, or Low—using agreed-upon impact criteria like financial exposure, operational disruption, reputational harm, and customer impact makes it significantly easier to justify spending and align with enterprise priorities. ... In our Virtual CISO engagements, we’ve found that a risk-based, outcome-driven approach is highly effective with executive leadership. We frame cyber risk tolerance in financial and operational terms, quantify the business value of proposed investments, and tie security initiatives directly to strategic objectives. 


From fear to fluency: Why empathy is the missing ingredient in AI rollouts

In the past, teams had time to adapt to new technologies. Operating systems or enterprise resource planning (ERP) tools evolved over years, giving users more room to learn these platforms and acquire the skills to use them. Unlike previous tech shifts, this one with AI doesn’t come with a long runway. Change arrives overnight, and expectations follow just as fast. Many employees feel like they’re being asked to keep pace with systems they haven’t had time to learn, let alone trust. A recent example would be ChatGPT reaching 100 million monthly active users just two months after launch. ... This underlines the emotional and behavioral complexity of adoption. Some people are naturally curious and quick to experiment with new technology while others are skeptical, risk-averse or anxious about job security. ... Adopting AI is not just a technical initiative, it’s a cultural reset, one that challenges leaders to show up with more empathy and not just expertise. Success depends on how well leaders can inspire trust and empathy across their organizations. The 4 E’s of adoption offer more than a framework. They reflect a leadership mindset rooted in inclusion, clarity and care. By embedding empathy into structure and using metrics to illuminate progress rather than pressure outcomes, teams become more adaptable and resilient.


Why networks need AIOps and predictive analytics

Predictive Analytics – a key capability of AIOps – forecasts future network performance and problems, enabling early intervention and proactive maintenance. Further, early prediction of bottlenecks or additional requirements helps to optimise the management of network resources. For example, when organisations have advance warning about traffic surges, they can allocate capacity to prevent congestion and outages, and enhance overall network performance. A range of mundane tasks, from incident response to work order generation to network configuration to proactive IT health checks and maintenance scheduling, can be automated with AIOps to reduce the load on IT staff and free them up to concentrate on more strategic activities. ... When traditional monitoring tools were unable to identify bottlenecks in a healthcare provider’s network that was seeing a slowdown in its electronic health records (EHR) system during busy hours, a switch to AIOps resolved the problem. By enabling observability across domains, the system highlighted that performance dipped when users logged in during shift changes. It also predicted slowdowns half an hour in advance and automatically provisioned additional resources to handle the surge in activity. The result was a 70 percent reduction in the most important EHR slowdowns, improvement in system responsiveness, and freeing up of IT human resources.