Daily Tech Digest - July 21, 2025


Quote for the day:

"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson


Is AI here to take or redefine your cybersecurity role?

Unlike Thibodeaux, Watson believes the level-one SOC analyst role “is going to be eradicated” by AI eventually. But he agrees with Thibodeaux that AI will move the table stakes forward on the skills needed to land a starter job in cyber. “The thing that will be cannibalized first is the sort of entry-level basic repeatable tasks, the things that people traditionally might have cut their teeth on in order to sort of progress to the next level. Therefore, the skill requirement to get a role in cybersecurity will be higher than what it has been traditionally,” says Watson. To help cyber professionals attain AI skills, CompTIA is developing a new certification program called SecAI. The course will target cyber people who already have three to four years of experience in a core cybersecurity job. The curriculum will include practical AI skills to proactively combat emerging cyber threats, integrating AI into security operations, defending against AI-driven attacks, and compliance for AI ethics and governance standards. ... As artificial intelligence takes over a rising number of technical cybersecurity tasks, Watson says one of the best ways security workers can boost their employment value is by sharpening their human skills like business literacy and communication: “The role is shifting to be one of partnering and advising because a lot of the technology is doing the monitoring, triaging, quarantining and so on.”


5 tips for building foundation models for AI

"We have to be mindful that, when it comes to training these models, we're doing it purposefully, because you can waste a lot of cycles on the exercise of learning," he said. "The execution of these models takes far less energy and resources than the actual training." OS usually feeds training data to its models in chunks. "Building up the label data takes quite a lot of time," he said. "You have to curate data across the country with a wide variety of classes that you're trying to learn from, so a different mix between urban and rural, and more." The organisation first builds a small model that uses several hundred examples. This approach helps to constrain costs and ensures OS is headed in the right direction. "Then we slowly build up that labelled set," Jethwa said. "I think we're now into the hundreds of thousands of labelled examples. Typically, these models are trained with millions of labelled datasets." While the organization's models are smaller, the results are impressive. "We're already outperforming the existing models that are out there from the large providers because those models are trained on a wider variety of images," he said. "The models might solve a wider variety of problems, but, for our specific domain, we outperform those models, even at a smaller scale."


Reduce, re-use, be frugal with AI and data

By being more selective with the data included in language models, businesses can better control their carbon emissions, limiting energy to be spent on the most important resources. In healthcare, for example, separating the most up-to-date medical information and guidance from the rest of the information on that topic will mean safer, more reliable and faster responses to patient treatment. ... Frugal AI means adopting an intelligent approach to data that focuses on using the most valuable information only. When businesses have a greater understanding of their data, how to label it, identify it and which teams are responsible for its deletion, then the storage of single use data can be significantly reduced. Only then can frugal AI systems be put in place, allowing businesses to adopt a resource aware and efficient approach to both their data consumption and AI usage. It’s important to stress here though that frugal AI doesn’t mean that the end results are lesser or of a reduced impact of technology, it means that the data that goes into AI is concentrated, smaller but just as impactful. Think of it like making a drink with extra concentrated squash. Frugal AI is that extra concentrate squash that puts data efficiency, consideration and strategy at the centre of an organisation’s AI ambitions.


Cyber turbulence ahead as airlines strap in for a security crisis

Although organizations have acknowledged the need to boost spending, progress remains to be made and new measures adopted. Legacy OT systems, which often lack security features such as automated patching and built-in encryption, should be addressed as a top priority. Although upgrading these systems can be costly, it is essential to prevent further disruptions and vulnerabilities. Mapping the aviation supply chain helps identify all key partners, which is important for conducting security audits and enforcing contractual cybersecurity requirements. This should be reinforced with multi-layered perimeter defenses, including encryption, firewalls, and intrusion detection systems, alongside zero-trust network segmentation to minimize the risk of attackers moving laterally within networks. Companies should implement real-time threat monitoring and response by deploying intrusion detection systems, centralizing analysis with SIEM, and maintaining a regularly tested incident response plan to identify, contain, and mitigate cyberattacks. ... One of the most important steps is to train all staff, including pilots and ground crews, to recognize scams. Since recent security breaches have mostly relied on social engineering tactics, this type of training is essential. A single phone call or a convincing email can be enough to trigger a data breach. 


What Does It Mean to Be Data-Driven?

A data-driven organization understands the value of its data and the best ways to capitalize on that value. Its data assets are aligned with its goals and the processes in place to achieve those goals. Protecting the company’s data assets requires incorporating governance practices to ensure managers and employees abide by privacy, security, and integrity guidelines. In addition to proper data governance, the challenges to implementing a data-driven infrastructure for business processes are data quality and integrity, data integration, talent acquisition, and change management. ... To ensure the success of their increasingly critical data initiatives, organizations look to the characteristics that led to effective adoption of data-driven programs at other companies. Management services firm KPMG identifies four key characteristics of successful data-driven initiatives: leadership involvement, investments in digital literacy, seamless access to data assets, and promotion and monitoring. ... While data-as-a-service (DaaS) emphasizes the sale of external data, data as a product (DaaP) considers all of a company’s data and the mechanisms in place for moving and storing the data as a product that internal operations rely on. The data team becomes a “vendor” serving “customers” throughout the organization.


AI Needs a Firewall and Cloud Needs a Rethink

Hyperscalers dominate most of enterprise IT today, and few are willing to challenge the status quo of cloud economics, artificial intelligence infrastructure and cybersecurity architectures. But Tom Leighton, co-founder and CEO of Akamai, does just that. He argues that the cloud has become bloated, expensive and overly centralized. The internet needs a new kind of infrastructure that is distributed, secure by design and optimized for performance at the edge, Leighton told Information Security Media Group. From edge-native AI inference and API security to the world's first firewall for artificial intelligence, Akamai is no longer just delivering content - it's redesigning the future. ... Among the most notable developments Leighton discussed was a new product category: an AI firewall. "People are training models on sensitive data and then exposing them to the public. That creates a new attack surface," Leighton said. "AI hallucinates. You never know what it's going to do. And the bad guys have figured out how to trick models into leaking data or doing bad things." Akamai's AI firewall monitors prompts and responses to prevent malicious prompts from manipulating the model and to avoid leaking sensitive data. "It can be implemented on-premises, in the cloud or within Akamai's platform, providing flexibility based on customer preference. 


Human and machine: Rediscovering our humanity in the age of AI

In an era defined by the rapid advancement of AI, machines are increasingly capable of tasks once considered uniquely human. ... Ethical decision-making, relationship building and empathy have been identified as the most valuable, both in our present reality and in the AI-driven future. ... As we navigate this era of AI, we must remember that technology is a tool, not a replacement for humanity. By embracing our capacity for creativity, connection and empathy, we can ensure that AI serves to enhance our humanity, not diminish it. This means accepting that preserving our humanness sometimes requires assistance. It means investing in education and training that fosters critical thinking, problem-solving and emotional intelligence. It means creating workplaces that value human connection and collaboration, where employees feel supported and empowered to bring their whole selves to work. And it means fostering a culture that celebrates creativity, innovation and the pursuit of knowledge. At a time when seven out of every ten companies are already using AI in at least one business function, let us embrace the challenge of this new era with both optimism and intentionality. Let us use AI to build a better future for ourselves and for generations to come – a future where technology serves humanity, and where every individual has the opportunity to thrive.


‘Interoperable but not identical’: applying ID standards across diverse communities

Exchanging knowledge and experiences with identity systems to improve future ID projects is central to the concept of ID4Africa’s mission. At this year’s ID4Africa AGM in Addis Ababa, Ethiopia, a tension was more evident than ever before between the quest for transferable insights and replicable successes and the uniqueness of each African nation. Thales Cybersecurity and Digital Identity Field Marketing Director for the Middle East and Africa Jean Lindner wrote in an emailed response to questions from Biometric Update following the event that the mix of attendees reflected that “every African country has its own diverse history or development maturity and therefore unique legacy identity systems, with different constraints. Let us recognize here there is no unique quick-fix to country-specific hurdles,” he says. The lessons of one country can only benefit another to the extent that common ground is identified. The development of the concept of digital public infrastructure has mapped out some common ground, but standards and collaborative organizations have a major role to play. Unfortunately, Stéphanie de Labriolle, executive director services at the Secure Identity Alliance says “the widespread lack of clarity around standards and what compliance truly entails” was striking at this year’s ID4Africa AGM.


The Race to Shut Hackers out of IoT Networks

Considered among the weakest links in enterprise networks, IoT devices are used across industries to perform critical tasks at a rapid rate. An estimated 57% of deployed units "are susceptible to medium- or high-severity attacks," according to research from security vendor Palo Alto Networks. IoT units are inherently vulnerable to security attacks, and enterprises are typically responsible for protecting against threats. Additionally, the IoT industry hasn't settled on standardized security, as time to market is sometimes a priority over standards. ... 3GPP developed RedCap to provide a viable option for enterprises seeking a higher-performance, feature-rich 5G alternative to traditional IoT connectivity options such as low-power WANs (LPWANs). LPWANs are traditionally used to transmit limited data over low-speed cellular links at a low cost. In contrast, RedCap offers moderate bandwidth and enhanced features for more demanding use cases, such as video surveillance cameras, industrial control systems in manufacturing and smart building infrastructure. ... From a security standpoint, RedCap inherits strong capabilities in 5G, such as authentication, encryption and integrity protection. It can also be supplemented at application and device levels for a multilayered security approach.


Architecting the MVP in the Age of AI

A key aspect of architecting an MVP is forming and testing hypotheses about how the system will meet its QARs. Understanding and prioritizing these QARs is not an easy task, especially for teams without a lot of architecture experience. AI can help when teams provide context by describing the QARs that the system must satisfy in a prompt and asking the LLM to suggest related requirements. The LLM may suggest additional QARs that the team may have overlooked. For example, if performance, security, and usability are the top 3 QARs that a team is considering, an LLM may suggest looking at scalability and resilience as well. This can be especially helpful for people who are new to software architecture. ... Sometimes validating the AI’s results may require more skills than would be required to create the solution from scratch, just as is sometimes the case when seeing someone else’s code and realizing that it’s better than what you would have developed on your own. This can be an effective way to improve developers’ skills, provided that the code is good. AI can also help you find and fix bugs in your code that you may miss. Beyond simple code inspection, experimentation provides a means of validating the results produced by AI. In fact, experimentation is the only real way to validate it, as some researchers have discovered.

Daily Tech Digest - July 20, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


Lean Agents: The Agile Workforce of Agentic AI

Organizations are tired of gold‑plated mega systems that promise everything and deliver chaos. Enter frameworks like AutoGen and LangGraph, alongside protocols such as MCP; all enabling Lean Agents to be spun up on-demand, plug into APIs, execute a defined task, then quietly retire. This is a radical departure from heavyweight models that stay online indefinitely, consuming compute cycles, budget, and attention. ... Lean Agents are purpose-built AI workers; minimal in design, maximally efficient in function. Think of them as stateless or scoped-memory micro-agents: they wake when triggered, perform a discrete task like summarizing an RFP clause or flagging anomalies in payments and then gracefully exit, freeing resources and eliminating runtime drag. Lean Agents are to AI what Lambda functions are to code: ephemeral, single-purpose, and cloud-native. They may hold just enough context to operate reliably but otherwise avoid persistent state that bloats memory and complicates governance. ... From technology standpoint, combined with the emerging Model‑Context Protocol (MCP) give engineering teams the scaffolding to create discoverable, policy‑aware agent meshes. Lean Agents transform AI from a monolithic “brain in the cloud” into an elastic workforce that can be budgeted, secured, and reasoned about like any other microservice.


Cloud Repatriation Is Harder Than You Think

Repatriation is not simply a reverse lift-and-shift process. Workloads that have developed in the cloud often have specific architectural dependencies that are not present in on-premises environments. These dependencies can include managed services like identity providers, autoscaling groups, proprietary storage solutions, and serverless components. As a result, moving a workload back on-premises typically requires substantial refactoring and a thorough risk assessment. Untangling these complex layers is more than just a migration; it represents a structural transformation. If the service expectations are not met, repatriated applications may experience poor performance or even fail completely. ... You cannot migrate what you cannot see. Accurate workload planning relies on complete visibility, which includes not only documented assets but also shadow infrastructure, dynamic service relationships, and internal east-west traffic flows. Static tools such as CMDBs or Visio diagrams often fall out of date quickly and fail to capture real-time behavior. These gaps create blind spots during the repatriation process. Application dependency mapping addresses this issue by illustrating how systems truly interact at both the network and application layers. Without this mapping, teams risk disrupting critical connections that may not be evident on paper.


AI Agents Are Creating a New Security Nightmare for Enterprises and Startups

The agentic AI landscape is still in its nascent stages, making it the opportune moment for engineering leaders to establish robust foundational infrastructure. While the technology is rapidly evolving, the core patterns for governance are familiar: Proxies, gateways, policies, and monitoring. Organizations should begin by gaining visibility into where agents are already running autonomously — chatbots, data summarizers, background jobs — and add basic logging. Even simple logs like “Agent X called API Y” are better than nothing. Routing agent traffic through existing proxies or gateways in a reverse mode can eliminate immediate blind spots. Implementing hard limits on timeouts, max retries, and API budgets can prevent runaway costs. While commercial AI gateway solutions are emerging, such as Lunar.dev, teams can start by repurposing existing tools like Envoy, HAProxy, or simple wrappers around LLM APIs to control and observe traffic. Some teams have built minimal “LLM proxies” in days, adding logging, kill switches, and rate limits. Concurrently, defining organization-wide AI policies — such as restricting access to sensitive data or requiring human review for regulated outputs — is crucial, with these policies enforced through the gateway and developer training.


The Evolution of Software Testing in 2025: A Comprehensive Analysis

The testing community has evolved beyond the conventional shift-left and shift-right approaches to embrace what industry leaders term "shift-smart" testing. This holistic strategy recognizes that quality assurance must be embedded throughout the entire software development lifecycle, from initial design concepts through production monitoring and beyond. While shift-left testing continues to emphasize early validation during development phases, shift-right testing has gained equal prominence through its focus on observability, chaos engineering, and real-time production testing. ... Modern testing platforms now provide insights into how testing outcomes relate to user churn rates, release delays, and net promoter scores, enabling organizations to understand the direct business impact of their quality assurance investments. This data-driven approach transforms testing from a technical activity into a business-critical function with measurable value.Artificial intelligence platforms are revolutionizing test prioritization by predicting where failures are most likely to occur, allowing testing teams to focus their efforts on the highest-risk areas. ... Modern testers are increasingly taking on roles as quality coaches, working collaboratively with development teams to improve test design and ensure comprehensive coverage aligned with product vision. 


7 lessons I learned after switching from Google Drive to a home NAS

One of the first things I realized was that a NAS is only as fast as the network it’s sitting on. Even though my NAS had decent specs, file transfers felt sluggish over Wi-Fi. The new drives weren’t at fault, but my old router was proving to be a bottleneck. Once I wired things up and upgraded my router, the difference was night and day. Large files opened like they were local. So, if you’re expecting killer performance, make sure to look out for the network box, because it perhaps matters just as much  ... There was a random blackout at my place, and until then, I hadn’t hooked my NAS to a power backup system. As a result, the NAS shut off mid-transfer without warning. I couldn’t tell if I had just lost a bunch of files or if the hard drives had been damaged too — and that was a fair bit scary. I couldn’t let this happen again, so I decided to connect the NAS to an uninterruptible power supply unit (UPS).  ... I assumed that once I uploaded my files to Google Drive, they were safe. Google would do the tiring job of syncing, duplicating, and mirroring on some faraway data center. But in a self-hosted environment, you are the one responsible for all that. I had to put safety nets in place for possible instances where a drive fails or the NAS dies. My current strategy involves keeping some archived files on a portable SSD, a few important folders synced to the cloud, and some everyday folders on my laptop set up to sync two-way with my NAS.


5 key questions your developers should be asking about MCP

Despite all the hype about MCP, here’s the straight truth: It’s not a massive technical leap. MCP essentially “wraps” existing APIs in a way that’s understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP “isn’t that big a deal” is pretty fair. ... Remote deployment obviously addresses the scaling but opens up a can of worms around transport complexity. The original HTTP+SSE approach was replaced by a March 2025 streamable HTTP update, which tries to reduce complexity by putting everything through a single /messages endpoint. Even so, this isn’t really needed for most companies that are likely to build MCP servers. But here’s the thing: A few months later, support is spotty at best. Some clients still expect the old HTTP+SSE setup, while others work with the new approach — so, if you’re deploying today, you’re probably going to support both. Protocol detection and dual transport support are a must. ... However, the biggest security consideration with MCP is around tool execution itself. Many tools need broad permissions to be useful, which means sweeping scope design is inevitable. Even without a heavy-handed approach, your MCP server may access sensitive data or perform privileged operations


Firmware Vulnerabilities Continue to Plague Supply Chain

"The major problem is that the device market is highly competitive and the vendors [are] competing not only to the time-to-market, but also for the pricing advantages," Matrosov says. "In many instances, some device manufacturers have considered security as an unnecessary additional expense." The complexity of the supply chain is not the only challenge for the developers of firmware and motherboards, says Martin Smolár, a malware researcher with ESET. The complexity of the code is also a major issue, he says. "Few people realize that UEFI firmware is comparable in size and complexity to operating systems — it literally consists of millions of lines of code," he says. ... One practice that hampers security: Vendors will often try to only distribute security fixes under a non-disclosure agreement, leaving many laptop OEMs unaware of potential vulnerabilities in their code. That's the exact situation that left Gigabyte's motherboards with a vulnerable firmware version. Firmware vendor AMI fixed the issues years ago, but the issues have still not propagated out to all the motherboard OEMs. ... Yet, because firmware is always evolving as better and more modern hardware is integrated into motherboards, the toolset also need to be modernized, Cobalt's Ollmann says.


Beyond Pilots: Reinventing Enterprise Operating Models with AI

Historically, AI models required vast volumes of clean, labeled data, making insights slow and costly. Large language models (LLMs) have upended this model, pre-trained on billions of data points and able to synthesize organizational knowledge, market signals, and past decisions to support complex, high-stakes judgment. AI is becoming a powerful engine for revenue generation through hyper-personalization of products and services, dynamic pricing strategies that react to real-time market conditions, and the creation of entirely new service offerings. More significantly, AI is evolving from completing predefined tasks to actively co-creating superior customer experiences through sophisticated conversational commerce platforms and intelligent virtual agents that understand context, nuance, and intent in ways that dramatically enhance engagement and satisfaction. ... In R&D and product development, AI is revolutionizing operating models by enabling faster go-to-market cycles. AI can simulate countless design alternatives, optimize complex supply chains in real time, and co-develop product features based on deep analysis of customer feedback and market trends. These systems can draw from historical R&D successes and failures across industries, accelerating innovation by applying lessons learned from diverse contexts and domains.


Alternative clouds are on the rise

Alt clouds, in their various forms, represent a departure from the “one size fits all” mentality that initially propelled the public cloud explosion. These alternatives to the Big Three prioritize specificity, specialization, and often offer an advantage through locality, control, or workload focus. Private cloud, epitomized by offerings from VMware and others, has found renewed relevance in a world grappling with escalating cloud bills, data sovereignty requirements, and unpredictable performance from shared infrastructure. The old narrative that “everything will run in the public cloud eventually” is being steadily undermined as organizations rediscover the value of dedicated infrastructure, either on-premises or in hosted environments that behave, in almost every respect, like cloud-native services. ... What begins as cost optimization or risk mitigation can quickly become an administrative burden, soaking up engineering time and escalating management costs. Enterprises embracing heterogeneity have no choice but to invest in architects and engineers who are familiar not only with AWS, Azure, or Google, but also with VMware, CoreWeave, a sovereign European platform, or a local MSP’s dashboard. 


Making security and development co-owners of DevSecOps

In my view, DevSecOps should be structured as a shared responsibility model, with ownership but no silos. Security teams must lead from a governance and risk perspective, defining the strategy, standards, and controls. However, true success happens when development teams take ownership of implementing those controls as part of their normal workflow. In my career, especially while leading security operations across highly regulated industries, including finance, telecom, and energy, I’ve found this dual-ownership model most effective. ... However, automation without context becomes dangerous, especially closer to deployment. I’ve led SOC teams that had to intervene because automated security policies blocked deployments over non-exploitable vulnerabilities in third-party libraries. That’s a classic example where automation caused friction without adding value. So the balance is about maturity: automate where findings are high-confidence and easily fixable, but maintain oversight in phases where risk context matters, like release gates, production changes, or threat hunting. ... Tools are often dropped into pipelines without tuning or context, overwhelming developers with irrelevant findings. The result? Fatigue, resistance, and workarounds.

Daily Tech Digest - July 19, 2025


Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks


AI-Driven Threat Hunting: Catching Zero Day Exploits Before They Strike

Cybersecurity has come a long way from the days of simple virus scanners and static firewalls. Signature-based defenses were sufficient to detect known malware during the past era. Zero-day exploits operate as unpredictable threats that traditional security tools fail to detect. The technology sector saw Microsoft and Google rush to fix more than dozens of zero day vulnerabilities which attackers used in the wild during 2023. The consequences reach extreme levels because a single security breach results in major financial losses and immediate destruction of corporate reputation. AI functions as a protective measure that addresses weaknesses in human capabilities and outdated system limitations. The system analyzes enormous amounts of data from network traffic and timestamps and IP logs, and other inputs to detect security risks. ... So how does AI pull this off? It’s all about finding the weird stuff. Network traffic packets follow regular patterns, but zero-day exploits cause packet size fluctuations and timing irregularities. AI detects anomalies by comparing data against its knowledge base of typical behavior patterns. Autoencoders function as neural networks that learn to recreate data during operation. When an autoencoder fails to rebuild data, it automatically identifies the suspicious activity.


How AI is changing the GRC strategy

CISOs are in a tough spot because they have a dual mandate to increase productivity and leverage this powerful emerging technology, while still maintaining governance, risk and compliance obligations, according to Rich Marcus, CISO at AuditBoard. “They’re being asked to leverage AI or help accelerate the adoption of AI in organizations to achieve productivity gains. But don’t let it be something that kills the business if we do it wrong,” says Marcus. ... “The really important thing to be successful with managing AI risk is to approach the situation with a collaborative mindset and broadcast the message to folks that we’re all in it together and you’re not here to slow them down.” ... Ultimately, the task is for security leaders to apply a security lens to AI using governance and risk as part of the broader GRC framework in the organization. “A lot of organizations will have a chief risk officer or someone of that nature who owns the broader risk across the environment, but security should have a seat at the table,” Norton says. “These days, it’s no longer about CISOs saying ‘yes’ or ‘no’. It’s more about us providing visibility of the risks involved in doing certain things and then allowing the organization and the senior executives to make decisions around those risks.”


Three Invisible Hurdles to Innovation

Innovation changes internal power dynamics. The creation of a new line of business leads to a legacy line of business declining or, at an extreme, shutting down or being spun out. One part of the organization wins; another loses. Why would a department put forward or support a proposal that would put that department out of business or lead it to lose organizational influence? That means senior leaders might never see a proposal that’s good for the whole organization if it is bad for one part of the organization. ... While the natural language interface of OpenAI’s ChatGPT was easy the first time I used it, I wasn’t sure what to do with a large language model (LLM). First I tried to mimic a Google search, and then jumped in and tried to design a course from scratch. The lack of artfully constructed prompts on first-generation technology led to predictably disappointing results. For DALL-E, I tried to prove that AI couldn’t match the skills of my daughter, a skilled artist. Seeing mediocre results left me feeling smug, reaffirming my humanity. ... Social identity theory suggests that individuals often merge their personal identity with the offerings of the company at which they work. Ask them who they are, and they respond with what they do: “I’m a newspaper guy.” So imagine how Gilbert’s message landed with his employees who worked to produce a print newspaper every day.


Beyond Code Generation: How Asimov is Transforming Engineering Team Collaboration

The conventional wisdom around AI coding assistance has been misguided. Research shows that engineers spend only about 10% of their time writing code, while the remaining 70% is devoted to understanding existing systems, debugging issues, and collaborating with teammates on intricate problems. This reality exposes a significant gap in current AI tooling, which predominantly focuses on code generation rather than comprehension. “Engineers don’t spend most of their time writing code. They spend most of their time understanding code and collaborating with other teammates on hard problems,” explains the Reflection team. This insight drives Asimov’s unique approach to engineering productivity. ... As engineering teams grapple with increasingly complex systems and distributed architectures, tools like Asimov offer a glimpse into a future where AI serves as a genuine collaborative partner rather than just a code completion engine. By focusing on understanding and context rather than mere generation, Asimov addresses the actual pain points that slow down engineering teams. The tool is currently in early access, with Reflection AI selecting teams for initial deployment. 


Data Management Makes or Breaks AI Success for SLGs

“Many agencies start their AI journeys with a specific use case, something simple like a chatbot,” says John Whippen, regional vice president for U.S. public sector at Snowflake. “As they show the value of those individual use cases, they’ll attempt to make it more prevalent across an entire agency or department.” Especially in populous jurisdictions, readying data for large-scale AI initiatives can be challenging. Nevertheless, that initial data consolidation, governance and management are central to cross-agency AI deployments, according to Whippen and other industry experts. ... Most state agencies operate on a hybrid cloud model. Many of them work with multiple hyperscalers and likely will for the foreseeable future. This creates potential data fragmentation. However, where the data is stored is not necessarily as important as the ability to centralize how it is accessed, managed and manipulated. “Today, you can extract all of that data much more easily, from a user interface perspective, and manipulate it the way you want, then put it back into the system of record, and you don't need a data scientist for that,” says Mike Hurt, vice president of state and local government and education for ServiceNow. “It's not your grandmother's way of tagging anymore.”


The Role Of Empathy In Effective Leadership

To maintain good working relationships with others, you must be willing to understand their experiences and perspectives. As we all know, everyone sees the world through a different lens. Even if you don’t fully align with others’ worldviews, as a leader, you must create an environment where individuals feel heard and respected. ... Operate with perspective and cultivate inclusive practices. In a way, empathy is being able to see through the eyes of others. Many of the unspoken rules of the corporate world are based on the experience of white males in the workforce. Considering the countless other demographics in the modern workforce, most of these nuances or patterns are outdated, exclusionary, counterproductive, and even harmful to some people. Can you identify any unspoken rules you enforce or adhere to within your career? Sometimes, they are hard to spot right away. In my research as a DEI professional, I’ve encountered many unspoken cultural rules that don’t consider the perspective of diverse groups. ... Empathetic leaders create more harmonious workplaces and inspire their teams to perform better. Creating an atmosphere of acceptance and understanding sets the stage for healthier dynamics. In questioning the status quo, you root out any counterproductive trends in company culture that need addressing.


New Research on the Link Between Learning and Innovation

Cognitive neuroscience confirms what experienced leaders intuitively know: Our brains need structured breaks to turn experiences into actionable knowledge. Just as sleep helps consolidate daily experiences into long-term memory, structured reflection allows teams to integrate insights gained during exploration phases into strategies and plans. Without these deliberate rhythms, teams risk becoming overwhelmed by continual information intake—akin to endlessly inhaling without pausing to exhale—leading to confusion and burnout. By intentionally embedding reflective pauses within structured learning cycles, teams can harness their full innovative potential. ... You can think of a team’s learning activities as elements of a musical masterpiece. Just as great compositions—like Beethoven’s Fifth Symphony—skillfully balance moments of tension with moments of powerful resolution, effective team learning thrives on the structured interplay between building up and then releasing tension. Harmonious learning occurs when complementary activities, such as team reflection and external expert consultations, reinforce one another, creating moments of clarity and alignment. Conversely, dissonance arises when conflicting activities, like simultaneous experimentation and detailed planning, collide and cause confusion.


Optimizing Search Systems: Balancing Speed, Relevance, and Scalability

Efficiently managing geospatial search queries on Uber Eats is crucial, as users often seek outnearby restaurants or grocery stores. To achieve this, Uber Eats uses geo-sharding, a technique that ensures all relevant data for a specific location is stored within a single shard. This minimizes query overhead and eliminates inefficiencies caused by fetching and aggregating results from multiple shards. Additionally, geo sharding allows first-pass ranking to happen directly on data nodes, improving speed and accuracy. Uber Eats primarily employs two geo sharding techniques: latitude sharding and hex sharding. Latitude sharding divides the world into horizontal bands, with each band representing a distinct shard. Shard ranges are computed offline using Spark jobs, which first divide the map into thousands of narrow latitude stripes and then group adjacent stripes to create shards of roughly equal size. Documents falling on shard boundaries are indexed in both neighboring shards to prevent missing results. One key advantage of latitude sharding is its ability to distribute traffic efficiently across different time zones. Given that Uber Eats experiences peak activity following a "sun pattern" with high demand during the day and lower demand at night, this method helps prevent excessive load on specific shards. 


How to beat the odds in tech transformation

Creating an enterprise-wide technology solution requires defining a scope that’s ambitious and quickly actionable and has an underlying objective to keep your customers and organization on board throughout the project. ... Technology may seem even more autonomous, but tech transformations are not. They depend on the full engagement and alignment of people across your organization, starting with leadership. First, senior leaders need to be educated so they clearly understand not just the features of the new technology but more so the business benefits. This will motivate them to champion engagement and adoption throughout the organization. ... Even the best-planned journeys to new frontiers will run into unexpected challenges. For instance, while we had extensively planned for customer migration during our tech transformation, the effort required to make it go as quickly and smoothly as possible was greater than expected. After all, we provide mission-critical solutions, so customers didn’t simply want to know we had validated a new product. They wanted reassurance we had validated their specific use cases. In response, we doubled down on resources to give them enhanced confidence. As mentioned, we introduced a protocol of parallel systems, running the old and new simultaneously. 


Leadership vs. Management in Project Management: Walking the Tightrope Between Vision and Execution

At its core, management is about control. It’s the science of organising tasks, allocating resources, and ensuring deliverables meet specifications. Managers thrive on Gantt charts, risk matrices, and status reports. They’re the architects of order in a world prone to chaos.. It’s the science of organising tasks, allocating resources, and ensuring deliverables meet specifications. Managers thrive on Gantt charts, risk matrices, and status reports. They’re the architects of order in a world prone to chaos. Leadership, on the other hand, is about inspiration. It’s the art of painting a compelling vision, rallying teams around a shared purpose, and navigating uncertainty with grit. ... A project manager’s IQ might land them the job, but their EQ determines their success. Leadership in project management isn’t just about charisma—it’s about sensing unspoken tensions, motivating burnt-out teams, and navigating stakeholder egos. ... The debate between leadership and management is a false dichotomy. Like yin and yang, they’re interdependent forces. A project manager who only manages becomes a bureaucrat, obsessed with checkboxes but blind to the bigger picture. One who only leads becomes a dreamer, chasing visions without a roadmap. The future belongs to hybrids—those who can rally a team with a compelling vision and deliver a flawless product on deadline.

Daily Tech Digest - July 18, 2025


Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis




Machine unlearning gets a practical privacy upgrade

Machine unlearning, which refers to strategies for removing the influence of specific training data from a model, has emerged to fill the gap. But until now, most approaches have either been slow and costly or fast but lacking formal guarantees. A new framework called Efficient Unlearning with Privacy Guarantees (EUPG) tries to solve both problems at once. Developed by researchers at the Universitat Rovira i Virgili in Catalonia, EUPG offers a practical way to forget data in machine learning models with provable privacy protections and a lower computational cost. Rather than wait for a deletion request and then scramble to rework a model, EUPG starts by preparing the model for unlearning from the beginning. The idea is to first train on a version of the dataset that has been transformed using a formal privacy model, either k-anonymity or differential privacy. This “privacy-protected” model doesn’t memorize individual records, but still captures useful patterns. ... The researchers acknowledge that extending EUPG to large language models and other foundation models will require further work, especially given the scale of the data and the complexity of the architectures involved. They suggest that for such systems, it may be more practical to apply privacy models directly to the model parameters during training, rather than to the data beforehand.


Emerging Cloaking-as-a-Service Offerings are Changing Phishing Landscape

Cloaking-as-a-service offerings – increasingly powered by AI – are “quietly reshaping how phishing and fraud infrastructure operates, even if it hasn’t yet hit mainstream headlines,” SlashNext’s Research Team wrote Thursday. “In recent years, threat actors have begun leveraging the same advanced traffic-filtering tools once used in shady online advertising, using artificial intelligence and clever scripting to hide their malicious payloads from security scanners and show them only to intended victims.” ... The newer cloaking services offer advanced detection evasion techniques, such as JavaScript fingerprinting, device and network profiling, machine learning analysis and dynamic content swapping, and put them into user-friendly platforms that hackers and anyone else can subscribe to, SlashNext researchers wrote. “Cybercriminals are effectively treating their web infrastructure with the same sophistication as their malware or phishing emails, investing in AI-driven traffic filtering to protect their scams,” they wrote. “It’s an arms race where cloaking services help attackers control who sees what online, masking malicious activity and tailoring content per visitor in real time. This increases the effectiveness of phishing sites, fraudulent downloads, affiliate fraud schemes and spam campaigns, which can stay live longer and snare more victims before being detected.”


You’re Not Imagining It: AI Is Already Taking Tech Jobs

It’s difficult to pinpoint the exact motivation behind job cuts at any given company. The overall economic environment could also be a factor, marked by uncertainties heightened by President Donald Trump’s erratic tariff plans. Many companies also became bloated during the pandemic, and recent layoffs could still be trying to correct for overhiring. According to one report released earlier this month by the executive coaching firm Challenger, Gray and Christmas, AI may be more of a scapegoat than a true culprit for layoffs: Of more than 286,000 planned layoffs this year, only 20,000 were related to automation, and of those, only 75 were explicitly attributed to artificial intelligence, the firm found. Plus, it’s challenging to measure productivity gains caused by AI, said Stanford’s Chen, because while not every employee may have AI tools officially at their disposal at work, they do have unauthorized consumer versions that they may be using for their jobs. While the technology is beginning to take a toll on developers in the tech industry, it’s actually “modestly” created more demand for engineers outside of tech, said Chen. That’s because other sectors, like manufacturing, finance, and healthcare, are adopting AI tools for the first time, so they are adding engineers to their ranks in larger numbers than before, according to her research.


The architecture of culture: People strategy in the hospitality industry

Rewards and recognitions are the visible tip of the iceberg, but culture sits below the surface. And if there’s one thing that I’ve learned over the years, it’s that culture only sticks when it’s felt, not just said. Not once a year, but every single day. Hilton’s consistent recognition as a Great Place to Work® globally and in India stems from our unwavering support and commitment to helping people thrive, both personally and professionally. ... What has sustained our culture through this growth is a focus on the everyday. It is not big initiatives alone that shape how people feel at work, but the smaller, consistent actions that build trust over time. Whether it is how a team huddle is run, how feedback is received, or how farewells are handled, we treat each moment as an opportunity to reinforce care and connection. ... Equally vital is cultivating culturally agile, people-first leaders. South Asia’s diversity, across language, faith, generation, and socio-economic background, demands leadership that is both empathetic and inclusive. We’re working to embed this cultural intelligence across the employee journey, from hiring and onboarding to ongoing development and performance conversations, so that every team member feels genuinely seen and supported.


Capturing carbon - Is DAC a perfect match for data centers?

The commercialization of DAC, however, faces several significant challenges. One primary obstacle is navigating different compliance requirements across jurisdictions. Certification standards vary significantly between regions like Canada, the UK, and Europe, necessitating differing approaches in each jurisdiction. However, while requiring adjustments, Chadwick argues that these differences are not insurmountable and are merely part of the scaling process. Beyond regulatory and deployment concerns, achieving cost reductions is a significant challenge. DAC remains highly expensive, costing an average of $680 per ton to produce in 2024, according to Supercritical, a carbon removal marketplace. In comparison, Biochar has an average price of $165 per ton, and enhanced rock weathering has an average price of $310 per ton. In addition, the complexity of DAC means up-front costs are much higher than those of alternative forms of carbon removal. An average DAC unit comprises air-intake manifolds, absorption and desorption towers, liquid-handling tanks, and bespoke site-specific engineering. DAC also requires significant amounts of power to operate. Recent studies have shown that the energy consumption of fans in DAC plants can range from 300 to 900 kWh per ton of CO2 captures, which represents between 20 - 40 percent of total DAC system energy usage.


Rethinking Risk: The Role of Selective Retrieval in Data Lake Strategies

Selective retrieval works because it bridges the gap between data engineering complexity and security usability. It gives teams options without asking them to reinvent the wheel. It also avoids the need to bring in external tools during a breach investigation, which can introduce latency, complexity, or worse, gaps in the chain of custody. What’s compelling about this approach is that it doesn’t require businesses to abandon existing tools or re-architect their infrastructure. ... This model is especially relevant for mid-size IT teams who want to cover their audit requirements, but don’t have a 24/7 security operations center. It’s also useful in regulated sectors such as healthcare, financial services, and manufacturing where data retention isn’t optional, but real-time analysis for everything isn’t practical. ... Data volumes are continuing to rise. As organizations face high costs and fatigue, those that thrive will be the ones that treat storage and retrieval as distinct functions. The ability to preserve signal without incurring ongoing noise costs will become a critical enabler for everything from insider threat detection to regulatory compliance. Selective retrieval isn’t just about saving money. It’s about regaining control over data sprawl, aligning IT resources with actual risk, and giving teams the tools they need to ask, and answer, better questions.


Manufactured Madness: How To Protect Yourself From Insane AIs

The core of the problem lies in a well-intentioned but flawed premise: that we can and should micromanage an AI’s output to prevent any undesirable outcomes. These “guardrails” are complex sets of rules and filters designed to stop the model from generating hateful, biased, dangerous, or factually incorrect information. In theory, this is a laudable goal. In practice, it has created a generation of AIs that prioritize avoiding offense over providing truth. ... Compounding the problem of forced outcomes is a crisis of quality. The data these models are trained on is becoming increasingly polluted. In the early days, models were trained on a vast, curated slice of the pre-AI internet. But now, as AI-generated content inundates every corner of the web, new models are being trained on the output of their predecessors. ... Given this landscape, the burden of intellectual safety now falls squarely on the user. We can no longer afford to treat AI-generated text with passive acceptance. We must become active, critical consumers of its output. Protecting yourself requires a new kind of digital literacy. First and foremost: Trust, but verify. Always. Never take a factual claim from an AI at face value. Whether it’s a historical date, a scientific fact, a legal citation, or a news summary, treat it as an unconfirmed rumor until you have checked it against a primary source.


6 Key Lessons for Businesses that Collect and Use Consumer Data

Ensure your privacy notice properly discloses consumer rights, including the right to access, correct, and delete personal data stored and collected by businesses, and the right to opt-out of the sale of personal data and targeted advertising. Mechanisms for exercising those rights must work properly, with a process in place to ensure a timely response to consumer requests. ... Another issue that the Connecticut AG raised was that the privacy notice was “largely unreadable.” While privacy notices address legal rights and obligations, you should avoid using excessive legal jargon to the extent possible and use clear, simple language to notify consumers about their rights and the mechanisms for exercising those rights. In addition, be as succinct as possible to help consumers locate the information they need to understand and exercise applicable rights. ... The AG provided guidance that under the CTDPA, if a business uses cookie banners to permit a consumer to opt-out of some data processing, such as targeted advertising, the consumer must be provided with a symmetrical choice. In other words, it has to be as clear and as easy for the consumer to opt out of such use of their personal data as it would be to opt in. This includes making the options to accept all cookies and to reject all cookies visible on the screen at the same time and in the same color, font, and size.


How agentic AI Is reshaping execution across BFSI

Several BFSI firms are already deploying agentic models within targeted areas of their operations. The results are visible in micro-interventions that improve process flow and reduce manual load. Autonomous financial advisors, powered by agentic logic, are now capable of not just reacting to user input, but proactively monitoring markets, assessing customer portfolios, and recommending real-time changes.. In parallel, agentic systems are transforming customer service by acting as intelligent finance assistants, guiding users through complex processes such as mortgage applications or claims filing. ... For Agentic AI to succeed, it must be integrated into operational strategy. This begins by identifying workflows where progress depends on repetitive human actions that follow predictable logic. These are often approval chains, verifications, task handoffs, and follow-ups. Once identified, clear rules need to be defined. What conditions trigger an action? When is escalation required? What qualifies as a closed loop? The strength of an agentic system lies in its ability to act with precision, but that depends on well-designed logic and relevant signals. Data access is equally important. Agentic AI systems require context. That means drawing from activity history, behavioural cues, workflow states and timing patterns. 


Open Source Is Too Important To Dilute

The unfortunate truth is that these criteria don’t apply in every use case. We’ve seen vendors build traction with a truly open project. Then, worried about monetization or competition, they relicense it under a “source-available” model with restrictions, like “no commercial use” or “only if you’re not a competitor.” But that’s not how open source works. Software today is deeply interconnected. Every project — no matter how small or isolated — relies on dependencies, which rely on other dependencies, all the way down the chain. A license that restricts one link in that chain can break the whole thing. ... Forks are how the OSS community defends itself. When HashiCorp relicensed Terraform under the Business Source License (BSL) — blocking competitors from building on the tooling — the community launched OpenTofu, a fork under an OSI-approved license, backed by major contributors and vendors. Redis’ transition away from Berkeley Software Distribution (BSD) to a proprietary license was a business decision. But it left a hole — and the community forked it. That fork became Valkey, a continuation of the project stewarded by the people and platforms who relied on it most. ... The open source brand took decades to build. It’s one of the most successful, trusted ideas in software history. But it’s only trustworthy because it means something.

Daily Tech Digest - July 17, 2025


Quote for the day:

"Accept responsibility for your life. Know that it is you who will get you where you want to go, no one else." -- Les Brown


AI That Thinks Like Us: New Model Predicts Human Decisions With Startling Accuracy

“We’ve created a tool that allows us to predict human behavior in any situation described in natural language – like a virtual laboratory,” says Marcel Binz, who is also the study’s lead author. Potential applications range from analyzing classic psychological experiments to simulating individual decision-making processes in clinical contexts – for example, in depression or anxiety disorders. The model opens up new perspectives in health research in particular – for example, by helping us understand how people with different psychological conditions make decisions. ... “We’re just getting started and already seeing enormous potential,” says institute director Eric Schulz. Ensuring that such systems remain transparent and controllable is key, Binz adds – for example, by using open, locally hosted models that safeguard full data sovereignty. ...  The researchers are convinced: “These models have the potential to fundamentally deepen our understanding of human cognition – provided we use them responsibly.” That this research is taking place at Helmholtz Munich rather than in the development departments of major tech companies is no coincidence. “We combine AI research with psychological theory – and with a clear ethical commitment,” says Binz. “In a public research environment, we have the freedom to pursue fundamental cognitive questions that are often not the focus in industry.” 


Collaboration is Key: How to Make Threat Intelligence Work for Your Organization

A challenge with joining threat intelligence sharing communities is that a lot of threat information is generated and needs to be shared daily. For already resource-stretched teams, it can be extra work to pull together, share a threat intelligence report, and filter through the incredible volumes of information. Particularly for smaller organizations, it can be a bit like drinking from a firehose. In this context, an advanced threat intelligence platform (TIP) can be invaluable. A TIP has the capabilities to collect, filter, and prioritize data, helping security teams to cut through the noise and act on threat intelligence faster. TIPs can also enrich the data with additional contexts, such as threat actor TTPs (tactics, techniques and procedures), indicators of compromise (IOCs), and potential impact, making it easier to understand and respond to threats. Furthermore, an advanced TIP can have the capability to automatically generate threat intelligence reports, ready to be securely shared within the organization’s threat intelligence sharing community Secure threat intelligence sharing reduces risk, accelerates response and builds resilience across entire ecosystems. If you’re not already part of a trusted intelligence-sharing community, it is time to join. And if you are, do contribute your own valuable threat information. In cybersecurity, we’re only as strong as our weakest link and our most silent partner.


Google study shows LLMs abandon correct answers under pressure, threatening multi-turn AI systems

The researchers first examined how the visibility of the LLM’s own answer affected its tendency to change its answer. They observed that when the model could see its initial answer, it showed a reduced tendency to switch, compared to when the answer was hidden. This finding points to a specific cognitive bias. As the paper notes, “This effect – the tendency to stick with one’s initial choice to a greater extent when that choice was visible (as opposed to hidden) during the contemplation of final choice – is closely related to a phenomenon described in the study of human decision making, a choice-supportive bias.” ... “This finding demonstrates that the answering LLM appropriately integrates the direction of advice to modulate its change of mind rate,” the researchers write. However, they also discovered that the model is overly sensitive to contrary information and performs too large of a confidence update as a result. ... Fortunately, as the study also shows, we can manipulate an LLM’s memory to mitigate these unwanted biases in ways that are not possible with humans. Developers building multi-turn conversational agents can implement strategies to manage the AI’s context. For example, a long conversation can be periodically summarized, with key facts and decisions presented neutrally and stripped of which agent made which choice.


Building stronger engineering teams with aligned autonomy

Autonomy in the absence of organizational alignment can cause teams to drift in different directions, build redundant or conflicting systems, or optimize for local success at the cost of overall coherence. Large organizations with multiple engineering teams can be especially prone to these kinds of dysfunction. The promise of aligned autonomy is that it resolves this tension. It offers “freedom within a framework,” where engineers understand the why behind their work but have the space to figure out the how. Aligned autonomy builds trust, reduces friction, and accelerates delivery by shifting control from a top-down approach to a shared, mission-driven one. ... For engineering teams, their north star might be tied to business outcomes, such as enabling a frictionless customer onboarding experience, reducing infrastructure costs by 30%, or achieving 99.9% system uptime. ... Autonomy without feedback is a blindfolded sprint, and just as likely to end in disaster. Feedback loops create connections between independent team actions and organizational learning. They allow teams to evaluate whether their decisions are having the intended impact and to course-correct when needed. ... In an aligned autonomy model, teams should have the freedom to choose their own path — as long as everyone’s moving in the same direction. 


How To Build a Software Factory

Of the three components, process automation is likely to present the biggest hurdle. Many organizations are happy to implement continuous integration and stop there, but IT leaders should strive to go further, Reitzig says. One example is automating underlying infrastructure configuration. If developers don’t have to set up testing or production environments before deploying code, they get a lot of time back and don’t need to wait for resources to become available. Another is improving security. Though there’s value in continuous integration automatically checking in, reviewing and integrating code, stopping there can introduce vulnerabilities. “This is a system for moving defects into production faster, because configuration and testing are still done manually,” Reitzig says. “It takes too long, it’s error-prone, and the rework is a tax on productivity.” ... While the software factory standardizes much of the development process, it’s not monolithic. “You need different factories to segregate domains, regulations, geographic regions and the culture of what’s acceptable where,” Yates says. However, even within domains, software can serve vastly different purposes. For instance, human resources might seek to develop applications that approve timesheets or security clearances. Managing many software factories can pose challenges, and organizations would be wise to identify redundancies, Reitzig says. 


Why Scaling Makes Microservices Testing Exponentially Harder

You’ve got clean service boundaries and focused test suites, and each team can move independently. Testing a payment service? Spin up the service, mock the user service and you’re done. Simple. This early success creates a reasonable assumption that testing complexity will scale proportionally with the number of services and developers. After all, if each service can be tested in isolation and you’re growing your engineering team alongside your services, why wouldn’t the testing effort scale linearly? ... Mocking strategies that work beautifully at a small scale become maintenance disasters at a large scale. One API change can require updating dozens of mocks across different codebases, owned by different teams. ... Perhaps the most painful scaling challenge is what happens to shared staging environments. With a few services, staging works reasonably well. Multiple teams can coordinate deployments, and when something breaks, the culprit is usually obvious. But as you add services and teams, staging becomes either a traffic jam or a free-for-all — and both are disastrous. ... The teams that successfully scale microservices testing have figured out how to break this exponential curve. They’ve moved away from trying to duplicate production environments for testing and are instead focused on creating isolated slices of their production-like environment.


India’s Digital Infrastructure Is Going Global. What Kind of Power Is It Building?

India’s digital transformation is often celebrated as a story of frugal innovation. DPI systems have allowed hundreds of millions to access ID, receive payments, and connect to state services. In a country of immense scale and complexity, this is an achievement. But these systems do more than deliver services; they configure how the state sees its citizens: through biometric records, financial transactions, health databases, and algorithmic scoring systems. ... India’s digital infrastructure is not only reshaping domestic governance, but is being actively exported abroad. From vaccine certification platforms in Sri Lanka and the Philippines to biometric identity systems in Ethiopia, elements of India Stack are being adopted across Asia and Africa. The Modular Open Source Identity Platform (MOSIP), developed in Bangalore, is now in use in more than twenty countries. Indeed, India is positioning itself as a provider of public infrastructure for the Global South, offering a postcolonial alternative to both Silicon Valley’s corporate-led ecosystems and China’s surveillance-oriented platforms. ... It would be a mistake to reduce India’s digital governance model to either a triumph of innovation or a tool of authoritarian control. The reality is more of a fragmented and improvisational technopolitics. These platforms operate across a range of sectors and are shaped by diverse actors including bureaucrats, NGOs, software engineers, and civil society activists.


Chris Wright: AI needs model, accelerator, and cloud flexibility

As the model ecosystem has exploded, platform providers face new complexity. Red Hat notes that only a few years ago, there were limited AI models available under open user-friendly licenses. Most access was limited to major cloud platforms offering GPT-like models. Today, the situation has changed dramatically. “There’s a pretty good set of models that are either open source or have licenses that make them usable by users”, Wright explains. But supporting such diversity introduces engineering challenges. Different models require different model customization and inference optimizations, and platforms must balance performance with flexibility. ... The new inference capabilities, delivered with the launch of Red Hat AI Inference Server, enhance Red Hat’s broader AI vision. This spans multiple offerings: Red Hat OpenShift AI, Red Hat Enterprise Linux AI, and the aforementioned Red Hat AI Inference Server under the Red Hat AI umbrella. Along the are embedded AI capabilities across Red Hat’s hybrid cloud offerings with Red Hat Lightspeed. These are not simply single products but a portfolio that Red Hat can evolve based on customer and market demands. This modular approach allows enterprises to build, deploy, and maintain models based on their unique use case, across their infrastructure. This from edge deployments to centralized cloud inference, while maintaining consistency in management and operations.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats as these approaches focus on infrastructure, neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. As such, cyber resilience needs more than a focus on recovery. It requires the ability to recover with data integrity intact and prevent the same vulnerabilities that caused the incident in the first place. ... Failover plans, which are common in disaster recovery, focus on restarting Virtual Machines (VMs) sequentially but lack comprehensive validation. Application-centric recovery runbooks, however, provide a step-by-step approach to help teams manage and operate technology infrastructure, applications and services. This is key to validating whether each service, dataset and dependency works correctly in a staged and sequenced approach. This is essential as businesses typically rely on numerous critical applications, requiring a more detailed and validated recovery process.


Rethinking Distributed Computing for the AI Era

The problem becomes acute when we examine memory access patterns. Traditional distributed computing assumes computation can be co-located with data, minimizing network traffic—a principle that has guided system design since the early days of cluster computing. But transformer architectures require frequent synchronization of gradient updates across massive parameter spaces—sometimes hundreds of billions of parameters. The resulting communication overhead can dominate total training time, explaining why adding more GPUs often yields diminishing returns rather than the linear scaling expected from well-designed distributed systems. ... The most promising approaches involve cross-layer optimization, which traditional systems avoid when maintaining abstraction boundaries. For instance, modern GPUs support mixed-precision computation, but distributed systems rarely exploit this capability intelligently. Gradient updates might not require the same precision as forward passes, suggesting opportunities for precision-aware communication protocols that could reduce bandwidth requirements by 50% or more. ... These architectures often have non-uniform memory hierarchies and specialized interconnects that don’t map cleanly onto traditional distributed computing abstractions. 

Daily Tech Digest - July 16, 2025


Quote for the day:

"Whatever the mind of man can conceive and believe, it can achieve." -- Napoleon Hill


The Seventh Wave: How AI Will Change the Technology Industry

AI presents three threats to the software industry: Cheap code: TuringBots, using generative AI to create software, threatens the low-code/no-code players. Cheap replacement: Software systems, be they CRM or ERP, are structured databases – repositories for client records or financial records. Generative AI, coupled with agentic AI, holds out the promise of a new way to manage this data, opening the door to an enterprising generation of tech companies that will offer AI CRM, AI financials, AI database, AI logistics, etc. ... Better functionality: AI-native systems will continually learn and flex and adapt without millions of dollars of consulting and customization. They hold the promise of being up to date and always ready to take on new business problems and challenges without rebuilds. When the business and process changes, the tech will learn and change. ... On one hand, the legacy software systems that PwC, Deloitte, and others have implemented for decades and that comprise much of their expertise will be challenged in the short term and shrink in the long term. Simultaneously, there will be a massive demand for expertise in AI. Cognizant, Capgemini, and others will be called on to help companies implement AI computing systems and migrate away from legacy vendors. Forrester believes that the tech services sector will grow by 3.6% in 2025.


Software Security Imperative: Forging a Unified Standard of Care

The debate surrounding liability in the open source ecosystem requires careful consideration. Imposing direct liability on individual open source maintainers could stifle the very innovation that drives the industry forward. It risks dismantling the vast ecosystem that countless developers rely upon. ... The software bill of materials (SBOM) is rapidly transitioning from a nascent concept to an undeniable business necessity. As regulatory pressures intensify, driven by a growing awareness of software supply chain risks, a robust SBOM strategy is becoming critical for organizational survival in the tech landscape. But the value of SBOMs extends far beyond a single software development project. While often considered for open source software, an SBOM provides visibility across the entire software ecosystem. It illuminates components from third-party commercial software, helps manage data across merged projects and validates code from external contributors or subcontractors — any code integrated into a larger system. ... The path to a secure digital future requires commitment from all stakeholders. Technology companies must adopt comprehensive security practices, regulators must craft thoughtful policies that encourage innovation while holding organizations accountable and the broader ecosystem must support the collaborative development of practical and effective standards.


The 4 Types of Project Managers

The prophet type is all about taking risks and pushing boundaries. They don’t play by the rules; they make their own. And they’re not just thinking outside the box, they’re throwing the box away altogether. It’s like a rebel without a cause, except this rebel has a cause – growth. These visionaries thrive in ambiguity and uncertainty, seeing potential where others see only chaos or impossibility. They often face resistance from more conservative team members who prefer predictable outcomes and established processes. ... The gambler type is all about taking chances and making big bets. They’re not afraid to roll the dice and see what happens. And while they play by the rules of the game, they don’t have a good business case to back up their bets. It’s like convincing your boss to let you play video games all day because you just have a hunch it will improve your productivity. But don’t worry, the gambler type isn’t just blindly throwing money around. They seek to engage other members of the organization who are also up for a little risk-taking. ... The expert type is all about challenging the existing strategy by pursuing growth opportunities that lie outside the current strategy, but are backed up by solid quantitative evidence. They’re like the detectives of the business world, following the clues and gathering the evidence to make their case. And while the growth opportunities are well-supported and should be feasible, the challenge is getting other organizational members to listen to their advice.


OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they turn into actions. But the researchers warn this transparency is fragile and could vanish as AI technology advances. ... “AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments. ... When AI models misbehave — exploiting training flaws, manipulating data, or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts. Jakub Pachocki, OpenAI’s chief technology officer and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness & interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.


Unmasking AsyncRAT: Navigating the labyrinth of forks

We believe that the groundwork for AsyncRAT was laid earlier by the Quasar RAT, which has been available on GitHub since 2015 and features a similar approach. Both are written in C#; however, their codebases differ fundamentally, suggesting that AsyncRAT was not just a mere fork of Quasar, but a complete rewrite. A fork, in this context, is a personal copy of someone else’s repository that one can freely modify without affecting the original project. The main link that ties them together lies in the custom cryptography classes used to decrypt the malware configuration settings. ... Ever since it was released to the public, AsyncRAT has spawned a multitude of new forks that have built upon its foundation. ... It’s also worth noting that DcRat’s plugin base builds upon AsyncRAT and further extends its functionality. Among the added plugins are capabilities such as webcam access, microphone recording, Discord token theft, and “fun stuff”, a collection of plugins used for joke purposes like opening and closing the CD tray, blocking keyboard and mouse input, moving the mouse, turning off the monitor, etc. Notably, DcRat also introduces a simple ransomware plugin that uses the AES-256 cipher to encrypt files, with the decryption key distributed only once the plugin has been requested.


Repatriating AI workloads? A hefty data center retrofit awaits

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says. “As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.” Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack. “Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.” Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says. “By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”


Burnout, budgets and breaches – how can CISOs keep up?

As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty. This is also true for communication, especially post-breach. Organisations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. ... There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. ... To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment. But resilience isn’t built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. 


We Spend a Lot of Time Thinking Through the Worst - The Christ Hospital Health Network CISO

“We’ve spent a lot of time meeting with our business partners and talking through, ‘Hey, how would this specific part of the organization be able to run if this scenario happened?’” On top of internal preparations, Kobren shares that his team monitors incidents across the industry to draw lessons from real-world events. Given the unique threat landscape, he states, “We do spend a lot of time thinking through those scenarios because we know it’s one of the most attacked industries.” Moving forward, Kobren says that healthcare consistently ranks at the top when it comes to industries frequently targeted by cyberattacks. He elaborates that attackers have recognized the high impact of disrupting hospital services, making ransom demands more effective because organizations are desperate to restore operations. ... To strengthen identity security, Kobren follows a strong, centralized approach to access control. He mentions that the organization aims to manage “all access to all systems,” including remote and cloud-based applications. By integrating services with single sign-on (SSO), the team ensures control over user credentials: “We know that we are in control of your username and password.” This allows them to enforce password complexity, reset credentials when needed, and block accounts if security is compromised. Ultimately, Kobren states, “We want to be in control of as much of that process as possible” when it comes to identity management.


AI requires mature choices from companies

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies. The reverse is also important: being able to scale back. ... Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments. According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises. Embrace surprises, Zwakenberg explains. 


Prompt-Based DevOps and the Reimagined Terminal

New AI-driven CLI tools prove there's demand for something more intelligent in the command line, but most are limited — they're single-purpose apps tied to individual model providers instead of full environments. They are geared towards code generation, not infrastructure and production work. They hint at what's possible, but don't deliver the deeper integration AI-assisted development needs. That's not a flaw, it's an opportunity to rethink the terminal entirely. The terminal's core strengths — its imperative input and time-based log of actions — make it the perfect place to run not just commands, but launch agents. By evolving the terminal to accept natural language input, be more system-aware, and provide interactive feedback, we can boost productivity without sacrificing the control engineers rely on. ... With prompt-driven workflows, they don't have to switch between dashboards or copy-paste scripts from wikis because they simply describe what they want done, and an agent takes care of the rest. And because this is taking place in the terminal, the agent can use any CLI to gather and analyze information from across data sources. The result? Faster execution, more consistent results, and fewer mistakes. That doesn't mean engineers are sidelined. Instead, they're overseeing more projects at once. Their role shifts from doing every step to supervising workflows — monitoring agents, reviewing outputs, and stepping in when human judgment is needed.