Pages

Daily Tech Digest - July 19, 2025


Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks


AI-Driven Threat Hunting: Catching Zero Day Exploits Before They Strike

Cybersecurity has come a long way from the days of simple virus scanners and static firewalls. Signature-based defenses were sufficient to detect known malware during the past era. Zero-day exploits operate as unpredictable threats that traditional security tools fail to detect. The technology sector saw Microsoft and Google rush to fix more than dozens of zero day vulnerabilities which attackers used in the wild during 2023. The consequences reach extreme levels because a single security breach results in major financial losses and immediate destruction of corporate reputation. AI functions as a protective measure that addresses weaknesses in human capabilities and outdated system limitations. The system analyzes enormous amounts of data from network traffic and timestamps and IP logs, and other inputs to detect security risks. ... So how does AI pull this off? It’s all about finding the weird stuff. Network traffic packets follow regular patterns, but zero-day exploits cause packet size fluctuations and timing irregularities. AI detects anomalies by comparing data against its knowledge base of typical behavior patterns. Autoencoders function as neural networks that learn to recreate data during operation. When an autoencoder fails to rebuild data, it automatically identifies the suspicious activity.


How AI is changing the GRC strategy

CISOs are in a tough spot because they have a dual mandate to increase productivity and leverage this powerful emerging technology, while still maintaining governance, risk and compliance obligations, according to Rich Marcus, CISO at AuditBoard. “They’re being asked to leverage AI or help accelerate the adoption of AI in organizations to achieve productivity gains. But don’t let it be something that kills the business if we do it wrong,” says Marcus. ... “The really important thing to be successful with managing AI risk is to approach the situation with a collaborative mindset and broadcast the message to folks that we’re all in it together and you’re not here to slow them down.” ... Ultimately, the task is for security leaders to apply a security lens to AI using governance and risk as part of the broader GRC framework in the organization. “A lot of organizations will have a chief risk officer or someone of that nature who owns the broader risk across the environment, but security should have a seat at the table,” Norton says. “These days, it’s no longer about CISOs saying ‘yes’ or ‘no’. It’s more about us providing visibility of the risks involved in doing certain things and then allowing the organization and the senior executives to make decisions around those risks.”


Three Invisible Hurdles to Innovation

Innovation changes internal power dynamics. The creation of a new line of business leads to a legacy line of business declining or, at an extreme, shutting down or being spun out. One part of the organization wins; another loses. Why would a department put forward or support a proposal that would put that department out of business or lead it to lose organizational influence? That means senior leaders might never see a proposal that’s good for the whole organization if it is bad for one part of the organization. ... While the natural language interface of OpenAI’s ChatGPT was easy the first time I used it, I wasn’t sure what to do with a large language model (LLM). First I tried to mimic a Google search, and then jumped in and tried to design a course from scratch. The lack of artfully constructed prompts on first-generation technology led to predictably disappointing results. For DALL-E, I tried to prove that AI couldn’t match the skills of my daughter, a skilled artist. Seeing mediocre results left me feeling smug, reaffirming my humanity. ... Social identity theory suggests that individuals often merge their personal identity with the offerings of the company at which they work. Ask them who they are, and they respond with what they do: “I’m a newspaper guy.” So imagine how Gilbert’s message landed with his employees who worked to produce a print newspaper every day.


Beyond Code Generation: How Asimov is Transforming Engineering Team Collaboration

The conventional wisdom around AI coding assistance has been misguided. Research shows that engineers spend only about 10% of their time writing code, while the remaining 70% is devoted to understanding existing systems, debugging issues, and collaborating with teammates on intricate problems. This reality exposes a significant gap in current AI tooling, which predominantly focuses on code generation rather than comprehension. “Engineers don’t spend most of their time writing code. They spend most of their time understanding code and collaborating with other teammates on hard problems,” explains the Reflection team. This insight drives Asimov’s unique approach to engineering productivity. ... As engineering teams grapple with increasingly complex systems and distributed architectures, tools like Asimov offer a glimpse into a future where AI serves as a genuine collaborative partner rather than just a code completion engine. By focusing on understanding and context rather than mere generation, Asimov addresses the actual pain points that slow down engineering teams. The tool is currently in early access, with Reflection AI selecting teams for initial deployment. 


Data Management Makes or Breaks AI Success for SLGs

“Many agencies start their AI journeys with a specific use case, something simple like a chatbot,” says John Whippen, regional vice president for U.S. public sector at Snowflake. “As they show the value of those individual use cases, they’ll attempt to make it more prevalent across an entire agency or department.” Especially in populous jurisdictions, readying data for large-scale AI initiatives can be challenging. Nevertheless, that initial data consolidation, governance and management are central to cross-agency AI deployments, according to Whippen and other industry experts. ... Most state agencies operate on a hybrid cloud model. Many of them work with multiple hyperscalers and likely will for the foreseeable future. This creates potential data fragmentation. However, where the data is stored is not necessarily as important as the ability to centralize how it is accessed, managed and manipulated. “Today, you can extract all of that data much more easily, from a user interface perspective, and manipulate it the way you want, then put it back into the system of record, and you don't need a data scientist for that,” says Mike Hurt, vice president of state and local government and education for ServiceNow. “It's not your grandmother's way of tagging anymore.”


The Role Of Empathy In Effective Leadership

To maintain good working relationships with others, you must be willing to understand their experiences and perspectives. As we all know, everyone sees the world through a different lens. Even if you don’t fully align with others’ worldviews, as a leader, you must create an environment where individuals feel heard and respected. ... Operate with perspective and cultivate inclusive practices. In a way, empathy is being able to see through the eyes of others. Many of the unspoken rules of the corporate world are based on the experience of white males in the workforce. Considering the countless other demographics in the modern workforce, most of these nuances or patterns are outdated, exclusionary, counterproductive, and even harmful to some people. Can you identify any unspoken rules you enforce or adhere to within your career? Sometimes, they are hard to spot right away. In my research as a DEI professional, I’ve encountered many unspoken cultural rules that don’t consider the perspective of diverse groups. ... Empathetic leaders create more harmonious workplaces and inspire their teams to perform better. Creating an atmosphere of acceptance and understanding sets the stage for healthier dynamics. In questioning the status quo, you root out any counterproductive trends in company culture that need addressing.


New Research on the Link Between Learning and Innovation

Cognitive neuroscience confirms what experienced leaders intuitively know: Our brains need structured breaks to turn experiences into actionable knowledge. Just as sleep helps consolidate daily experiences into long-term memory, structured reflection allows teams to integrate insights gained during exploration phases into strategies and plans. Without these deliberate rhythms, teams risk becoming overwhelmed by continual information intake—akin to endlessly inhaling without pausing to exhale—leading to confusion and burnout. By intentionally embedding reflective pauses within structured learning cycles, teams can harness their full innovative potential. ... You can think of a team’s learning activities as elements of a musical masterpiece. Just as great compositions—like Beethoven’s Fifth Symphony—skillfully balance moments of tension with moments of powerful resolution, effective team learning thrives on the structured interplay between building up and then releasing tension. Harmonious learning occurs when complementary activities, such as team reflection and external expert consultations, reinforce one another, creating moments of clarity and alignment. Conversely, dissonance arises when conflicting activities, like simultaneous experimentation and detailed planning, collide and cause confusion.


Optimizing Search Systems: Balancing Speed, Relevance, and Scalability

Efficiently managing geospatial search queries on Uber Eats is crucial, as users often seek outnearby restaurants or grocery stores. To achieve this, Uber Eats uses geo-sharding, a technique that ensures all relevant data for a specific location is stored within a single shard. This minimizes query overhead and eliminates inefficiencies caused by fetching and aggregating results from multiple shards. Additionally, geo sharding allows first-pass ranking to happen directly on data nodes, improving speed and accuracy. Uber Eats primarily employs two geo sharding techniques: latitude sharding and hex sharding. Latitude sharding divides the world into horizontal bands, with each band representing a distinct shard. Shard ranges are computed offline using Spark jobs, which first divide the map into thousands of narrow latitude stripes and then group adjacent stripes to create shards of roughly equal size. Documents falling on shard boundaries are indexed in both neighboring shards to prevent missing results. One key advantage of latitude sharding is its ability to distribute traffic efficiently across different time zones. Given that Uber Eats experiences peak activity following a "sun pattern" with high demand during the day and lower demand at night, this method helps prevent excessive load on specific shards. 


How to beat the odds in tech transformation

Creating an enterprise-wide technology solution requires defining a scope that’s ambitious and quickly actionable and has an underlying objective to keep your customers and organization on board throughout the project. ... Technology may seem even more autonomous, but tech transformations are not. They depend on the full engagement and alignment of people across your organization, starting with leadership. First, senior leaders need to be educated so they clearly understand not just the features of the new technology but more so the business benefits. This will motivate them to champion engagement and adoption throughout the organization. ... Even the best-planned journeys to new frontiers will run into unexpected challenges. For instance, while we had extensively planned for customer migration during our tech transformation, the effort required to make it go as quickly and smoothly as possible was greater than expected. After all, we provide mission-critical solutions, so customers didn’t simply want to know we had validated a new product. They wanted reassurance we had validated their specific use cases. In response, we doubled down on resources to give them enhanced confidence. As mentioned, we introduced a protocol of parallel systems, running the old and new simultaneously. 


Leadership vs. Management in Project Management: Walking the Tightrope Between Vision and Execution

At its core, management is about control. It’s the science of organising tasks, allocating resources, and ensuring deliverables meet specifications. Managers thrive on Gantt charts, risk matrices, and status reports. They’re the architects of order in a world prone to chaos.. It’s the science of organising tasks, allocating resources, and ensuring deliverables meet specifications. Managers thrive on Gantt charts, risk matrices, and status reports. They’re the architects of order in a world prone to chaos. Leadership, on the other hand, is about inspiration. It’s the art of painting a compelling vision, rallying teams around a shared purpose, and navigating uncertainty with grit. ... A project manager’s IQ might land them the job, but their EQ determines their success. Leadership in project management isn’t just about charisma—it’s about sensing unspoken tensions, motivating burnt-out teams, and navigating stakeholder egos. ... The debate between leadership and management is a false dichotomy. Like yin and yang, they’re interdependent forces. A project manager who only manages becomes a bureaucrat, obsessed with checkboxes but blind to the bigger picture. One who only leads becomes a dreamer, chasing visions without a roadmap. The future belongs to hybrids—those who can rally a team with a compelling vision and deliver a flawless product on deadline.

Daily Tech Digest - July 18, 2025


Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis




Machine unlearning gets a practical privacy upgrade

Machine unlearning, which refers to strategies for removing the influence of specific training data from a model, has emerged to fill the gap. But until now, most approaches have either been slow and costly or fast but lacking formal guarantees. A new framework called Efficient Unlearning with Privacy Guarantees (EUPG) tries to solve both problems at once. Developed by researchers at the Universitat Rovira i Virgili in Catalonia, EUPG offers a practical way to forget data in machine learning models with provable privacy protections and a lower computational cost. Rather than wait for a deletion request and then scramble to rework a model, EUPG starts by preparing the model for unlearning from the beginning. The idea is to first train on a version of the dataset that has been transformed using a formal privacy model, either k-anonymity or differential privacy. This “privacy-protected” model doesn’t memorize individual records, but still captures useful patterns. ... The researchers acknowledge that extending EUPG to large language models and other foundation models will require further work, especially given the scale of the data and the complexity of the architectures involved. They suggest that for such systems, it may be more practical to apply privacy models directly to the model parameters during training, rather than to the data beforehand.


Emerging Cloaking-as-a-Service Offerings are Changing Phishing Landscape

Cloaking-as-a-service offerings – increasingly powered by AI – are “quietly reshaping how phishing and fraud infrastructure operates, even if it hasn’t yet hit mainstream headlines,” SlashNext’s Research Team wrote Thursday. “In recent years, threat actors have begun leveraging the same advanced traffic-filtering tools once used in shady online advertising, using artificial intelligence and clever scripting to hide their malicious payloads from security scanners and show them only to intended victims.” ... The newer cloaking services offer advanced detection evasion techniques, such as JavaScript fingerprinting, device and network profiling, machine learning analysis and dynamic content swapping, and put them into user-friendly platforms that hackers and anyone else can subscribe to, SlashNext researchers wrote. “Cybercriminals are effectively treating their web infrastructure with the same sophistication as their malware or phishing emails, investing in AI-driven traffic filtering to protect their scams,” they wrote. “It’s an arms race where cloaking services help attackers control who sees what online, masking malicious activity and tailoring content per visitor in real time. This increases the effectiveness of phishing sites, fraudulent downloads, affiliate fraud schemes and spam campaigns, which can stay live longer and snare more victims before being detected.”


You’re Not Imagining It: AI Is Already Taking Tech Jobs

It’s difficult to pinpoint the exact motivation behind job cuts at any given company. The overall economic environment could also be a factor, marked by uncertainties heightened by President Donald Trump’s erratic tariff plans. Many companies also became bloated during the pandemic, and recent layoffs could still be trying to correct for overhiring. According to one report released earlier this month by the executive coaching firm Challenger, Gray and Christmas, AI may be more of a scapegoat than a true culprit for layoffs: Of more than 286,000 planned layoffs this year, only 20,000 were related to automation, and of those, only 75 were explicitly attributed to artificial intelligence, the firm found. Plus, it’s challenging to measure productivity gains caused by AI, said Stanford’s Chen, because while not every employee may have AI tools officially at their disposal at work, they do have unauthorized consumer versions that they may be using for their jobs. While the technology is beginning to take a toll on developers in the tech industry, it’s actually “modestly” created more demand for engineers outside of tech, said Chen. That’s because other sectors, like manufacturing, finance, and healthcare, are adopting AI tools for the first time, so they are adding engineers to their ranks in larger numbers than before, according to her research.


The architecture of culture: People strategy in the hospitality industry

Rewards and recognitions are the visible tip of the iceberg, but culture sits below the surface. And if there’s one thing that I’ve learned over the years, it’s that culture only sticks when it’s felt, not just said. Not once a year, but every single day. Hilton’s consistent recognition as a Great Place to Work® globally and in India stems from our unwavering support and commitment to helping people thrive, both personally and professionally. ... What has sustained our culture through this growth is a focus on the everyday. It is not big initiatives alone that shape how people feel at work, but the smaller, consistent actions that build trust over time. Whether it is how a team huddle is run, how feedback is received, or how farewells are handled, we treat each moment as an opportunity to reinforce care and connection. ... Equally vital is cultivating culturally agile, people-first leaders. South Asia’s diversity, across language, faith, generation, and socio-economic background, demands leadership that is both empathetic and inclusive. We’re working to embed this cultural intelligence across the employee journey, from hiring and onboarding to ongoing development and performance conversations, so that every team member feels genuinely seen and supported.


Capturing carbon - Is DAC a perfect match for data centers?

The commercialization of DAC, however, faces several significant challenges. One primary obstacle is navigating different compliance requirements across jurisdictions. Certification standards vary significantly between regions like Canada, the UK, and Europe, necessitating differing approaches in each jurisdiction. However, while requiring adjustments, Chadwick argues that these differences are not insurmountable and are merely part of the scaling process. Beyond regulatory and deployment concerns, achieving cost reductions is a significant challenge. DAC remains highly expensive, costing an average of $680 per ton to produce in 2024, according to Supercritical, a carbon removal marketplace. In comparison, Biochar has an average price of $165 per ton, and enhanced rock weathering has an average price of $310 per ton. In addition, the complexity of DAC means up-front costs are much higher than those of alternative forms of carbon removal. An average DAC unit comprises air-intake manifolds, absorption and desorption towers, liquid-handling tanks, and bespoke site-specific engineering. DAC also requires significant amounts of power to operate. Recent studies have shown that the energy consumption of fans in DAC plants can range from 300 to 900 kWh per ton of CO2 captures, which represents between 20 - 40 percent of total DAC system energy usage.


Rethinking Risk: The Role of Selective Retrieval in Data Lake Strategies

Selective retrieval works because it bridges the gap between data engineering complexity and security usability. It gives teams options without asking them to reinvent the wheel. It also avoids the need to bring in external tools during a breach investigation, which can introduce latency, complexity, or worse, gaps in the chain of custody. What’s compelling about this approach is that it doesn’t require businesses to abandon existing tools or re-architect their infrastructure. ... This model is especially relevant for mid-size IT teams who want to cover their audit requirements, but don’t have a 24/7 security operations center. It’s also useful in regulated sectors such as healthcare, financial services, and manufacturing where data retention isn’t optional, but real-time analysis for everything isn’t practical. ... Data volumes are continuing to rise. As organizations face high costs and fatigue, those that thrive will be the ones that treat storage and retrieval as distinct functions. The ability to preserve signal without incurring ongoing noise costs will become a critical enabler for everything from insider threat detection to regulatory compliance. Selective retrieval isn’t just about saving money. It’s about regaining control over data sprawl, aligning IT resources with actual risk, and giving teams the tools they need to ask, and answer, better questions.


Manufactured Madness: How To Protect Yourself From Insane AIs

The core of the problem lies in a well-intentioned but flawed premise: that we can and should micromanage an AI’s output to prevent any undesirable outcomes. These “guardrails” are complex sets of rules and filters designed to stop the model from generating hateful, biased, dangerous, or factually incorrect information. In theory, this is a laudable goal. In practice, it has created a generation of AIs that prioritize avoiding offense over providing truth. ... Compounding the problem of forced outcomes is a crisis of quality. The data these models are trained on is becoming increasingly polluted. In the early days, models were trained on a vast, curated slice of the pre-AI internet. But now, as AI-generated content inundates every corner of the web, new models are being trained on the output of their predecessors. ... Given this landscape, the burden of intellectual safety now falls squarely on the user. We can no longer afford to treat AI-generated text with passive acceptance. We must become active, critical consumers of its output. Protecting yourself requires a new kind of digital literacy. First and foremost: Trust, but verify. Always. Never take a factual claim from an AI at face value. Whether it’s a historical date, a scientific fact, a legal citation, or a news summary, treat it as an unconfirmed rumor until you have checked it against a primary source.


6 Key Lessons for Businesses that Collect and Use Consumer Data

Ensure your privacy notice properly discloses consumer rights, including the right to access, correct, and delete personal data stored and collected by businesses, and the right to opt-out of the sale of personal data and targeted advertising. Mechanisms for exercising those rights must work properly, with a process in place to ensure a timely response to consumer requests. ... Another issue that the Connecticut AG raised was that the privacy notice was “largely unreadable.” While privacy notices address legal rights and obligations, you should avoid using excessive legal jargon to the extent possible and use clear, simple language to notify consumers about their rights and the mechanisms for exercising those rights. In addition, be as succinct as possible to help consumers locate the information they need to understand and exercise applicable rights. ... The AG provided guidance that under the CTDPA, if a business uses cookie banners to permit a consumer to opt-out of some data processing, such as targeted advertising, the consumer must be provided with a symmetrical choice. In other words, it has to be as clear and as easy for the consumer to opt out of such use of their personal data as it would be to opt in. This includes making the options to accept all cookies and to reject all cookies visible on the screen at the same time and in the same color, font, and size.


How agentic AI Is reshaping execution across BFSI

Several BFSI firms are already deploying agentic models within targeted areas of their operations. The results are visible in micro-interventions that improve process flow and reduce manual load. Autonomous financial advisors, powered by agentic logic, are now capable of not just reacting to user input, but proactively monitoring markets, assessing customer portfolios, and recommending real-time changes.. In parallel, agentic systems are transforming customer service by acting as intelligent finance assistants, guiding users through complex processes such as mortgage applications or claims filing. ... For Agentic AI to succeed, it must be integrated into operational strategy. This begins by identifying workflows where progress depends on repetitive human actions that follow predictable logic. These are often approval chains, verifications, task handoffs, and follow-ups. Once identified, clear rules need to be defined. What conditions trigger an action? When is escalation required? What qualifies as a closed loop? The strength of an agentic system lies in its ability to act with precision, but that depends on well-designed logic and relevant signals. Data access is equally important. Agentic AI systems require context. That means drawing from activity history, behavioural cues, workflow states and timing patterns. 


Open Source Is Too Important To Dilute

The unfortunate truth is that these criteria don’t apply in every use case. We’ve seen vendors build traction with a truly open project. Then, worried about monetization or competition, they relicense it under a “source-available” model with restrictions, like “no commercial use” or “only if you’re not a competitor.” But that’s not how open source works. Software today is deeply interconnected. Every project — no matter how small or isolated — relies on dependencies, which rely on other dependencies, all the way down the chain. A license that restricts one link in that chain can break the whole thing. ... Forks are how the OSS community defends itself. When HashiCorp relicensed Terraform under the Business Source License (BSL) — blocking competitors from building on the tooling — the community launched OpenTofu, a fork under an OSI-approved license, backed by major contributors and vendors. Redis’ transition away from Berkeley Software Distribution (BSD) to a proprietary license was a business decision. But it left a hole — and the community forked it. That fork became Valkey, a continuation of the project stewarded by the people and platforms who relied on it most. ... The open source brand took decades to build. It’s one of the most successful, trusted ideas in software history. But it’s only trustworthy because it means something.

Daily Tech Digest - July 17, 2025


Quote for the day:

"Accept responsibility for your life. Know that it is you who will get you where you want to go, no one else." -- Les Brown


AI That Thinks Like Us: New Model Predicts Human Decisions With Startling Accuracy

“We’ve created a tool that allows us to predict human behavior in any situation described in natural language – like a virtual laboratory,” says Marcel Binz, who is also the study’s lead author. Potential applications range from analyzing classic psychological experiments to simulating individual decision-making processes in clinical contexts – for example, in depression or anxiety disorders. The model opens up new perspectives in health research in particular – for example, by helping us understand how people with different psychological conditions make decisions. ... “We’re just getting started and already seeing enormous potential,” says institute director Eric Schulz. Ensuring that such systems remain transparent and controllable is key, Binz adds – for example, by using open, locally hosted models that safeguard full data sovereignty. ...  The researchers are convinced: “These models have the potential to fundamentally deepen our understanding of human cognition – provided we use them responsibly.” That this research is taking place at Helmholtz Munich rather than in the development departments of major tech companies is no coincidence. “We combine AI research with psychological theory – and with a clear ethical commitment,” says Binz. “In a public research environment, we have the freedom to pursue fundamental cognitive questions that are often not the focus in industry.” 


Collaboration is Key: How to Make Threat Intelligence Work for Your Organization

A challenge with joining threat intelligence sharing communities is that a lot of threat information is generated and needs to be shared daily. For already resource-stretched teams, it can be extra work to pull together, share a threat intelligence report, and filter through the incredible volumes of information. Particularly for smaller organizations, it can be a bit like drinking from a firehose. In this context, an advanced threat intelligence platform (TIP) can be invaluable. A TIP has the capabilities to collect, filter, and prioritize data, helping security teams to cut through the noise and act on threat intelligence faster. TIPs can also enrich the data with additional contexts, such as threat actor TTPs (tactics, techniques and procedures), indicators of compromise (IOCs), and potential impact, making it easier to understand and respond to threats. Furthermore, an advanced TIP can have the capability to automatically generate threat intelligence reports, ready to be securely shared within the organization’s threat intelligence sharing community Secure threat intelligence sharing reduces risk, accelerates response and builds resilience across entire ecosystems. If you’re not already part of a trusted intelligence-sharing community, it is time to join. And if you are, do contribute your own valuable threat information. In cybersecurity, we’re only as strong as our weakest link and our most silent partner.


Google study shows LLMs abandon correct answers under pressure, threatening multi-turn AI systems

The researchers first examined how the visibility of the LLM’s own answer affected its tendency to change its answer. They observed that when the model could see its initial answer, it showed a reduced tendency to switch, compared to when the answer was hidden. This finding points to a specific cognitive bias. As the paper notes, “This effect – the tendency to stick with one’s initial choice to a greater extent when that choice was visible (as opposed to hidden) during the contemplation of final choice – is closely related to a phenomenon described in the study of human decision making, a choice-supportive bias.” ... “This finding demonstrates that the answering LLM appropriately integrates the direction of advice to modulate its change of mind rate,” the researchers write. However, they also discovered that the model is overly sensitive to contrary information and performs too large of a confidence update as a result. ... Fortunately, as the study also shows, we can manipulate an LLM’s memory to mitigate these unwanted biases in ways that are not possible with humans. Developers building multi-turn conversational agents can implement strategies to manage the AI’s context. For example, a long conversation can be periodically summarized, with key facts and decisions presented neutrally and stripped of which agent made which choice.


Building stronger engineering teams with aligned autonomy

Autonomy in the absence of organizational alignment can cause teams to drift in different directions, build redundant or conflicting systems, or optimize for local success at the cost of overall coherence. Large organizations with multiple engineering teams can be especially prone to these kinds of dysfunction. The promise of aligned autonomy is that it resolves this tension. It offers “freedom within a framework,” where engineers understand the why behind their work but have the space to figure out the how. Aligned autonomy builds trust, reduces friction, and accelerates delivery by shifting control from a top-down approach to a shared, mission-driven one. ... For engineering teams, their north star might be tied to business outcomes, such as enabling a frictionless customer onboarding experience, reducing infrastructure costs by 30%, or achieving 99.9% system uptime. ... Autonomy without feedback is a blindfolded sprint, and just as likely to end in disaster. Feedback loops create connections between independent team actions and organizational learning. They allow teams to evaluate whether their decisions are having the intended impact and to course-correct when needed. ... In an aligned autonomy model, teams should have the freedom to choose their own path — as long as everyone’s moving in the same direction. 


How To Build a Software Factory

Of the three components, process automation is likely to present the biggest hurdle. Many organizations are happy to implement continuous integration and stop there, but IT leaders should strive to go further, Reitzig says. One example is automating underlying infrastructure configuration. If developers don’t have to set up testing or production environments before deploying code, they get a lot of time back and don’t need to wait for resources to become available. Another is improving security. Though there’s value in continuous integration automatically checking in, reviewing and integrating code, stopping there can introduce vulnerabilities. “This is a system for moving defects into production faster, because configuration and testing are still done manually,” Reitzig says. “It takes too long, it’s error-prone, and the rework is a tax on productivity.” ... While the software factory standardizes much of the development process, it’s not monolithic. “You need different factories to segregate domains, regulations, geographic regions and the culture of what’s acceptable where,” Yates says. However, even within domains, software can serve vastly different purposes. For instance, human resources might seek to develop applications that approve timesheets or security clearances. Managing many software factories can pose challenges, and organizations would be wise to identify redundancies, Reitzig says. 


Why Scaling Makes Microservices Testing Exponentially Harder

You’ve got clean service boundaries and focused test suites, and each team can move independently. Testing a payment service? Spin up the service, mock the user service and you’re done. Simple. This early success creates a reasonable assumption that testing complexity will scale proportionally with the number of services and developers. After all, if each service can be tested in isolation and you’re growing your engineering team alongside your services, why wouldn’t the testing effort scale linearly? ... Mocking strategies that work beautifully at a small scale become maintenance disasters at a large scale. One API change can require updating dozens of mocks across different codebases, owned by different teams. ... Perhaps the most painful scaling challenge is what happens to shared staging environments. With a few services, staging works reasonably well. Multiple teams can coordinate deployments, and when something breaks, the culprit is usually obvious. But as you add services and teams, staging becomes either a traffic jam or a free-for-all — and both are disastrous. ... The teams that successfully scale microservices testing have figured out how to break this exponential curve. They’ve moved away from trying to duplicate production environments for testing and are instead focused on creating isolated slices of their production-like environment.


India’s Digital Infrastructure Is Going Global. What Kind of Power Is It Building?

India’s digital transformation is often celebrated as a story of frugal innovation. DPI systems have allowed hundreds of millions to access ID, receive payments, and connect to state services. In a country of immense scale and complexity, this is an achievement. But these systems do more than deliver services; they configure how the state sees its citizens: through biometric records, financial transactions, health databases, and algorithmic scoring systems. ... India’s digital infrastructure is not only reshaping domestic governance, but is being actively exported abroad. From vaccine certification platforms in Sri Lanka and the Philippines to biometric identity systems in Ethiopia, elements of India Stack are being adopted across Asia and Africa. The Modular Open Source Identity Platform (MOSIP), developed in Bangalore, is now in use in more than twenty countries. Indeed, India is positioning itself as a provider of public infrastructure for the Global South, offering a postcolonial alternative to both Silicon Valley’s corporate-led ecosystems and China’s surveillance-oriented platforms. ... It would be a mistake to reduce India’s digital governance model to either a triumph of innovation or a tool of authoritarian control. The reality is more of a fragmented and improvisational technopolitics. These platforms operate across a range of sectors and are shaped by diverse actors including bureaucrats, NGOs, software engineers, and civil society activists.


Chris Wright: AI needs model, accelerator, and cloud flexibility

As the model ecosystem has exploded, platform providers face new complexity. Red Hat notes that only a few years ago, there were limited AI models available under open user-friendly licenses. Most access was limited to major cloud platforms offering GPT-like models. Today, the situation has changed dramatically. “There’s a pretty good set of models that are either open source or have licenses that make them usable by users”, Wright explains. But supporting such diversity introduces engineering challenges. Different models require different model customization and inference optimizations, and platforms must balance performance with flexibility. ... The new inference capabilities, delivered with the launch of Red Hat AI Inference Server, enhance Red Hat’s broader AI vision. This spans multiple offerings: Red Hat OpenShift AI, Red Hat Enterprise Linux AI, and the aforementioned Red Hat AI Inference Server under the Red Hat AI umbrella. Along the are embedded AI capabilities across Red Hat’s hybrid cloud offerings with Red Hat Lightspeed. These are not simply single products but a portfolio that Red Hat can evolve based on customer and market demands. This modular approach allows enterprises to build, deploy, and maintain models based on their unique use case, across their infrastructure. This from edge deployments to centralized cloud inference, while maintaining consistency in management and operations.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats as these approaches focus on infrastructure, neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. As such, cyber resilience needs more than a focus on recovery. It requires the ability to recover with data integrity intact and prevent the same vulnerabilities that caused the incident in the first place. ... Failover plans, which are common in disaster recovery, focus on restarting Virtual Machines (VMs) sequentially but lack comprehensive validation. Application-centric recovery runbooks, however, provide a step-by-step approach to help teams manage and operate technology infrastructure, applications and services. This is key to validating whether each service, dataset and dependency works correctly in a staged and sequenced approach. This is essential as businesses typically rely on numerous critical applications, requiring a more detailed and validated recovery process.


Rethinking Distributed Computing for the AI Era

The problem becomes acute when we examine memory access patterns. Traditional distributed computing assumes computation can be co-located with data, minimizing network traffic—a principle that has guided system design since the early days of cluster computing. But transformer architectures require frequent synchronization of gradient updates across massive parameter spaces—sometimes hundreds of billions of parameters. The resulting communication overhead can dominate total training time, explaining why adding more GPUs often yields diminishing returns rather than the linear scaling expected from well-designed distributed systems. ... The most promising approaches involve cross-layer optimization, which traditional systems avoid when maintaining abstraction boundaries. For instance, modern GPUs support mixed-precision computation, but distributed systems rarely exploit this capability intelligently. Gradient updates might not require the same precision as forward passes, suggesting opportunities for precision-aware communication protocols that could reduce bandwidth requirements by 50% or more. ... These architectures often have non-uniform memory hierarchies and specialized interconnects that don’t map cleanly onto traditional distributed computing abstractions. 

Daily Tech Digest - July 16, 2025


Quote for the day:

"Whatever the mind of man can conceive and believe, it can achieve." -- Napoleon Hill


The Seventh Wave: How AI Will Change the Technology Industry

AI presents three threats to the software industry: Cheap code: TuringBots, using generative AI to create software, threatens the low-code/no-code players. Cheap replacement: Software systems, be they CRM or ERP, are structured databases – repositories for client records or financial records. Generative AI, coupled with agentic AI, holds out the promise of a new way to manage this data, opening the door to an enterprising generation of tech companies that will offer AI CRM, AI financials, AI database, AI logistics, etc. ... Better functionality: AI-native systems will continually learn and flex and adapt without millions of dollars of consulting and customization. They hold the promise of being up to date and always ready to take on new business problems and challenges without rebuilds. When the business and process changes, the tech will learn and change. ... On one hand, the legacy software systems that PwC, Deloitte, and others have implemented for decades and that comprise much of their expertise will be challenged in the short term and shrink in the long term. Simultaneously, there will be a massive demand for expertise in AI. Cognizant, Capgemini, and others will be called on to help companies implement AI computing systems and migrate away from legacy vendors. Forrester believes that the tech services sector will grow by 3.6% in 2025.


Software Security Imperative: Forging a Unified Standard of Care

The debate surrounding liability in the open source ecosystem requires careful consideration. Imposing direct liability on individual open source maintainers could stifle the very innovation that drives the industry forward. It risks dismantling the vast ecosystem that countless developers rely upon. ... The software bill of materials (SBOM) is rapidly transitioning from a nascent concept to an undeniable business necessity. As regulatory pressures intensify, driven by a growing awareness of software supply chain risks, a robust SBOM strategy is becoming critical for organizational survival in the tech landscape. But the value of SBOMs extends far beyond a single software development project. While often considered for open source software, an SBOM provides visibility across the entire software ecosystem. It illuminates components from third-party commercial software, helps manage data across merged projects and validates code from external contributors or subcontractors — any code integrated into a larger system. ... The path to a secure digital future requires commitment from all stakeholders. Technology companies must adopt comprehensive security practices, regulators must craft thoughtful policies that encourage innovation while holding organizations accountable and the broader ecosystem must support the collaborative development of practical and effective standards.


The 4 Types of Project Managers

The prophet type is all about taking risks and pushing boundaries. They don’t play by the rules; they make their own. And they’re not just thinking outside the box, they’re throwing the box away altogether. It’s like a rebel without a cause, except this rebel has a cause – growth. These visionaries thrive in ambiguity and uncertainty, seeing potential where others see only chaos or impossibility. They often face resistance from more conservative team members who prefer predictable outcomes and established processes. ... The gambler type is all about taking chances and making big bets. They’re not afraid to roll the dice and see what happens. And while they play by the rules of the game, they don’t have a good business case to back up their bets. It’s like convincing your boss to let you play video games all day because you just have a hunch it will improve your productivity. But don’t worry, the gambler type isn’t just blindly throwing money around. They seek to engage other members of the organization who are also up for a little risk-taking. ... The expert type is all about challenging the existing strategy by pursuing growth opportunities that lie outside the current strategy, but are backed up by solid quantitative evidence. They’re like the detectives of the business world, following the clues and gathering the evidence to make their case. And while the growth opportunities are well-supported and should be feasible, the challenge is getting other organizational members to listen to their advice.


OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they turn into actions. But the researchers warn this transparency is fragile and could vanish as AI technology advances. ... “AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments. ... When AI models misbehave — exploiting training flaws, manipulating data, or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts. Jakub Pachocki, OpenAI’s chief technology officer and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness & interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.


Unmasking AsyncRAT: Navigating the labyrinth of forks

We believe that the groundwork for AsyncRAT was laid earlier by the Quasar RAT, which has been available on GitHub since 2015 and features a similar approach. Both are written in C#; however, their codebases differ fundamentally, suggesting that AsyncRAT was not just a mere fork of Quasar, but a complete rewrite. A fork, in this context, is a personal copy of someone else’s repository that one can freely modify without affecting the original project. The main link that ties them together lies in the custom cryptography classes used to decrypt the malware configuration settings. ... Ever since it was released to the public, AsyncRAT has spawned a multitude of new forks that have built upon its foundation. ... It’s also worth noting that DcRat’s plugin base builds upon AsyncRAT and further extends its functionality. Among the added plugins are capabilities such as webcam access, microphone recording, Discord token theft, and “fun stuff”, a collection of plugins used for joke purposes like opening and closing the CD tray, blocking keyboard and mouse input, moving the mouse, turning off the monitor, etc. Notably, DcRat also introduces a simple ransomware plugin that uses the AES-256 cipher to encrypt files, with the decryption key distributed only once the plugin has been requested.


Repatriating AI workloads? A hefty data center retrofit awaits

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says. “As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.” Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack. “Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.” Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says. “By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”


Burnout, budgets and breaches – how can CISOs keep up?

As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty. This is also true for communication, especially post-breach. Organisations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. ... There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. ... To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment. But resilience isn’t built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. 


We Spend a Lot of Time Thinking Through the Worst - The Christ Hospital Health Network CISO

“We’ve spent a lot of time meeting with our business partners and talking through, ‘Hey, how would this specific part of the organization be able to run if this scenario happened?’” On top of internal preparations, Kobren shares that his team monitors incidents across the industry to draw lessons from real-world events. Given the unique threat landscape, he states, “We do spend a lot of time thinking through those scenarios because we know it’s one of the most attacked industries.” Moving forward, Kobren says that healthcare consistently ranks at the top when it comes to industries frequently targeted by cyberattacks. He elaborates that attackers have recognized the high impact of disrupting hospital services, making ransom demands more effective because organizations are desperate to restore operations. ... To strengthen identity security, Kobren follows a strong, centralized approach to access control. He mentions that the organization aims to manage “all access to all systems,” including remote and cloud-based applications. By integrating services with single sign-on (SSO), the team ensures control over user credentials: “We know that we are in control of your username and password.” This allows them to enforce password complexity, reset credentials when needed, and block accounts if security is compromised. Ultimately, Kobren states, “We want to be in control of as much of that process as possible” when it comes to identity management.


AI requires mature choices from companies

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies. The reverse is also important: being able to scale back. ... Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments. According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises. Embrace surprises, Zwakenberg explains. 


Prompt-Based DevOps and the Reimagined Terminal

New AI-driven CLI tools prove there's demand for something more intelligent in the command line, but most are limited — they're single-purpose apps tied to individual model providers instead of full environments. They are geared towards code generation, not infrastructure and production work. They hint at what's possible, but don't deliver the deeper integration AI-assisted development needs. That's not a flaw, it's an opportunity to rethink the terminal entirely. The terminal's core strengths — its imperative input and time-based log of actions — make it the perfect place to run not just commands, but launch agents. By evolving the terminal to accept natural language input, be more system-aware, and provide interactive feedback, we can boost productivity without sacrificing the control engineers rely on. ... With prompt-driven workflows, they don't have to switch between dashboards or copy-paste scripts from wikis because they simply describe what they want done, and an agent takes care of the rest. And because this is taking place in the terminal, the agent can use any CLI to gather and analyze information from across data sources. The result? Faster execution, more consistent results, and fewer mistakes. That doesn't mean engineers are sidelined. Instead, they're overseeing more projects at once. Their role shifts from doing every step to supervising workflows — monitoring agents, reviewing outputs, and stepping in when human judgment is needed.

Daily Tech Digest - July 15, 2025


Quote for the day:

“Rarely have I seen a situation where doing less than the other guy is a good strategy.” -- Jimmy Spithill


CyberArk: Rise in Machine Identities Poses New Risks

The CyberArk report outlines the substantial business consequences of failing to protect machine identities, leaving organizations vulnerable to costly outages and breaches. Seventy-two percent of organizations experienced at least one certificate-related outage over the past year - a sharp increase compared to prior years. Additionally, 50% reported security incidents or breaches stemming from compromised machine identities. Companies that have experienced non-human identity security breaches include xAI, Uber, Schneider Electric, Cloudflare and BeyondTrust, among others. "Machine identities of all kinds will continue to skyrocket over the next year, bringing not only greater complexity but also increased risks," said Kurt Sand, general manager of machine identity security at CyberArk. "Cybercriminals are increasingly targeting machine identities - from API keys to code-signing certificates - to exploit vulnerabilities, compromise systems and disrupt critical infrastructure, leaving even the most advanced businesses dangerously exposed." ... Fifty percent of security leaders reported security incidents or breaches linked to compromised machine identities in the previous year. These incidents led to delays in application launches for 51% companies, customer-impacting outages for 44% and unauthorized access to sensitive systems for 43%.


What Can Businesses Do About Ethical Dilemmas Posed by AI?

Digital discrimination is a product of bias incorporated into the AI algorithms and deployed at various levels of development and deployment. The biases mainly result from the data used to train the large language models (LLMs). If the data reflects previous iniquities or underrepresents certain social groups, the algorithm has the potential to learn and perpetuate those iniquities. Biases may occasionally culminate in contextual abuse when an algorithm is used beyond the environment or audience for which it was intended or trained. Such a mismatch may result in poor predictions, misclassifications, or unfair treatment of particular groups. Lack of monitoring and transparency merely adds to the problem. In the absence of oversight, biased results are not discovered. ... Human-in-the-loop systems allow intervention in real time whenever AI acts unjustly or unexpectedly, thus minimizing potential harm and reinforcing trust. Human judgment makes choices more inclusive and socially sensitive by including cultural, emotional, or situational elements, which AI lacks. When humans remain in the loop of decision-making, accountability is shared and traceable. This removes ethical blind spots and holds users accountable for consequences.


Beyond the hype: AI disruption in India’s legal practice

The competitive dynamics are stark. When AI can complete a ten-hour task in two hours, firms face a pricing paradox: how to maintain profitability while passing efficiency gains to the clients? Traditional hourly billing models become unsustainable when the underlying time economics change dramatically. ... Effective AI integration hinges on a strong technological foundation, encompassing secure data architecture, advanced cybersecurity measures and a seamless and hassle-free interoperability between systems and already existing platforms. SAM’s centralised Harvey AI approach and CAM’s multi-tool strategy both imply significant investment in these backend capabilities. ... Merely automating existing workflows fails to leverage AI’s transformative potential. To unlock AI’s full transformative value, firms must rethink their legal processes – streamlining tasks, reallocating human resources to higher order functions and embedding AI at the core of decision-making processes and document production cycles. ... AI enables alternative service models that go beyond the billable hour. Firms that rethink on how they can price say, by offering subscription-based or outcome-driven services, and position themselves as strategic partners rather than task executors, will be best positioned to capture long-term client value in an AI-first legal economy.


‘Chronodebt’: The lose/lose situation few CIOs can escape

One needn’t be an expert in the field of technical architecture to know that basing a capability as essential as air traffic control on such obviously obsolete technology is a bad idea. Someone should lose their job over this. And yet, nobody has lost their job over this, nor should they have. That’s because the root cause of the FAA’s woes — poor chronodebt management, in case you haven’t been paying attention — is a discipline that’s rarely tracked by reliable metrics and almost-as-rarely budgeted for. Metrics first: While the discipline of IT project estimation is far from reliable, it’s good enough to be useful in estimating chronodebt’s remediation costs — in the FAA’s case what it would have to spend to fix or replace its integrations and the integration platforms on which those integrations rely. That’s good enough, with no need for precision. Those running the FAA for all these years could, that is, estimate the cost of replacing the programs used to export and update its repositories, and replacing the 3 ½” diskettes and paper strips on which they rely. But, telling you what you already know, good business decisions are based not just on estimated costs, but on benefits netted against those costs. The problem with chronodebt is that there are no clear and obvious ways to quantify the benefits to be had by reducing it.


Can System Initiative fix devops?

System Initiative turns traditional devops on its head. It translates what would normally be infrastructure configuration code into data, creating digital twins that model the infrastructure. Actions like restarting servers or running complex deployments are expressed as functions, then chained together in a dynamic, graphical UI. A living diagram of your infrastructure refreshes with your changes. Digital twins allow the system to automatically infer workflows and changes of state. “We’re modeling the world as it is,” says Jacob. For example, when you connect a Docker container to a new Amazon Elastic Container Service instance, System Initiative recognizes the relationship and updates the model accordingly. Developers can turn workflows — like deploying a container on AWS — into reusable models with just a few clicks, improving speed. The GUI-driven platform auto-generates API calls to cloud infrastructure under the hood. ... An abstraction like System Initiative could embrace this flexibility while bringing uniformity to how infrastructure is modeled and operated across clouds. The multicloud implications are especially intriguing, given the rise in adoption of multiple clouds and the scarcity of strong cross-cloud management tools. A visual model of the environment makes it easier for devops teams to collaborate based on a shared understanding, says Jacob — removing bottlenecks, speeding feedback loops, and accelerating time to value.


An exodus evolves: The new digital infrastructure market

Regulatory pressures have crystallised around concerns over reliance on a small number of US-based cloud providers. With some hyperscalers openly admitting that they cannot guarantee data stays within a jurisdiction during transfer, other types of infrastructure make it easier to maintain compliance with UK and EU regulations. This is a clear strategy to avoid future financial and reputational damage. ... 2025 is a pivotal year for digital infrastructure. Public cloud will remain an essential part of the IT landscape. But the future of data strategy lies in making informed, strategic decisions, leveraging the right mix of infrastructure solutions for specific workloads and business needs. As part of our research, we assessed the shape of this hybrid market. ... With one eye to the future, UK-based cloud providers must be positioned as a strategic advantage, offering benefits such as data sovereignty, regulatory compliance, and reduced latency. Businesses will need to situate themselves ever more precisely on the spectrum of digital infrastructure. Their location will reflect how they embrace a hybrid model that balances public cloud, private cloud, colocation and on-premise options. This approach will not only optimise performance and costs but also provide long-term resilience in an evolving digital economy.


How Trump's Cyber Cuts Dismantle Federal Information Sharing

"The budget cuts, personnel reductions and other policy changes have decreased the volume and frequency of CISA's information sharing activities in both formal and informal channels," Daniel told ISMG. While sector-specific ISACs still share information, threat sharing efforts tied to federal funding - such as the Multi-State ISAC, which supports state and local governments - "have been negatively affected," he said . One former CISA staffer who recently accepted the administration's deferred resignation offer told ISMG the agency's information-sharing efforts "were among the first to take a hit" from the administration's cuts, with many feeling pressured into silence. ... Analysts have also warned that cuts to cyber staff across federal agencies and risks to initiatives including the National Vulnerability Database and Common Vulnerabilities and Exposures program could harm cybersecurity far beyond U.S. borders. The CVE program is dealing with backlogs and a recent threat to shut down funding over a federal contracting issue. Failure of the CVE Program "would have wide impacts on vulnerability management efficiency and effectiveness globally," said John Banghart, senior director for cybersecurity services at Venable and a key architect of the Obama administration's cybersecurity policy as a former director for federal cybersecurity for the National Security Council.


Securing vehicles as they become platforms for code and data

Recently security researchers have demonstrated real-world attacks against connected cars, such as wireless brake manipulation on heavy trucks by spoofing J-bus diagnostic packets. Another very recent example is successful attacks against autonomous car LIDAR systems. While the distribution of EV and advanced cars becomes more pervasive across our society, we expect these types of attacks and methods to continue to grow in complexity. Which makes a continuous, real-time approach to securing the entire ecosystem (from charger, to car, to driver) even more so important. ... Over-the-air (OTA) update hijacking is very real and often enabled by poor security design, such as lack of encryption, improper authentication between the car and backend, and lack of integrity or checksum validation. Attack vectors that the traditional computer industry has dealt with for years are now becoming a harsh reality in the automotive sector. Luckily, many of the same approaches used to mitigate these risks in IT can also apply here ... When we look at just the automobile, we have a variety of connected systems which typically all come from different manufacturers (Android Automotive, or QNX as examples) which increases the potential for supply chain abuse. We also have devices which the driver introduces which interacts with the car’s APIs creating new entry points for attackers.


Strategizing with AI: How leaders can upgrade strategic planning with multi-agent platforms

Building resiliency and optionality into a strategic plan challenges humans’ cognitive (and financial) bandwidth. The seemingly endless array of future scenarios, coupled with our own human biases, conspires to anchor our understanding of the future in what we’ve seen in the past. Generative AI (GenAI) can help overcome this common organizational tendency for entrenched thinking, and mitigate the challenges of being human, while exploiting LLMs’ creativity as well as their ability to mirror human behavioral patterns. ... In fact, our argument reflects our own experience using a multi-agent LLM simulation platform built by the BCG Henderson Institute. We’ve used this platform to mirror actual war games and scenario planning sessions we’ve led with clients in the past. As we’ve seen firsthand, what makes an LLM multi-agent simulation so powerful is the possibility of exploiting two unique features of GenAI—its anthropomorphism, or ability to mimic human behavior, and its stochasticity, or creativity. LLMs can role-play in remarkably human-like fashion: Research by Stanford and Google published earlier this year suggests that LLMs are able to simulate individual personalities closely enough to respond to certain types of surveys with 85% accuracy as the individuals themselves.


The Network Challenges of IoT Integration

IoT interoperability and compatible security protocols are a particular challenge. Although NIST and ISO, among other organizations, have issued IoT standards, smaller IoT manufacturers don't always have the resources to follow their guidance. This becomes a network problem because companies have to retool these IoT devices before they can be used on their enterprise networks. Moreover, because many IoT gadgets are delivered with default security settings that are easy to undo, each device has to be hand-configured to ensure it meets company security standards. To avoid potential interoperability pitfalls, network staff should evaluate prospective technology before anything is purchased. ... First, to achieve high QoS, every data pipeline on the network must be analyzed -- as well as every single system, application and network device. Once assessed, each component must be hand-calibrated to run at the highest performance levels possible. This is a detailed and specialized job. Most network staff don't have trained QoS technicians on board, so they must go externally for help. Second, which areas of the business get maximum QoS, and which don't? A medical clinic, for example, requires high QoS to support a telehealth application where doctors and patients communicate. 

Daily Tech Digest - July 12, 2025


Quote for the day:

"If you do what you’ve always done, you’ll get what you’ve always gotten." -- Tony Robbins


Why the Value of CVE Mitigation Outweighs the Costs

When it comes to CVEs and continuous monitoring, meeting compliance requirements can be daunting and confusing. Compliance isn’t just achieved; rather, it is a continuous maintenance process. Compliance frameworks might require additional standards, such as Federal Information Processing Standards (FIPS), Federal Risk and Authorization Management Program (FedRAMP), Security Technical Implementation Guides (STIGs) and more that add an extra layer of complexity and time spent. The findings are clear. Telecommunications and infrastructure companies reported an average of $3 million in new revenue annually by improving their container security enough to qualify for security-sensitive contracts. Healthcare organizations averaged $7.3 million in new revenue, often driven by unlocking expansion into compliance-heavy markets. ... The industry has long championed “shifting security left,” or embedding checks earlier in the pipeline to ensure security measures are incorporated throughout the entire software development life cycle. However, as CVE fatigue worsens, many teams are realizing they need to “start left.” That means: Using hardened, minimal container images by default; Automating CVE triage and patching through reproducible builds; Investing in secure-by-default infrastructure that makes vulnerability management invisible to most developers


Generative AI: A Self-Study Roadmap

Building generative AI applications requires comfort with Python programming and basic machine learning concepts, but you don't need deep expertise in neural network architecture or advanced mathematics. Most generative AI work happens at the application layer, using APIs and frameworks rather than implementing algorithms from scratch. ... Modern generative AI development centers around foundation models accessed through APIs. This API-first approach offers several advantages: you get access to cutting-edge capabilities without managing infrastructure, you can experiment with different models quickly, and you can focus on application logic rather than model implementation. ... Generative AI applications require different API design patterns than traditional web services. Streaming responses improve user experience for long-form generation, allowing users to see content as it's generated. Async processing handles variable generation times without blocking other operations. ... While foundation models provide impressive capabilities out of the box, some applications benefit from customization to specific domains or tasks. Consider fine-tuning when you have high-quality, domain-specific data that foundation models don't handle well—specialized technical writing, industry-specific terminology, or unique output formats requiring consistent structure.


Announcing GenAI Processors: Build powerful and flexible Gemini applications

At its core, GenAI Processors treat all input and output as asynchronous streams of ProcessorParts (i.e. two-way aka bidirectional streaming). Think of it as standardized data parts (e.g., a chunk of audio, a text transcription, an image frame) flowing through your pipeline along with associated metadata. This stream-based API allows for seamless chaining and composition of different operations, from low-level data manipulation to high-level model calls. ... We anticipate a growing need for proactive LLM applications where responsiveness is critical. Even for non-streaming use cases, processing data as soon as it is available can significantly reduce latency and time to first token (TTFT), which is essential for building a good user experience. While many LLM APIs prioritize synchronous, simplified interfaces, GenAI Processors – by leveraging native Python features – offer a way for writing responsive applications without making code more complex. ... GenAI Processors is currently in its early stages, and we believe it provides a solid foundation for tackling complex workflow and orchestration challenges in AI applications. While the Google GenAI SDK is available in multiple languages, GenAI Processors currently only support Python.


Scaling the 21st-century leadership factory

Identifying priority traits is critical; just as important, CEOs and their leadership teams must engage early and often with high-potential employees and unconventional thinkers in the organization, recognizing that innovation often comes from the edges of the business. Skip-level meetings are a powerful tool for this purpose. Most famously, Apple’s Steve Jobs would gather what he deemed the 100 most influential people at the company, including young engineers, to engage directly in strategy discussions—regardless of hierarchy or seniority. ... A culture of experimentation and learning is essential for leadership development—but it must be actively pursued. “Instillation of personal initiative, aggressiveness, and risk-taking doesn’t spring forward spontaneously,” General Jim Mattis explained in his 2019 book on leadership, Call Sign Chaos. “It must be cultivated for years and inculcated, even rewarded, in an organization’s culture. If the risk-takers are punished, then you will retain in your ranks only the risk averse,” he wrote. ... There are multiple ways to streamline decision-making, including redefining decision rights to focus on a handful of owners and distinguishing between different types of decisions, as not all choices are high stakes. 


Lessons learned from Siemens’ VMware licensing dispute

Siemens threatened to sue VMware if it didn’t provide ongoing support for the software and handed over a list of the software it was using that it wanted support for. Except that the list included software that it didn’t have any licenses for, perpetual or otherwise. Broadcom-owned VMware sued, Siemens countersued, and now the companies are battling over jurisdiction. Siemens wants the case to be heard in Germany, and VMware prefers the United States. Normally, if unlicensed copies of software are discovered during an audit, the customer pays the difference and maybe an additional penalty. After all, there are always minor mistakes. The vendors try to keep these costs at least somewhat reasonable, since at some point, customers will migrate from mission-critical software if the pain is high enough. ... For large companies, it can be hard to pivot quickly. Using open-source software can help reduce the risk of unexpected license changes, and, for many major tools there are third-party service providers that can offer ongoing support. Another option is SaaS software, Ringdahl says, because it does make license management a bit easier, since there’s usually transparency both for the customer and the vendor about how much usage the product is getting.


Microsoft says regulations and environmental issues are cramping its Euro expansion

One of the things that everyone needs to consider is how datacenter development in Europe is being enabled or impeded, Walsh said. "Because we have moratoriums coming at us. We have communities that don't want us there," she claimed, referring particularly to Ireland where local opposition to bit barns has been hardening because of the amount of electricity they consume and their environmental impact. Another area of discussion at the Datacloud keynote was the commercial models for acquiring datacenter capacity, which it was felt had become unfit for the new environment where large amounts are needed quickly. "From our perspective, time to market is essential. We've done a lot of leasing in the last two years, and that is all time for market pressure," Walsh said. "I also manage land acquisition and land development, which includes permitting. So the joy of doing that is that when my permits are late, I can lease so I can actually solve my own problems, which is amazing, but the way things are going, it's going to be very difficult to continue to lease the infrastructure using co-location style funding. It's just getting too big, and it's going to get harder and harder to get up the chain, for sure," she explained. ... "European regulations and planning are very slow, and things take 18 months longer than anywhere else," she told attendees at <>Bisnow's Datacenter Investment Conference and Expo (DICE) in Ireland.


350M Cars, 1B Devices Exposed to 1-Click Bluetooth RCE

The scope of affected systems is massive. The developer, OpenSynergy, proudly boasts on its homepage that Blue SDK — and RapidLaunch SDK, which is built on top of it and therefore also possibly vulnerable — has been shipped in 350 million cars. Those cars come from companies like Mercedes-Benz, Volkswagen, and Skoda, as well as a fourth known but unnamed company. Since Ford integrated Blue SDK into its Android-based in-vehicle infotainment (IVI) systems in November, Dark Reading has reached out to determine whether it too was exposed. ... Like any Bluetooth hack, the one major hurdle in actually exploiting these vulnerabilities is physical proximity. An attacker would likely have to position themselves within around 10 meters of a target device in order to pair with it, and the device would have to comply. Because Blue SDK is merely a framework, different devices might block pairing, limit the number of pairing requests an attacker could attempt, or at least require a click to accept a pairing. This is a point of contention between the researchers and Volkswagen. ... "Usually, in modern cars, an infotainment system can be turned on without activating the ignition. For example, in the Volkswagen ID.4 and Skoda Superb, it's not necessary," he says, though the case may vary vehicle to vehicle. 


Leaders will soon be managing AI agents – these are the skills they'll need, according to experts

An AI agent is essentially just "a piece of code", says Jarah Euston, CEO and Co-Founder of AI-powered labour platform WorkWhile, which connects frontline workers to shifts. "It may not have the same understanding, empathy, awareness of the politics of your organization, of the fears or concerns or ambitions of the people around that it is serving. "So managers have to be aware that the agent is only as good as how you've trained it. I don't think we're close yet to having agents that can operate without any human oversight. "As a manager, you want to leverage the AI to make you and your team more productive, but you constantly have to be checking, iterating and training your tools to get the most out of them."  ... Technological skills are expected to become increasingly vital over the next five years, outpacing the growth of all other skill categories. Leading the way are AI and big data, followed closely by networking, cybersecurity and overall technological literacy. The so-called 'soft skills' of creative thinking and resilience, flexibility and agility are also rising in importance, along with curiosity and lifelong learning. Empathy is one skill AI agents can't learn, says Women in Tech's Moore Aoki, and she believes this will advantage women.


Common Master Data Management (MDM) Pitfalls

In addition to failing to connect MDM’s value with business outcomes, “People start with MDM by jumping in with the technology,” Cooper said. “Then, they try to fit the people, processes, and master data into their selected technology.” Moreover, in the process of prioritizing technology first, organizations take for granted that they have good data quality, data that is clean and fit for purpose. Then, during a major initiative, such as migrating to a cloud environment, they discover their data is not so clean. ... Organizations fall into the pitfalls above and others because they try to do it alone, and most have never done MDM before. Instead, “Organizations have different capabilities with MDM,” said Cooper, “and you don’t know what you don’t know.” ... Connecting the MDM program to business objectives requires talking with the stakeholders across the organization, especially divisions with direct financial risks such as sales, marketing, procurement, and supply. Cooper said readers should learn the goals of each unit and how they measure success in growing revenue, reducing cost, mitigating risk, or operating more efficiently. ... Cooper advised focusing on data quality – e.g., through reference data – rather than technology. In the figure below, a company has data about a client, Emerson Electric, as shown on the left. 


Why Cloud Native Security Is More Complex Than You Think

Enterprise security tooling can help with more than just the monitoring of these vulnerabilities though. And, often older vulnerabilities that have been patched by the software vendor will offer “fix status” advice. This is where a specific package version is shown to the developer or analyst responsible for remediating the vulnerability. When they upgrade the current package to that later version, the vulnerability alert will be resolved. To confuse things further, the applications running in containers or serverless functions also need to be checked for non-compliance. Warnings that may be presented by security tooling when these applications are checked against recognised compliance standards, frameworks or benchmarks for noncompliance are wide and varied. For example, if a serverless function has overly permissive access to another cloud service and an attacker gets access to the serverless function’s code via a vulnerability, the attack’s blast radius could exponentially increase as a result. Or, often compliance checks reveal how containers are run with inappropriate network settings. ... At a high level, these components and importantly, how they interact with each other, is why applications running in the cloud require time, effort and specialist expertise to secure them.