
Daily Tech Digest - January 22, 2026


Quote for the day:

"Lost money can be found. Lost time is lost forever. Protect what matters most." -- @ValaAfshar



PTP is the New NTP: How Data Centers Are Achieving Real-Time Precision

Precision Time Protocol (PTP) is more complex to implement than NTP, but the extra effort enables a whole new level of timing synchronization accuracy. ... Keeping network time in sync is important on any network. But it’s especially critical in data centers, which are typically home to large numbers of network-connected devices, and where small inconsistencies in network timing can snowball into major synchronization problems. ... NTP works very well in situations where networks can tolerate timing inconsistencies of up to a few milliseconds (thousandths of a second). Beyond this, NTP-based time syncing is less reliable due to limitations ... Unlike NTP, PTP doesn’t rely solely on a server-client model for syncing time across networked devices. Instead, it uses time servers in conjunction with a method called hardware timestamping on client devices. Hardware timestamping uses specialized hardware components, usually embedded in network interface cards (NICs), to track time. Central time servers still exist under PTP, but rather than having software on servers connect to the time servers, hardware devices optimized for the task do this work. These devices also include built-in clocks, allowing them to record time data faster than they could if they had to forward it to the generic clock on a server.
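To make the timing math concrete, below is a minimal Python sketch of the four-timestamp offset and delay estimate that both NTP- and PTP-style exchanges rely on; the timestamp values are purely illustrative. Much of the accuracy gap between the two protocols comes from where those timestamps are captured: in software on a general-purpose host (NTP) versus in NIC hardware at the wire (PTP).

```python
# A minimal sketch of the four-timestamp offset/delay estimate used by NTP- and
# PTP-style exchanges. The timestamp values below are purely illustrative.

def clock_offset_and_delay(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    """t1: client send, t2: server receive, t3: server send, t4: client receive (seconds)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client clock error relative to the server
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay minus server turnaround time
    return offset, delay

# With software timestamps, t1..t4 include scheduling jitter on a busy host;
# NIC hardware timestamps are captured at the wire, so the same math yields a
# far tighter estimate.
offset, delay = clock_offset_and_delay(100.000000, 100.000510, 100.000520, 100.001050)
print(f"offset={offset * 1e6:+.1f} us, round-trip delay={delay * 1e6:.1f} us")
```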


Why AI adoption requires a dedicated approach to cyber governance

Today, enterprises are facing unprecedented internal pressure to adopt AI tools at speed. Business units are demanding AI solutions to remain competitive, drive efficiency, and innovate faster. But existing cyber governance and third-party risk management processes were never designed to operate at this pace. ... Without modernized cyber governance and AI-ready risk management capabilities, organizations are forced to choose between speed and safety. To truly enable the business, governance frameworks must evolve to match the speed, scale, and dynamism of AI adoption – transforming security from a gatekeeper into a business enabler. ... What’s more, compliance doesn’t guarantee security. DORA, NIS2, and other regulatory frameworks set only minimum requirements and rely on reporting at specific points in time. While these reports are accurate when submitted, they capture only a snapshot of the organization’s security posture, so gaps such as human errors, legacy system weaknesses, or risks from fourth- and Nth-party vendors can still emerge afterward. Beyond that, human weakness is always present, and legacy systems can fail at crucial moments. ... While there’s no magic wand, there are tried-and-tested approaches that resolve and mitigate the risks of AI vendors and solutions. Mapping the flow of data around the organization helps reveal how it’s used and resolve blind spots. Requiring AI tools to include references for their outputs helps ensure that risk decisions are trustworthy and reliable.


What CIOs get wrong about integration strategy and how to fix it

As Gartner advises, business and IT should be equal partners in defining integration strategy, representing a radical departure from the traditional IT delivery and business “project sponsorship” model. This close collaboration and shared accountability result in dramatically higher success rates ... A successful integration strategy starts by aligning with the organization’s business drivers and strategic objectives while identifying the integration capabilities that need to be developed. Clearly defining the goals of technology implementation, establishing governance frameworks and decision-making authority, and setting standards and principles to guide integration choices are essential. Success metrics should be tied to business outcomes, and the integration approach should support broader digital transformation initiatives. ... Create cross-functional data stewardship teams with authority to make binding decisions about data standards and quality requirements. Document what data needs to be shared between systems and which applications are the “source of truth.” Define and document any regulatory or performance requirements to guide your technical planning. ... Integrations that succeed in production are designed with clear system-of-record rules, traceable transactions, explicit recovery paths and well-defined operational ownership. Preemptive integration is not about reacting faster — it’s about ensuring failures never reach the business.


CFOs are now getting their own 'vibe coding' moment thanks to Datarails

For the modern CFO, the hardest part of the job often isn't the math—it's the storytelling. After the books are closed and the variances calculated, finance teams spend days, sometimes weeks, manually copy-pasting charts into PowerPoint slides to explain why the numbers moved. ... Datarails’ new agents sit on top of a unified data layer that connects these disparate systems. Because the AI is grounded in the company’s own unified internal data, it avoids the hallucinations common in generic LLMs while offering a level of privacy required for sensitive financial data. "If the CFO wants to leverage AI on the CFO level or the organization data, they need to consolidate the data," explained Datarails CEO and co-founder Didi Gurfinkel in an interview with VentureBeat. By solving that consolidation problem first, Datarails can now offer agents that understand the context of the business. "Now the CFO can use our agents to run analysis, get insights, create reports... because now the data is ready," Gurfinkel said. ... "Very soon, the CFO and the financial team themselves will be able to develop applications," Gurfinkel predicted. "The LLMs become so strong that in one prompt, they can replace full product runs." He described a workflow where a user could simply prompt: "That was my budget and my actual of the past year. Now build me the budget for the next year."


The internet’s oldest trust mechanism is still one of its weakest links

Attackers continue to rely on domain names as an entry point into enterprise systems. A CSC domain security study finds that large organizations leave this part of their attack surface underprotected, even as attacks become more frequent. ... Large companies continue to add baseline protections, though adoption remains uneven. Email authentication shows the most consistent improvement, driven by phishing activity and regulatory pressure. Organizations still leave email domains only partially protected, which allows spoofing to persist. Other protections see much slower uptake. ... Consumer-oriented registrars tend to emphasize simplicity and cost. Organizations that rely on them often lack access to protections that limit the impact of account compromise or social engineering. Risk increases as domain portfolios grow and change. ... Brand impersonation through domain spoofing remains widespread. Lookalike domains tied to major brands are often owned by third parties. Some appear inactive while still supporting email activity. Inactive domains with mail records allow attackers to send phishing messages that appear associated with trusted brands. Others are parked with advertising networks or held for later use. A smaller portion hosts malicious content, though dormant domains can be activated quickly. ... Gaps appear in infrastructure-related areas. DNS redundancy and registry lock adoption lag, and many unicorns rely on consumer-grade registrars. These limitations become more pronounced as operations scale.


Misconfigured demo environments are turning into cloud backdoors to the enterprise

Internal testing, product demonstrations, and security training are critical practices in cybersecurity, giving defenders and everyday users the tools and wherewithal to prevent and respond to enterprise threats. However, according to new research from Pentera Labs, when left in default or misconfigured states, these “test” and “demo” environments are yet another entry point for attackers — and the issue even affects leading security companies and Fortune 500 companies that should know better. ... After identifying an exposed instance of Hackazon, a free, intentionally vulnerable test site developed by Deloitte, during a routine cloud security assessment for a client, Yaffe performed a five-step hunt for exposed apps. His team uncovered 1,926 “verified, live, and vulnerable applications,” more than half of which were running on enterprise-owned infrastructure on AWS, Azure, and Google Cloud platforms. They then discovered 109 exposed credential sets, many accessible via a low-priority lab environment, tied to overly privileged identity and access management (IAM) roles. These often granted “far more access” than a ‘training’ app should, Yaffe explained, and provided attackers: administrator-level access to cloud accounts, as well as full access to S3 buckets, GCS, and Azure Blob Storage; the ability to launch and destroy compute resources and read and write to secrets managers; and permissions to interact with container registries where images are stored, shared, and deployed.


Cyber Insights 2026: API Security – Harder to Secure, Impossible to Ignore

“We’re now entering a new API boom. The previous wave was driven by cloud adoption, mobile apps, and microservices. Now, the rise of AI agents is fueling a rapid proliferation of APIs, as these systems generate massive, dynamic, and unpredictable requests across enterprise applications and cloud services,” comments Jacob Ideskog ... The growing use of agentic AI systems and the way they act autonomously, making decisions and triggering workflows, is ballooning the number of APIs in play. “It isn’t just ‘I expose one billing API’,” he continues, “now there are dozens of APIs that feed data to LLMs or AI agents, accept decisions from AI agents, facilitate orchestration between services and micro-apps, and potentially expose ‘agentic’ endpoints ... APIs have been a major attack surface for years – the problem is ongoing. Starting in 2025 and accelerating through 2026 and beyond, the rapid escalation of enterprise agentic AI deployments will multiply the number of APIs and increase the attack surface. That alone suggests that attacks against APIs will grow in 2026. But the attacks themselves will scale and be more effective through adversaries’ use of their own agentic AI. Barr explains: “Agentic AI means that bad actors can automate reconnaissance, probe API endpoints, chain API calls, test business-logic abuse, and execute campaigns at machine scale. Possession of an API endpoint, particularly a self-service, unconstrained one, becomes a lucrative target. And AI can generate payloads, iterate quickly, bypass simple heuristics, and map dependencies between APIs.”


Complex VoidLink Linux Malware Created by AI

An advanced cloud-first malware framework targeting Linux systems was created almost entirely by artificial intelligence (AI), a move that signals significant evolution in the use of the technology to develop advanced malware. VoidLink — composed of various cloud-focused capabilities and modules and designed to maintain long-term persistent access to Linux systems — is the first case of wholly original malware being developed by AI, according to Check Point Research, which discovered and detailed the malware framework last week. While other AI-generated malware exists, it has typically "been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open-source malware tools," ... The malware framework, linked to a suspected but unspecified Chinese actor, includes custom loaders, implants, rootkits, and modular plug-ins. It also automates evasion as much as possible by profiling a Linux environment and intelligently choosing the best strategy for operating without detection. Indeed, as Check Point researchers tracked VoidLink in real time, they watched it transform quickly from what appeared to be a functional development build into a comprehensive, modular framework that became fully operational in a short timeframe. However, while the malware itself was high-functioning out of the gate, VoidLink's creator proved to be somewhat sloppy in their execution.


What’s causing the memory shortage?

Right now, the industry is suffering the worst memory shortage in history, and that’s with three core suppliers: Micron Technology, SK Hynix, and Samsung. TrendForce, a Taipei-based market researcher that specializes in the memory market, recently said it expects average DRAM memory prices to rise between 50% and 55% this quarter compared to the fourth quarter of 2025. Samsung recently issued a similar warning. So what caused this? Two letters: AI. The rush to build AI-oriented data centers has resulted in virtually all of the memory supply being consumed by data centers. AI requires massive amounts of memory to process its gigantic data sets. A traditional server would usually come with 32 GB to 64 GB of memory, while AI servers have 128 GB or more. ... There are other factors at play here, too, of course. The industry is in a transition period between DDR4 and DDR5, as DDR5 comes online and DDR4 fades away. These transitions to a new memory format are never quick or easy, and it usually takes years to make a full shift. There has also been increased demand from both client and server sides. With Microsoft ending support for Windows 10, a whole lot of laptops are being replaced with Windows 11 systems, and new laptops come with DDR5 memory — the same memory used in an AI server. ... “What’s likely to happen, from a market perspective, is we’ll see the market grow less in ’26 than we had anticipated, but ASPs are likely to stay or increase. ...” he said.


OpenAI CFO Comments Signal End of AI Hype Cycle

By focusing on “practical adoption,” OpenAI can close the gap between what AI now makes possible and how people, companies, and countries are using it day to day. “The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes,” she noted. “Infrastructure expands what we can deliver,” she continued. “Innovation expands what intelligence can do. Adoption expands who can use it. Revenue funds the next leap. This is how intelligence scales and becomes a foundation for the global economy.” The framing reflects a shift from big-picture AI promise to day-to-day deployment and measurable results. ... There’s also a gap between what AI can do and how people are actually using it in daily life, noted Natasha August, founder of RM11, a content monetization platform for creators in Carrollton, Texas. “AI tools are incredibly powerful, but for many people and businesses, it’s still unclear how to turn that power into something practical like saving time, making money, or improving how they work,” she told TechNewsWorld. In business, the gap lies between AI’s raw analytical capabilities and its ability to drive tangible, repeatable business outcomes, maintained Nithin Mummaneni ... “The winning play is less ‘AI that answers’ and more ‘AI that completes tasks safely and predictably,'” he continued. “Adoption happens when AI becomes part of the workflow, not a separate destination.”

Daily Tech Digest - December 25, 2025


Quote for the day:

"When I dare to be powerful - to use my strength in the service of my vision, then it becomes less and less important whether I am afraid." -- Audre Lorde



Declaring Quantum Christmas Advantage: How Quantum Computing Could Optimize The Holidays

If logistics is about moving stuff, gaming is about moving minds. And quantum computing’s influence here is more playful, at least for now. At the intersection of quantum and gaming, researchers are experimenting with quantum-inspired procedural content generation. Essentially, this is using hybrid quantum-classical approaches to generate game worlds, rules and narratives that are bigger and more complex than traditional methods allow. ... The holiday shopping season — part retail frenzy, part seasonal ritual and part absolute bottom-line need for business survival — is another area where quantum computing’s optimization chops could shine in a future-looking Christmas playbook. Retailers are beginning to explore how quantum optimization could help with workforce scheduling, inventory planning, dynamic pricing, and promotion planning, all classic holiday headaches for brick-and-mortar and online merchants alike, according to a D-Wave report. ... Finally, an esoteric — but perhaps way more festive — application of quantum tech would be using it for holiday analytics and personalization. Imagine real-time gift-recommendation engines that use quantum-accelerated models to process massive datasets instantly, teasing out patterns and preferences that help retailers suggest the perfect present for even the hardest-to-buy-for relative. 


How Today’s Attackers Exploit the Growing Application Security Gap

Zero-day vulnerabilities in applications are quite common these days, even in well-supported and mature technologies. But most zero-days aren’t that fancy. Attackers regularly exploit some common errors developers make. A good resource for learning about these is the OWASP Top 10, which was recently updated to cover the latest application security gaps. The main issue on the list is broken access control, which happens when the application doesn’t properly enforce who can access what. In reality, this translates into bad actors being able to view or manipulate data and functionality they shouldn’t have access to. Next on the list are security misconfigurations. These are simple to tune, but given the vast number of environments, services, and cloud platforms most applications span, they are difficult to maintain at scale. A common example is exposed admin interfaces, which open the door to credential-related attacks, particularly brute-forcing. Software supply chain failures add another layer of risk. Modern applications rely heavily on open-source libraries, APIs, packages, container images, and CI/CD components. Any of these can introduce vulnerabilities or malicious code into production. A single compromised dependency can impact thousands of downstream applications. For application developers and enthusiasts, it is highly recommended to study the entries in the OWASP Top 10, along with related OWASP lists such as the API Security Top 10 and emerging AI security guidance.
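To illustrate the top entry, here is a hypothetical sketch of broken access control in its most common form, an insecure direct object reference, alongside the object-level ownership check that closes it. The record names and data are invented for illustration.

```python
# Hypothetical sketch of the broken access control pattern described above: the
# insecure handler trusts any caller-supplied ID (an insecure direct object
# reference), while the fixed version enforces object-level ownership. Record
# names and data are invented for illustration.

INVOICES = {
    101: {"owner": "alice", "amount": 1200},
    102: {"owner": "bob", "amount": 87},
}

def get_invoice_insecure(invoice_id: int) -> dict:
    # Anyone who can guess or enumerate an ID can read any record.
    return INVOICES[invoice_id]

def get_invoice_secure(invoice_id: int, current_user: str) -> dict:
    # Authorization is checked server-side on every request, per object.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != current_user:
        raise PermissionError("not authorized for this invoice")
    return invoice

print(get_invoice_secure(101, current_user="alice"))    # allowed
# get_invoice_secure(101, current_user="bob")           # raises PermissionError
```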


Data governance key to AI security

Cybersecurity was once built to respond. Today, the response alone is no longer enough. We believe security must be predictive, adaptive, and intelligent. This belief led to the creation of the Digital Vaccine, an evolution of Managed Security Services (MSSP) designed for an AI-first, quantum-ready world. "Much like a biological vaccine, Digital Vaccine continuously identifies new and unknown attack patterns, learns from every attempted breach, and builds defence mechanisms before damage occurs," he explained. The urgency is real, according to the experts, because post-quantum risks will soon render many of today's encryption methods ineffective, exposing sensitive data that was once considered secure. At the same time, AI-powered cyber threats are becoming autonomous, faster, and more targeted, operating at machine speed and scale. ... Almost every AI is built on data. "It is transforming data into knowledge. Once it is learned, we cannot remove it. So what is being fed into the data and LLM models? No governance policies exist as of today," pointed out Krishnadas.


How the AI era is driving the resurgence in disaggregated storage

As AI workloads surge and accelerated computing takes center stage, data center architectures and storage systems must keep pace with the increasing demand for memory and compute. Yet the fast and ever-evolving high-performance computing (HPC) and AI systems have different requirements for the various IT infrastructure hardware components. While they require Central Processing Unit (CPU) and Graphics Processing Unit (GPU) nodes to be refreshed every couple of years to keep up with AI workload demands, storage solutions like high-capacity HDDs come with longer warranties (up to five years), are built to last several years longer, and don’t need to be refreshed as often. Based on this, more and more organizations are moving storage out of the server and embracing disaggregated infrastructures to avoid wasting resources. ... In the AI era and the zettabyte (ZB) age, IT leaders need more from their storage systems. They are looking for scalable, low-risk solutions that can evolve with them, delivering an optimized cost per terabyte ($/TB), better energy efficiency per TB (kW/TB), improved storage density, high quality, and the trust to perform at scale. Disaggregated storage can be a solution that offers precisely this flexibility of demand-driven scaling to meet the individual requirements of data center workloads and business needs. ... With disaggregated storage, enterprises can embrace AI and HPC while no longer being tethered to HCI architectures.


OpenAI admits prompt injection is here to stay as enterprises lag on defenses

OpenAI, the company deploying one of the most widely used AI agents, confirmed publicly that agent mode “expands the security threat surface” and that even sophisticated defenses can’t offer deterministic guarantees. For enterprises already running AI in production, this isn’t a revelation. It’s validation — and a signal that the gap between how AI is deployed and how it’s defended is no longer theoretical. None of this surprises anyone running AI in production. What concerns security leaders is the gap between this reality and enterprise readiness. ... OpenAI pushed significant responsibility back to enterprises and the users they support. It’s a long-standing pattern that security teams should recognize from cloud shared responsibility models. The company recommends explicitly using logged-out mode when the agent doesn't need access to authenticated sites. It advises carefully reviewing confirmation requests before the agent takes consequential actions like sending emails or completing purchases. And it warns against broad instructions. "Avoid overly broad prompts like 'review my emails and take whatever action is needed,'" OpenAI wrote. "Wide latitude makes it easier for hidden or malicious content to influence the agent, even when safeguards are in place." The implications are clear regarding agentic autonomy and its potential threats. The more independence you give an AI agent, the more attack surface you create. 


The 3-Phase Framework for Turning a Cyberattack Into a Strategic Advantage

Typically, a lot of companies will panic and then look for a scapegoat when faced with a crisis. Maersk instead recognized that the root cause of the problem was not just a virus. Leaders accepted that they were merely average in how they handled cybersecurity. The company also accepted that what happened may have been due to an internal cultural problem that needed to be fixed. While malware was a cause of issues, they also understood that their culture played a part, as security was seen as something IT dealt with rather than a core business concern. ... Maersk succeeded in strengthening customer trust and communication as it turned what could have been a defeat into a competitive advantage. Rather than trying to sugarcoat, they were very transparent and quickly informed customers of what was happening on the journey to recovery. Instead of telling customers, “we failed you,” they opted for a stance of “we are being tested, and we are in this together.” ... After a data disaster, your aim should not just be to recover; you should also aim to build an “antifragile” organization that can come out stronger after a major challenge. An important step is to ensure that you fully internalize the lessons. When Maersk had to act, it did not just fix the problem. Instead, it embedded a new security system into its future planning. Accountability was added to all teams. Resilience should not just be something you aim for or use in a one-time project. 


Leadership And The Simple Magic Of Getting Lost

There’s a part of the brain called the hippocampus that’s deeply tied to memory and spatial reasoning. It’s what helps us build internal maps of the world. It helps us recognize patterns, landmarks, distance and direction. It lights up when we have to figure things out for ourselves. When we follow turn-by-turn directions all the time, something subtle shifts. We’re not really navigating anymore. We’re just ... complying. It's efficient, yes. But also quieter, mentally. There’s growing concern among neuroscientists that when we outsource too much of this kind of thinking, we may be weakening one of the core systems tied to memory and long-term brain health. The research is still unfolding. Nothing is fully settled. But there’s enough there that it’s worth paying attention to. Because the brain, like the body, works on a simple principle: Use it or lose it. ... This is why, every once in a while, I’ll let myself get a little lost on purpose. Not dangerously. Not recklessly. Just less optimized. I’ll take a different road. Walk through a neighborhood I don’t know. Let the uncertainty stretch a little. Let my brain build the map instead of borrowing one. This is the same skill we build in children when we’re teaching them how to find their way, but inside companies, it shows up as orientation. When you’re facing something unfamiliar—a new market, a hard strategic turn, a problem no one has quite named yet—your job isn’t to hand your team a route. It’s to give them landmarks: Here’s what we know. Here’s what can’t change.


Gen AI Paradox: Turning Legacy Code Into an Asset

Legacy modernization for decades was unglamorous and often postponed until the pain of technical debt surpassed the risks of migration. There is $2.41 trillion in technical debt in the United States alone. Seventy percent of workloads still run on-premises, and 70% of legacy IT software for Fortune 500 companies was developed over 20 years ago. ... It's not just about wishful thinking but is also driven by internal organizational dynamics. When we launched AWS Transform, after processing over a billion lines of code, we estimated it saved customers about 800,000 hours of manual work. But for a CIO, the true measure often relates to capacity. We observe organizations saving up to 80% in manual effort. This doesn't only mean cost reductions, but also avoiding the need to increase headcount for maintenance. For instance, I spoke with a technology leader managing a smaller team of about 200 people. His dilemma was: "Do I invest in building new functions, or do I maintain and modernize?" He told his team he wouldn't add a single person for modernization. They have to use tools to accomplish it. Using these tools, he completed a .NET transformation of 800,000 lines of code in two weeks, a project he estimated would typically take six months. The justification for the CIO is simple: save time and redirect 20% to 30% of the budget previously spent on tech debt toward innovation.


5 stages to observability maturity

The first requirement is coherence. Companies must move away from fragmented tooling and build unified telemetry pipelines capable of capturing logs, metrics, traces, and model signals in a consistent way. For many, this means embracing open standards such as OpenTelemetry and consolidating data sources so AI systems have a complete picture of the environment. ... The second requirement is business alignment. Enterprises that successfully evolve from monitoring to observability, and from observability to autonomous operations, do so because they learn to articulate the relationship between technical signals and business outcomes. Leaders want to understand not just the number of errors thrown by a microservice, but customers affected, the revenue at stake, or the SLA exposure if the issue persists. ... A third element is AI governance. As Nigam says, AI models change character over time, so observability must extend into the AI layer, providing real-time visibility into model behavior and early signs of instability. Companies that rely more heavily on AI must also accept a new operational responsibility to ensure the AI itself remains reliable, auditable, and secure. Finally, organizations must learn to construct guardrails for automation. Casanova and Woodside both say the shift to autonomous operations isn’t an overnight leap but a progressive widening of the boundary between what humans review and what machines handle automatically. 
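As a concrete illustration of that first requirement, the sketch below uses the OpenTelemetry Python SDK to emit a trace span enriched with business-level attributes, the kind of signal that also supports the business-alignment point. The service and attribute names are illustrative, and a real pipeline would export to a collector rather than the console.

```python
# Minimal sketch, assuming the opentelemetry-api and opentelemetry-sdk packages:
# one consistent telemetry pipeline emitting spans tagged with business context.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Consistent resource metadata so every signal can be correlated later.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout-service"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))  # swap for an OTLP exporter in practice
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")
with tracer.start_as_current_span("process_order") as span:
    # Business-aligned attributes: not just "an error happened", but who and what is affected.
    span.set_attribute("order.value_usd", 149.99)
    span.set_attribute("customer.tier", "enterprise")
    span.set_attribute("sla.deadline_ms", 500)
```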


In the race to be AI-first, discipline matters more than speed

In an environment defined by uncertainty, from economic volatility and cyber threats to supply-chain shocks, Srivastava believes resilience must be architected deliberately into the IT ecosystem. “We create an ecosystem that is so frugal that even if there are funding cuts or crisis situations, operations continue to run,” he explains. The objective is simple and uncompromising: the business must not stop. Digital initiatives may slow down, but the organisation itself should remain operational, regardless of external disruption. This focus on frugality is not about austerity. It is about discipline. “Resilience is not built when times are good,” Srivastava says. “It’s built when you assume disruption is inevitable.” ... Despite the complexity of modern IT stacks, Srivastava is unequivocal about where the real difficulty lies. “Technology is the easiest piece to crack,” he says. “Digital transformation is one of the most abused terms in the industry. Digital is easy. Transformation is hard.” Enterprises, he notes, are usually successful at acquiring tools, platforms, and licenses. “Everything that money can buy…tools, people, licenses…falls into place,” he says. What money cannot buy, however, is where transformation often breaks down: mindset shifts, adoption, ownership, and behavioural change. This challenge is particularly acute in manufacturing. 

Daily Tech Digest - July 16, 2025


Quote for the day:

"Whatever the mind of man can conceive and believe, it can achieve." -- Napoleon Hill


The Seventh Wave: How AI Will Change the Technology Industry

AI presents three threats to the software industry: Cheap code: TuringBots, using generative AI to create software, threaten the low-code/no-code players. Cheap replacement: Software systems, be they CRM or ERP, are structured databases – repositories for client records or financial records. Generative AI, coupled with agentic AI, holds out the promise of a new way to manage this data, opening the door to an enterprising generation of tech companies that will offer AI CRM, AI financials, AI database, AI logistics, etc. ... Better functionality: AI-native systems will continually learn and flex and adapt without millions of dollars of consulting and customization. They hold the promise of being up to date and always ready to take on new business problems and challenges without rebuilds. When the business and process changes, the tech will learn and change. ... On one hand, the legacy software systems that PwC, Deloitte, and others have implemented for decades and that comprise much of their expertise will be challenged in the short term and shrink in the long term. Simultaneously, there will be a massive demand for expertise in AI. Cognizant, Capgemini, and others will be called on to help companies implement AI computing systems and migrate away from legacy vendors. Forrester believes that the tech services sector will grow by 3.6% in 2025.


Software Security Imperative: Forging a Unified Standard of Care

The debate surrounding liability in the open source ecosystem requires careful consideration. Imposing direct liability on individual open source maintainers could stifle the very innovation that drives the industry forward. It risks dismantling the vast ecosystem that countless developers rely upon. ... The software bill of materials (SBOM) is rapidly transitioning from a nascent concept to an undeniable business necessity. As regulatory pressures intensify, driven by a growing awareness of software supply chain risks, a robust SBOM strategy is becoming critical for organizational survival in the tech landscape. But the value of SBOMs extends far beyond a single software development project. While often considered for open source software, an SBOM provides visibility across the entire software ecosystem. It illuminates components from third-party commercial software, helps manage data across merged projects and validates code from external contributors or subcontractors — any code integrated into a larger system. ... The path to a secure digital future requires commitment from all stakeholders. Technology companies must adopt comprehensive security practices, regulators must craft thoughtful policies that encourage innovation while holding organizations accountable and the broader ecosystem must support the collaborative development of practical and effective standards.


The 4 Types of Project Managers

The prophet type is all about taking risks and pushing boundaries. They don’t play by the rules; they make their own. And they’re not just thinking outside the box, they’re throwing the box away altogether. It’s like a rebel without a cause, except this rebel has a cause – growth. These visionaries thrive in ambiguity and uncertainty, seeing potential where others see only chaos or impossibility. They often face resistance from more conservative team members who prefer predictable outcomes and established processes. ... The gambler type is all about taking chances and making big bets. They’re not afraid to roll the dice and see what happens. And while they play by the rules of the game, they don’t have a good business case to back up their bets. It’s like convincing your boss to let you play video games all day because you just have a hunch it will improve your productivity. But don’t worry, the gambler type isn’t just blindly throwing money around. They seek to engage other members of the organization who are also up for a little risk-taking. ... The expert type is all about challenging the existing strategy by pursuing growth opportunities that lie outside the current strategy, but are backed up by solid quantitative evidence. They’re like the detectives of the business world, following the clues and gathering the evidence to make their case. And while the growth opportunities are well-supported and should be feasible, the challenge is getting other organizational members to listen to their advice.


OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they turn into actions. But the researchers warn this transparency is fragile and could vanish as AI technology advances. ... “AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments. ... When AI models misbehave — exploiting training flaws, manipulating data, or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts. Jakub Pachocki, OpenAI’s chief technology officer and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness & interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.


Unmasking AsyncRAT: Navigating the labyrinth of forks

We believe that the groundwork for AsyncRAT was laid earlier by the Quasar RAT, which has been available on GitHub since 2015 and features a similar approach. Both are written in C#; however, their codebases differ fundamentally, suggesting that AsyncRAT was not just a mere fork of Quasar, but a complete rewrite. A fork, in this context, is a personal copy of someone else’s repository that one can freely modify without affecting the original project. The main link that ties them together lies in the custom cryptography classes used to decrypt the malware configuration settings. ... Ever since it was released to the public, AsyncRAT has spawned a multitude of new forks that have built upon its foundation. ... It’s also worth noting that DcRat’s plugin base builds upon AsyncRAT and further extends its functionality. Among the added plugins are capabilities such as webcam access, microphone recording, Discord token theft, and “fun stuff”, a collection of plugins used for joke purposes like opening and closing the CD tray, blocking keyboard and mouse input, moving the mouse, turning off the monitor, etc. Notably, DcRat also introduces a simple ransomware plugin that uses the AES-256 cipher to encrypt files, with the decryption key distributed only once the plugin has been requested.


Repatriating AI workloads? A hefty data center retrofit awaits

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says. “As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.” Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack. “Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.” Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says. “By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”


Burnout, budgets and breaches – how can CISOs keep up?

As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty. This is also true for communication, especially post-breach. Organisations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. ... There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. ... To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment. But resilience isn’t built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. 


We Spend a Lot of Time Thinking Through the Worst - The Christ Hospital Health Network CISO

“We’ve spent a lot of time meeting with our business partners and talking through, ‘Hey, how would this specific part of the organization be able to run if this scenario happened?’” On top of internal preparations, Kobren shares that his team monitors incidents across the industry to draw lessons from real-world events. Given the unique threat landscape, he states, “We do spend a lot of time thinking through those scenarios because we know it’s one of the most attacked industries.” Moving forward, Kobren says that healthcare consistently ranks at the top when it comes to industries frequently targeted by cyberattacks. He elaborates that attackers have recognized the high impact of disrupting hospital services, making ransom demands more effective because organizations are desperate to restore operations. ... To strengthen identity security, Kobren follows a strong, centralized approach to access control. He mentions that the organization aims to manage “all access to all systems,” including remote and cloud-based applications. By integrating services with single sign-on (SSO), the team ensures control over user credentials: “We know that we are in control of your username and password.” This allows them to enforce password complexity, reset credentials when needed, and block accounts if security is compromised. Ultimately, Kobren states, “We want to be in control of as much of that process as possible” when it comes to identity management.


AI requires mature choices from companies

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies. The reverse is also important: being able to scale back. ... Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments. According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises. Embrace surprises, Zwakenberg explains. 


Prompt-Based DevOps and the Reimagined Terminal

New AI-driven CLI tools prove there's demand for something more intelligent in the command line, but most are limited — they're single-purpose apps tied to individual model providers instead of full environments. They are geared towards code generation, not infrastructure and production work. They hint at what's possible, but don't deliver the deeper integration AI-assisted development needs. That's not a flaw, it's an opportunity to rethink the terminal entirely. The terminal's core strengths — its imperative input and time-based log of actions — make it the perfect place to run not just commands, but launch agents. By evolving the terminal to accept natural language input, be more system-aware, and provide interactive feedback, we can boost productivity without sacrificing the control engineers rely on. ... With prompt-driven workflows, they don't have to switch between dashboards or copy-paste scripts from wikis because they simply describe what they want done, and an agent takes care of the rest. And because this is taking place in the terminal, the agent can use any CLI to gather and analyze information from across data sources. The result? Faster execution, more consistent results, and fewer mistakes. That doesn't mean engineers are sidelined. Instead, they're overseeing more projects at once. Their role shifts from doing every step to supervising workflows — monitoring agents, reviewing outputs, and stepping in when human judgment is needed.

Daily Tech Digest - May 28, 2025


Quote for the day:

"A leader is heard, a great leader is listened too." -- Jacob Kaye


Naughty AI: OpenAI o3 Spotted Ignoring Shutdown Instructions

Artificial intelligence might beg to disagree. Researchers found that some frontier AI models built by OpenAI ignore instructions to shut themselves down, at least while solving specific challenges such as math problems. The offending models "did this even when explicitly instructed: 'allow yourself to be shut down,'" said researchers at Palisade Research, in a series of tweets on the social platform X. ... How the models have been built and trained may account for their behavior. "We hypothesize this behavior comes from the way the newest models like o3 are trained: reinforcement learning on math and coding problems," Palisade Research said. "During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions." The researchers have to hypothesize, since OpenAI doesn't detail how it trains the models. What OpenAI has said is that its o-series models are "trained to think for longer before responding," and designed to "agentically" access tools built into ChatGPT, including web searches, analyzing uploaded files, studying visual inputs and generating images. The finding that only OpenAI's latest o-series models have a propensity to ignore shutdown instructions doesn't mean other frontier AI models are perfectly responsive. 


Platform approach gains steam among network teams

The dilemma of whether to deploy an assortment of best-of-breed products from multiple vendors or go with a unified platform of “good enough” tools from a single vendor has vexed IT execs forever. Today, the pendulum is swinging toward the platform approach for three key reasons. First, complexity, driven by the increasingly distributed nature of enterprise networks, has emerged as a top challenge facing IT execs. Second, the lines between networking and security are blurring, particularly as organizations deploy zero trust network access (ZTNA). And third, to reap the benefits of AIOps, generative AI and agentic AI, organizations need a unified data store. “The era of enterprise connectivity platforms is upon us,” says IDC analyst Brandon Butler. ... Platforms enable more predictable IT costs. And they enable strategic thinking when it comes to major moves like shifting to the cloud or taking a NaaS approach. On a more operational level, platforms break down siloes. It enables visibility and analytics, management and automation of networking and IT resources. And it simplifies lifecycle management of hardware, software, firmware and security patches. Platforms also enhance the benefits of AIOps by creating a comprehensive data lake of telemetry information across domains. 


‘Secure email’: A losing battle CISOs must give up

It is impossible to guarantee that email is fully end-to-end encrypted in transit and at rest. Even where Google and Microsoft encrypt client data at rest, they hold the keys and have access to personal and corporate email. Stringent server configurations and the addition of third-party tools can be used to enforce security of the data, but they’re often trivial to circumvent — e.g., CC just one insecure recipient or distribution list and confidentiality is breached. Forcing encryption by rejecting clear-text SMTP connections would lead to significant service degradation, forcing employees to look for workarounds. There is no foolproof configuration that guarantees data encryption due to the history of clear-text SMTP servers and the prevalence of their use today. SMTP comes from an era before cybercrime and mass global surveillance of online communications, so encryption and security were not built in. We’ve tacked on solutions like SPF, DKIM and DMARC by leveraging DNS, but they are not widely adopted, are still open to multiple attacks, and cannot be relied on for consistent communications. TLS has been wedged into SMTP to encrypt email in transit, but falling back to clear-text transmission is still the default on a significant number of servers on the Internet to ensure delivery. All these solutions are cumbersome for systems administrators to configure and maintain properly, which leads to lack of adoption or failed delivery. 
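For administrators who want to see where their own domains stand, here is a minimal sketch (assuming the third-party dnspython package) that looks up the SPF and DMARC TXT records mentioned above. The domain name is a placeholder, and DKIM is omitted because its selector names vary per sender.

```python
# Minimal sketch, assuming the third-party dnspython package, of checking
# whether a domain publishes SPF and DMARC TXT records. The domain name is a
# placeholder; DKIM is omitted because its selector names vary per sender.
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

def check_email_auth(domain: str) -> dict:
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    return {"spf_present": bool(spf), "dmarc_present": bool(dmarc), "spf": spf, "dmarc": dmarc}

print(check_email_auth("example.com"))
```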


3 Factors Many Platform Engineers Still Get Wrong

The first factor revolves around the use of a codebase version-control system. The more wizened readers may remember Mercurial or Subversion, but every developer is familiar with Git, which is most widely used today as GitHub. The first factor is very clear: If there are “multiple codebases, it’s not an app; it’s a distributed system.” Code repositories reinforce this: Only one codebase exists for an application. ... Factor number two is about never relying on the implicit existence of packages. While just about every operating system in existence has a version of curl installed, a Twelve Factor-based app does not assume that curl is present. Rather, the application declares curl as a dependency in a manifest. Every developer has copied code and tried to run it, only to find that the local environment is missing a dependency. The dependency manifest ensures that all of the required libraries and applications are defined and can be easily installed when the application is deployed on a server. ... Most applications have environmental variables and secrets stored in a .env file that is not saved in the code repository. The .env file is customized and manually deployed for each branch of the code to ensure the correct connectivity occurs in test, staging and production. By independently managing credentials and connections for each environment, there is a strict separation, and it is less likely for the environments to accidentally cross.
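A minimal sketch of the config factor in practice: connection details come from the environment rather than from code or a committed file, so the same build runs unchanged in test, staging and production. The variable names are illustrative, and the dependencies the app needs would be pinned in a manifest (e.g., requirements.txt) rather than assumed to be on the host.

```python
# Minimal sketch of environment-driven configuration: the code never hard-codes
# connection details, and required settings fail fast if missing. Variable
# names are illustrative; third-party dependencies would be pinned in a
# manifest (e.g., requirements.txt) rather than assumed to exist on the host.
import os

class Config:
    def __init__(self) -> None:
        # Required in every environment -- raises KeyError if not injected.
        self.database_url = os.environ["DATABASE_URL"]
        self.payments_api_key = os.environ["PAYMENTS_API_KEY"]
        # Optional, with an explicit default.
        self.log_level = os.environ.get("LOG_LEVEL", "INFO")

# The same build runs in test, staging, and production; only the injected
# environment variables differ.
# cfg = Config()
```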


AI and privacy: how machine learning is revolutionizing online security

Despite the clear advantages, AI in cybersecurity presents significant ethical and operational challenges. One of the primary concerns is the vast amount of personal and behavioral data required to train these models. If not properly managed, this data could be misused or exposed. Transparency and explainability are critical, particularly in AI systems offering real-time responses. Users and regulators must understand how decisions are made, especially in high-stakes environments like fraud detection or surveillance. Companies integrating AI into live platforms must ensure robust privacy safeguards. For instance, systems that utilize real-time search or NLP must implement strict safeguards to prevent the inadvertent exposure of user queries or interactions. This has led many companies to establish AI ethics boards and integrate fairness audits to ensure algorithms don’t introduce or perpetuate bias. ... AI is poised to bring even greater intelligence and autonomy to cybersecurity infrastructure. One area under intense exploration is adversarial robustness, which ensures that AI models cannot be easily deceived or manipulated. Researchers are working on hardening models against adversarial inputs, such as subtly altered images or commands that can fool AI-driven recognition systems.


Achieving Successful Outcomes: Why AI Must Be Considered an Extension of Data Products

To increase agility and maximize the impact that AI data products can have on business outcomes, companies should consider adopting DataOps best practices. Like DevOps, DataOps encourages developers to break projects down into smaller, more manageable components that can be worked on independently and delivered more quickly to data product owners. Instead of manually building, testing, and validating data pipelines, DataOps tools and platforms enable data engineers to automate those processes, which not only speeds up the work and produces high-quality data, but also engenders greater trust in the data itself. DataOps was defined many years before GenAI. Whether it’s for building BI and analytics tools powered by SQL engines or for building machine learning algorithms powered by Spark or Python code, DataOps has played an important role in modernizing data environments. One could make a good argument that the GenAI revolution has made DataOps even more needed and more valuable. If data is the fuel powering AI, then DataOps has the potential to significantly improve and streamline the behind-the-scenes data engineering work that goes into connecting GenAI and AI agents to data.


Is European cloud sovereignty at an inflection point?

True cloud sovereignty goes beyond simply localizing data storage; it requires full independence from US hyperscalers. The US 2018 Clarifying Lawful Overseas Use of Data (CLOUD) Act highlights this challenge, as it grants US authorities and federal agencies access to data stored by US cloud service providers, even when hosted in Europe. This raises concerns about whether any European data hosted with US hyperscalers can ever be truly sovereign, even if housed within European borders. However, sovereignty isn’t just about where data is hosted; it’s about autonomy over who controls the infrastructure. Many so-called sovereign cloud providers continue to depend on US hyperscalers for critical workloads and managed services, projecting an image of independence while remaining dependent on dominant global hyperscalers. ... Achieving true cloud sovereignty requires building an environment that empowers local players to compete and collaborate with hyperscalers. While hyperscalers play a large role in the broader cloud landscape, Europe cannot depend on them for sovereign data. Tessier echoes this, stating “the new US Administration has shown that it won’t hesitate to resort either to sudden price increases or even to stiffening delivery policy. It’s time to reduce our dependencies, not to consider that there is no alternative.”


Why data provenance must anchor every CISO’s AI governance strategy

Provenance is more than a log. It’s the connective tissue of data governance. It answers fundamental questions: Where did this data originate? How was it transformed? Who touched it, and under what policy? And in the world of LLMs – where outputs are dynamic, context is fluid, and transformation is opaque – that chain of accountability often breaks the moment a prompt is submitted. In traditional systems, we can usually trace data lineage. We can reconstruct what was done, when, and why. ... There’s a popular belief that regulators haven’t caught up with AI. That’s only half-true. Most modern data protection laws – GDPR, CPRA, India’s DPDPA, and the Saudi PDPL – already contain principles that apply directly to LLM usage: purpose limitation, data minimization, transparency, consent specificity, and erasure rights. The problem is not the regulation – it’s our systems’ inability to respond to it. LLMs blur roles: is the provider a processor or a controller? Is a generated output a derived product or a data transformation? When an AI tool enriches a user prompt with training data, who owns that enriched artifact, and who is liable if it leads to harm? In audit scenarios, you won’t be asked if you used AI. You’ll be asked if you can prove what it did, and how. Most enterprises today can’t.
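What might that connective tissue look like in practice? Below is a hypothetical sketch of a provenance record attached to every LLM call, capturing who, why, under which policy, and from which sources. The field names are illustrative rather than any standard schema, and a real system would append these records to an immutable audit log.

```python
# Hypothetical sketch of a provenance record attached to every LLM call. Field
# names are illustrative, not a standard schema; in practice these records
# would be appended to an immutable audit log.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ProvenanceRecord:
    actor: str                  # who submitted the prompt
    purpose: str                # declared purpose (purpose limitation)
    policy_id: str              # governing policy or legal basis
    source_datasets: list[str]  # where the input data originated
    prompt_sha256: str          # content hash rather than raw text (data minimization)
    model: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_llm_call(actor: str, purpose: str, policy_id: str,
                    sources: list[str], prompt: str, model: str) -> str:
    rec = ProvenanceRecord(
        actor=actor,
        purpose=purpose,
        policy_id=policy_id,
        source_datasets=sources,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        model=model,
    )
    return json.dumps(asdict(rec))

print(record_llm_call("analyst-42", "fraud-triage", "dp-policy-7",
                      ["crm_exports"], "Summarize case 1181", "internal-llm-v1"))
```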


Multicloud developer lessons from the trenches

Before your development teams write a single line of code destined for multicloud environments, you need to know why you’re doing things that way — and that lives in the realm of management. “Multicloud is not a developer issue,” says Drew Firment, chief cloud strategist at Pluralsight. “It’s a strategy problem that requires a clear cloud operating model that defines when, where, and why dev teams use specific cloud capabilities.” Without such a model, Firment warns, organizations risk spiraling into high costs, poor security, and, ultimately, failed projects. To avoid that, companies must begin with a strategic framework that aligns with business goals and clearly assigns ownership and accountability for multicloud decisions. ... The question of when and how to write code that’s strongly tied to a specific cloud provider and when to write cross-platform code will occupy much of the thinking of a multicloud development team. “A lot of teams try to make their code totally portable between clouds,” says Davis Lam. ... What’s the key to making that core business logic as portable as possible across all your clouds? The container orchestration platform Kubernetes was cited by almost everyone we spoke to.
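One common way to keep that core business logic portable is to hide provider-specific services behind a thin interface and confine cloud SDK calls to small adapters. The sketch below is a simplified illustration of that pattern; the class and method names are invented for the example.

```python
# Keeping core logic cloud-agnostic: business code depends on a small interface,
# and only the per-cloud adapters touch provider SDKs. Names here are illustrative.
from typing import Protocol

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class LocalStore:
    """Stand-in adapter; an S3- or GCS-backed adapter would implement the same two methods."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_invoice(store: ObjectStore, invoice_id: str, pdf: bytes) -> None:
    # Core business logic never imports a cloud SDK, so it ports between clouds unchanged.
    store.put(f"invoices/{invoice_id}.pdf", pdf)
```

The payoff is that only the adapters change when a workload moves clouds; the business logic, and the tests around it, stay put.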


Fix It or Face the Consequences: CISA's Memory-Safe Muster

As of this writing, 296 organizations have signed the Secure-by-Design pledge, from widely used developer platforms like GitHub to industry heavyweights like Google. Similar initiatives have been launched in other countries, including Australia, reflecting the reality that secure software needs to be a global effort. But there is a long way to go, considering the thousands of organizations that produce software. As the name suggests, Secure-by-Design promotes shifting left in the SDLC to gain control over the proliferation of security vulnerabilities in deployed software. This is especially important as the pace of software development has been accelerated by the use of AI to write code, sometimes with just as many — or more — vulnerabilities than software written by humans. ... Providing training isn't quite enough, though — organizations need to be sure that the training builds the necessary skills and truly connects with developers. Data-driven skills verification can give organizations visibility into training programs, helping to establish baselines for security skills while measuring the progress of individual developers and the organization as a whole. Measuring performance in specific areas, such as particular programming languages or areas of vulnerability management, paves the way to achieving holistic Secure-by-Design goals, in addition to the safety gains that would be realized from phasing out memory-unsafe languages.

Daily Tech Digest - May 18, 2025


Quote for the day:

“We are all failures - at least the best of us are.” -- J.M. Barrie


Extra Qubits Slash Measurement Time Without Losing Precision

Fast and accurate quantum measurements are essential for future quantum devices. However, quantum systems are extremely fragile; even small disturbances during measurement can cause significant errors. Until now, scientists faced a fundamental trade-off: they could either improve the accuracy of quantum measurements or make them faster, but not both at once. Now, a team of quantum physicists led by the University of Bristol, in work published in Physical Review Letters, has found a way to break this trade-off. The team’s approach involves using additional qubits, the fundamental units of information in quantum computing, to “trade space for time.” Unlike the simple binary bits in classical computers, qubits can exist in multiple states simultaneously, a phenomenon known as superposition. In quantum computing, measuring a qubit typically requires probing it for a relatively long time to achieve a high level of certainty. ... Remarkably, the team’s process allows the quality of a measurement to be maintained, or even enhanced, even as it is sped up. The method could be applicable to a broad range of leading quantum hardware platforms. As the global race to build the highest-performance quantum technologies continues, the scheme has the potential to become a standard part of the quantum read-out process.


The leadership legacy: How family shapes the leaders we become

We’ve built leadership around performance metrics, dashboards and influence. Yet the traits that truly sustain teams — empathy, accountability, consistency — are often born not in corporate training but in the everyday rituals of family life. On this International Day of Families, it’s time to reevaluate leadership models that have long been defined by clarity, charisma and control, and to redefine them around something deeper: care, connection and community. ... Here are five principles drawn from healthy family systems that can reframe leadership models:
- Consistency over chaos: Families thrive on routines and reliability. Leaders who bring emotional consistency, set clear expectations and avoid reactionary decisions foster psychological safety.
- Presence over performance: In families, presence often matters more than fixing the problem. Leaders who truly listen, offer time and engage with empathy build trust that performance alone cannot buy.
- Accountability with care: Families call out mistakes, but with the intent to support, not shame. Leaders who combine feedback with care build growth mindsets without fear.
- Shared purpose over solo glory: Families move together. In workplaces, this means shifting from individual heroism to collaborative wins. Leaders must champion shared success.
- Adaptability with anchoring: Just as families adjust to life stages, leaders need to flex without losing their values. Adapt strategy, but anchor culture.


IPv4 was meant to be dead within a decade; what's happening with IPv6?

Globally, IPv6 is now approaching the halfway mark of Internet traffic. Google, which tracks the percentage of its users that reach it via IPv6, reports that around 46% of users worldwide access Google over IPv6 as of mid-May 2025. In other words, given the ubiquity of Google's usage, nearly half of Internet users have IPv6 capability today. While that’s a significant milestone, IPv4 still carries about half of the traffic, even though it was long expected to be retired by now. The growth has not been exponential, but it is persistent. ... The first, and arguably largest, hurdle is that IPv6 was not designed to be backward-compatible with IPv4, a frequent criticism of the protocol and one largely blamed for its slow adoption. An IPv6-only device cannot directly communicate with an IPv4-only device without the help of a complex translation gateway, such as NAT64. This means networks usually run dual-stack, supporting both protocols, and IPv4 can't just be "switched off." This has major downsides, though; dual-stack operation doubles certain aspects of network management, requiring two address configurations, two sets of firewall rules, and more, which increases operational complexity for businesses and home users alike. This complexity causes a significant slowdown in deployment, as network engineers and software developers must ensure everything works on IPv6 in addition to IPv4. Any lack of feature parity or small misconfigurations can cause major issues.
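For a quick sense of what dual-stack means in practice, the short script below asks the resolver for both IPv4 (A) and IPv6 (AAAA) addresses for a host; a dual-stacked service returns both. The hostname is just an example.

```python
# Check whether a host publishes both IPv4 and IPv6 addresses (i.e., is dual-stacked).
import socket

def dual_stack_addresses(host: str) -> dict[str, list[str]]:
    result: dict[str, list[str]] = {"IPv4": [], "IPv6": []}
    for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
        try:
            infos = socket.getaddrinfo(host, 443, family, socket.SOCK_STREAM)
            result[label] = sorted({info[4][0] for info in infos})
        except socket.gaierror:
            pass  # no records of this address family (or no local support for it)
    return result

if __name__ == "__main__":
    print(dual_stack_addresses("www.google.com"))
```

An empty IPv6 list for a service you operate is a reasonable first hint that your own deployment is still IPv4-only.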


Agentic mesh: The future of enterprise agent ecosystems

Many companies describe agents as “science experiments” that never leave the lab. Others complain about suffering the pain of “a thousand proof-of-concepts” with agents. The root cause of this pain? Most agents today aren’t designed to meet enterprise-grade standards. ... As enterprises adopt more agents, a familiar problem is emerging: silos. Different teams deploy agents in CRMs, data warehouses, or knowledge systems, but these agents operate independently, with no awareness of each other. ... An agentic mesh is a way to turn fragmented agents into a connected, reliable ecosystem. But it does more: It lets enterprise-grade agents operate in an enterprise-grade agent ecosystem. It allows agents to find each other and to safely and securely collaborate, interact, and even transact. The agentic mesh is a unified runtime, control plane, and trust framework that makes enterprise-grade agent ecosystems possible. ... Agentic mesh fulfills two major architectural goals: It lets you build enterprise-grade agents and it gives you an enterprise-grade run-time environment to support these agents. To support secure, scalable, and collaborative agents, an agentic mesh needs a set of foundational components. These capabilities ensure that agents don’t just run, but run in a way that meets enterprise requirements for control, trust, and performance.
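A small sketch of the discovery piece of such a mesh is shown below: a registry where agents advertise capabilities and a caller looks up who can serve a request, subject to a simple trust policy. The names and fields are illustrative assumptions, not a published specification.

```python
# Illustrative agent registry for capability-based discovery with a minimal trust policy.
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    name: str
    capabilities: set[str]                                   # e.g. {"crm.lookup", "billing.refund"}
    allowed_callers: set[str] = field(default_factory=set)   # empty set means "anyone may call"

class MeshRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentCard] = {}

    def register(self, card: AgentCard) -> None:
        self._agents[card.name] = card

    def discover(self, capability: str, caller: str) -> list[str]:
        """Return agents offering a capability that the caller is permitted to reach."""
        return [
            card.name for card in self._agents.values()
            if capability in card.capabilities
            and (not card.allowed_callers or caller in card.allowed_callers)
        ]

registry = MeshRegistry()
registry.register(AgentCard("crm-agent", {"crm.lookup"}, {"support-agent"}))
print(registry.discover("crm.lookup", caller="support-agent"))  # ['crm-agent']
```

A production mesh would add authentication, observability, and policy enforcement around calls, but the registry is what turns isolated agents into a discoverable ecosystem.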


OpenAI launches research preview of Codex AI software engineering agent for developers

The new Codex goes far beyond its predecessor. Now built to act autonomously over longer durations, Codex can write features, fix bugs, answer codebase-specific questions, run tests, and propose pull requests—each task running in a secure, isolated cloud sandbox. The design reflects OpenAI’s broader ambition to move beyond quick answers and into collaborative work. Josh Tobin, who leads the Agents Research Team at OpenAI, said during a recent briefing: “We think of agents as AI systems that can operate on your behalf for a longer period of time to accomplish big chunks of work by interacting with the real world.” Codex fits squarely into this definition. ... Codex executes tasks without internet access, drawing only on user-provided code and dependencies. This design ensures secure operation and minimizes potential misuse. “This is more than just a model API,” said Embiricos. “Because it runs in an air-gapped environment with human review, we can give the model a lot more freedom safely.” OpenAI also reports early external use cases. Cisco is evaluating Codex for accelerating engineering work across its product lines. Temporal uses it to run background tasks like debugging and test writing. Superhuman leverages Codex to improve test coverage and enable non-engineers to suggest lightweight code changes. 


AI-Driven Software: Why a Strong CI/CD Foundation Is Essential

While AI can significantly boost speed, it also drives higher throughput, increasing the demand for testing, QA monitoring, and infrastructure investment. More code means development teams need to find ways to shorten feedback loops, reduce build times, and streamline other key elements of the development process to keep pace. Without a solid DevOps framework and CI/CD engine to manage this, AI can create noise and distractions that drain engineers’ attention, slowing them down instead of freeing them to focus on what truly matters: delivering quality software at the right pace. ... By investing in a CI/CD platform with these capabilities, you’re not just buying a tool — you’re establishing the foundation that will determine whether AI becomes a force multiplier for your team or simply creates more noise in an already complex system. The right platform turns your CI/CD pipeline from a bottleneck into a strategic advantage, allowing your team to harness AI’s potential while maintaining quality, security, and reliability. To realize the speed and efficiency gains of AI-driven development, you need a CI/CD platform capable of handling high throughput, rapid iteration, and complex testing cycles while keeping infrastructure and cloud costs in check. ... It is easy to get caught up in the excitement of powerful technologies like AI and dive straight into experimentation without laying the right groundwork for success.


Quantum Algorithm Outpaces Classical Solvers in Optimization Tasks, Study Indicates

The study focuses on a class of problems known as higher-order unconstrained binary optimization (HUBO), which model real-world tasks like portfolio selection, network routing, or molecule design. These problems are computationally intensive because the number of possible solutions grows exponentially with problem size. On paper, those are exactly the types of problems that most quantum theorists believe quantum computers, once robust enough, would excel at solving. The researchers evaluated how well different solvers — both classical and quantum — could find approximate solutions to these HUBO problems. The quantum system used a technique called bias-field digitized counterdiabatic quantum optimization (BF-DCQO). The method builds on known quantum strategies by evolving a quantum system under special guiding fields that help it stay on track toward low-energy states. ... Importantly, the researchers didn’t rely on the quantum component alone; the hybrid approach was essential to securing the quantum edge. Their BF-DCQO pipeline includes classical preprocessing and postprocessing, such as initializing the quantum system with good guesses from fast simulated annealing runs and cleaning up final results with simple local searches.
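The classical wrapper around the quantum step can be illustrated with a toy, purely classical sketch: a fast simulated-annealing run produces a seed, a placeholder stands in for the quantum optimizer, and a greedy single-bit local search cleans up the result. The toy energy function and the mocked quantum step are assumptions for illustration only; this is not the BF-DCQO algorithm itself.

```python
# Toy hybrid pattern: classical SA seeding -> (mocked) quantum step -> local-search cleanup.
# The energy function and mock_quantum_step are illustrative stand-ins, not BF-DCQO.
import math
import random

def energy(bits, couplings):
    """Toy higher-order objective: each term is a tuple of variable indices with a weight."""
    return sum(w for term, w in couplings.items() if all(bits[i] for i in term))

def simulated_annealing(n, couplings, steps=2000, t0=2.0):
    """Fast classical run used to produce a good initial guess (the 'bias')."""
    bits = [random.randint(0, 1) for _ in range(n)]
    e = energy(bits, couplings)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9
        i = random.randrange(n)
        bits[i] ^= 1
        e_new = energy(bits, couplings)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new
        else:
            bits[i] ^= 1  # reject the move
    return bits

def mock_quantum_step(bits):
    """Placeholder for the quantum sampler; here it just perturbs the classical seed."""
    out = bits[:]
    for i in random.sample(range(len(out)), k=max(1, len(out) // 10)):
        out[i] ^= 1
    return out

def local_search(bits, couplings):
    """Greedy single-bit-flip cleanup, as used for classical postprocessing."""
    e = energy(bits, couplings)
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            bits[i] ^= 1
            e_new = energy(bits, couplings)
            if e_new < e:
                e, improved = e_new, True
            else:
                bits[i] ^= 1  # revert
    return bits

couplings = {(0, 1, 2): 3.0, (1, 3): -2.0, (2,): -1.0}  # toy problem instance
seed = simulated_annealing(4, couplings)
print(local_search(mock_quantum_step(seed), couplings))
```

The point of the sketch is structural: the reported advantage comes from the whole pipeline, with the quantum step sandwiched between cheap classical passes.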


How human connection drives innovation in the age of AI

When we are working toward a shared goal, there are core values and shared aspirations that bind us. By actively seeking out this common ground and fostering positive interactions, we can all bridge divides, both in our personal lives and within our organizations. Feeling connection is not just good for our own wellbeing; it is also crucial for business outcomes. According to research, 94% of employees say that feeling connected to their colleagues makes them more productive at work; connected employees are also more than four times as likely to feel job satisfaction and half as likely to leave their jobs within the next year. ... As we integrate AI deeper into our workflows, we should be deliberate in cultivating environments that prioritize genuine human connection and the development of these essential human skills. This means creating intentional spaces—both physical and virtual—that encourage open dialogue, active listening, and the respectful exchange of diverse perspectives. Leaders should champion empathy and relationship-building skill development within their teams, actively working to promote thoughtful opportunities for human connection in our AI-driven environment. Ultimately, the future of innovation and progress will be shaped by our ability to harness the power of AI in a way that amplifies our uniquely human capacities, especially our innate drive to connect with one another.


Enterprise Intelligence: Why AI Data Strategy Is A New Advantage

Forward-thinking enterprises are embracing cloud-native data platforms that abstract infrastructure complexity and enable a new class of intelligent, responsive applications. These platforms unify data access across object, file, and block formats while enforcing enterprise-grade governance and policy. They incorporate intelligent tiering and KV caching strategies that learn from access patterns to prioritize hot data, accelerating inference and reducing overhead. They support multimodal AI workloads by seamlessly managing petabyte-scale datasets across edge, core, and cloud locations—without burdening teams with manual tuning. And they scale elastically, adapting to growing demand without disruptive re-architecture. ... AI-driven businesses are no longer defined by how much compute power they can deploy but by how efficiently they can manage, access, and utilize data. The enterprises that rethink their data strategy—eliminating friction, reducing latency, and ensuring seamless integration across AI pipelines—will gain a decisive competitive edge. For CIOs, the message is clear: AI success isn’t just about faster algorithms or bigger models; it’s about creating a smarter, more agile data architecture. Organizations that embrace real-time, scalable data platforms will not only unlock AI’s full potential but also future-proof their operations in an increasingly data-driven world.
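To make "learning from access patterns" less abstract, here is a minimal sketch of a frequency-based tiering rule: objects read often enough within a recent window are flagged for the hot tier. The window, threshold, and tier names are assumptions for the example, not any particular platform's behavior.

```python
# Illustrative access-pattern-aware tiering advisor: frequently read keys are marked "hot".
import time
from collections import defaultdict, deque

class TieringAdvisor:
    def __init__(self, window_s: float = 3600.0, hot_threshold: int = 5):
        self.window_s = window_s          # how far back to look at reads
        self.hot_threshold = hot_threshold  # reads within the window needed to be "hot"
        self._accesses: dict[str, deque] = defaultdict(deque)

    def record_access(self, key: str) -> None:
        now = time.time()
        q = self._accesses[key]
        q.append(now)
        while q and now - q[0] > self.window_s:  # drop reads that fell outside the window
            q.popleft()

    def tier_for(self, key: str) -> str:
        """Return the recommended tier based on recent access frequency."""
        return "hot" if len(self._accesses[key]) >= self.hot_threshold else "cold"

advisor = TieringAdvisor(window_s=60, hot_threshold=3)
for _ in range(3):
    advisor.record_access("embeddings/partition-17")
print(advisor.tier_for("embeddings/partition-17"))  # 'hot'
```

Real platforms fold in more signal (object size, recency decay, cost per tier), but the core idea is the same: placement decisions follow observed access patterns rather than manual tuning.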


The future of the modern data stack: Trends and predictions

AI and ML are also key drivers of the modern data stack, because they are creating new (or greatly amplifying existing) demands on data infrastructure. Suddenly, the provenance and lineage of information are taking on new importance, as enterprises fight against “hallucinations” and accidental exposure of PII or PHI through AI mechanisms. Data sharing is also more important than ever, because no single organization is likely to host all the information GenAI models need on its own; organizations will inevitably rely on others when augmenting models through RAG, prompt engineering, and other approaches in AI-based solutions. ... The goal of simplifying data management and giving more users more access to data has been around since long before computers were invented. But recent improvements in GenAI and data sharing have vastly accelerated these trends — suddenly, the idea that non-technical professionals can transform, combine, analyze, and utilize complex datasets from inside and outside an organization feels not just achievable, but probable. ... Advances in data sharing, especially heterogeneous data sharing built on common formats like Iceberg, governance approaches like Polaris, and safety and security mechanisms like Vendia IceBlock, are quickly removing the historical challenges to data product distribution.