Daily Tech Digest - May 22, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

AI is a tool that can be used for significant good, but it can be, and already has been, used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report. These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine public trust in their continued development and use. ... Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place. But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause. It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.


Putting agentic AI to work in Firebase Studio

An AI assistant is like power steering. The programmer, the driver, remains in control, and the tool magnifies that control. The developer types some code, and the assistant completes the function, speeding up the process. The next logical step is to empower the assistant to take action—to run tests, debug code, mock up a UI, or perform some other task on its own. In Firebase Studio, we get a seat in a hosted environment that lets us enter prompts that direct the agent to take meaningful action. ... Obviously, we are a long way off from a non-programmer frolicking around in Firebase Studio, or any similar AI-powered development environment, and building complex applications. Google Cloud Platform, Gemini, and Firebase Studio are best-in-class tools. These kinds of limits apply to all agentic AI development systems. Still, I would in no way want to give up my Gemini assistant when coding. It takes a huge amount of busy work off my shoulders and brings much more possibility into scope by letting me focus on the larger picture. I wonder how the path will look, how long it will take for Firebase Studio and similar tools to mature. It seems clear that something along these lines, where the AI is framed in a tool that lets it take action, is part of the future. It may take longer than AI enthusiasts predict. It may never really, fully come to fruition in the way we envision.


Edge AI + Intelligence Hub: A Match in the Making

The shop floor looks nothing like a data lake. There is telemetry data from machines, historical data, MES data in SQL, some random CSV files, and most of it lacks context. Companies that realize this—or already have an Industrial DataOps strategy—move quickly beyond these issues. Companies that don’t end up creating a solution that works with only telemetry data, for example, only to find out they need other data. Or worse, when they get something working in the first factory, they find out factories 2, 3, and 4 have different technology stacks. ... In comes DataOps (again). Cloud AI and Edge AI have the same problems with industrial data. They need access to contextualized information across many systems. The only difference is there is no data lake in the factory—but that’s OK. DataOps can leave the data in the source systems and expose it over APIs, allowing edge AI to access the data needed for specific tasks. But just like IT, what happens if OT doesn’t use DataOps? It’s the same set of issues. If you try to integrate AI directly with data from your SCADA, historian, or even UNS/MQTT, you’ll limit the data and context to which the agent has access. SCADA/Historians only have telemetry data. UNS/MQTT is report by exception, and AI is request/response based (i.e., it can’t integrate directly). But again, I digress. Use DataOps.
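As a minimal illustration of the mismatch the author describes, the sketch below (all names are invented, not any specific DataOps product) caches report-by-exception updates from a broker and answers request/response queries from an edge-AI agent, attaching the asset context the raw message lacks:

```python
# Hypothetical sketch: bridging report-by-exception data (UNS/MQTT style)
# to the request/response pattern an AI agent needs. All names are
# illustrative, not a specific DataOps product API.
from __future__ import annotations
from dataclasses import dataclass, field
import time

@dataclass
class ContextualizedTag:
    value: float
    unit: str
    asset: str          # context the raw broker message lacks
    updated_at: float = field(default_factory=time.time)

class DataOpsGateway:
    """Caches exception-based updates, answers point-in-time requests."""
    def __init__(self):
        self._store: dict[str, ContextualizedTag] = {}

    def on_message(self, topic: str, value: float, unit: str, asset: str):
        # Called whenever the broker reports a change (report by exception).
        self._store[topic] = ContextualizedTag(value, unit, asset)

    def query(self, topic: str) -> ContextualizedTag | None:
        # Request/response entry point an edge-AI agent can call at any time.
        return self._store.get(topic)

gw = DataOpsGateway()
gw.on_message("plant1/line2/temp", 71.5, "degC", asset="Extruder-02")
print(gw.query("plant1/line2/temp").value)   # latest known value, on demand
```

The gateway holds only the latest state per topic; the source systems remain the systems of record, which is the "leave the data in the source systems" point above.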


AI-driven threats prompt IT leaders to rethink hybrid cloud security

Public cloud security risks are also undergoing renewed assessment. While the public cloud was widely adopted during the post-pandemic shift to digital operations, it is increasingly seen as a source of risk. According to the survey, 70 percent of Security and IT leaders now see the public cloud as a greater risk than any other environment. As a result, an equivalent proportion are actively considering moving data back from public to private cloud due to security concerns, and 54 percent are reluctant to use AI solutions in the public cloud, citing apprehensions about intellectual property protection. The need for improved visibility is emphasised in the findings. Rising sophistication in cyberattacks has exposed the limitations of existing security tools—more than half (55 percent) of Security and IT leaders reported lacking confidence in their current toolsets' ability to detect breaches, mainly due to insufficient visibility. Accordingly, 64 percent say their primary objective for the next year is to achieve real-time threat monitoring through comprehensive real-time visibility into all data in motion. David Land, Vice President, APAC at Gigamon, commented: "Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments."


Taming the Hacker Storm: Why Millions in Cybersecurity Spending Isn’t Enough

The key to taming the hacker storm is founded on the core principle of trust: that the individual or company you are dealing with is who or what they claim to be and behaves accordingly. Establishing a high-trust environment can go a long way toward preventing hacker success. ... For a pervasive selective trusted ecosystem, an organization requires something beyond trusted user IDs. A hacker can compromise a user’s device and steal the trusted user ID, making identity-based trust inadequate. A trust-verified device assures that the device is secure and can be trusted. But then again, a hacker stealing a user’s identity and password can also fake the user’s device. Confirming the device’s identity—whether it is or it isn’t the same device—therefore becomes necessary. The best way to ensure the device is secure and trustworthy is to employ the device identity that is designed by its manufacturer and programmed into its TPM or Secure Enclave chip. ... Trusted actions are critical in ensuring a secure and pervasive trust environment. Different actions require different levels of authentication, generating different levels of trust, which the application vendor or the service provider has already defined. An action considered high risk would require stronger authentication, also known as dynamic authentication.
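To make the device-identity idea concrete, here is a toy challenge-response sketch. A real deployment would use an asymmetric attestation key sealed inside the TPM or Secure Enclave; the HMAC with a device-unique secret below is a simplification chosen only to show the protocol shape:

```python
# Minimal challenge-response sketch of device-identity verification.
# HMAC with a device-unique secret stands in for a TPM-held attestation
# key purely to illustrate the flow; it is not how a real TPM works.
import hmac, hashlib, os, secrets

DEVICE_KEY = secrets.token_bytes(32)   # provisioned at manufacture, never leaves the chip

def device_sign(challenge: bytes) -> bytes:
    # Conceptually runs inside the secure element; the key is not
    # exposed to the host OS, so a stolen password cannot reproduce it.
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes, registered_key: bytes) -> bool:
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

challenge = os.urandom(16)             # fresh nonce defeats replayed responses
assert verifier_check(challenge, device_sign(challenge), DEVICE_KEY)
```

The fresh nonce per attempt is what distinguishes "this is the same device" from "someone replayed an old proof", which is the distinction the paragraph draws.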


AWS clamping down on cloud capacity swapping; here’s what IT buyers need to know

For enterprises that sourced discounted cloud resources through a broker or value-added reseller (VAR), the arbitrage window shuts, Brunkard noted. Enterprises should expect a “modest price bump” on steady‑state workloads and a “brief scramble” to unwind pooled commitments. ... On the other hand, companies that buy their own RIs or SPs, or negotiate volume deals through AWS’s Enterprise Discount Program (EDP), shouldn’t be impacted, he said. Nothing changes except that pricing is now baselined. To get ahead of the change, organizations should audit their exposure and ask their managed service providers (MSPs) what commitments are pooled and when they renew, Brunkard advised. ... Ultimately, enterprises that have relied on vendor flexibility to manage overcommitment could face hits to gross margins, budget overruns, and a spike in “finance-engineering misalignment,” Barrow said. Those whose vendor models are based on RI and SP reallocation tactics will see their risk profile “changed overnight,” he said. New commitments will now essentially be non-cancellable financial obligations, and if cloud usage dips or pivots, they will be exposed. Many vendors won’t be able to offer protection as they have in the past.
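A back-of-the-envelope sketch of that exposure (all figures invented for illustration): once a commitment is non-cancellable, any usage dip below the committed level is paid for regardless.

```python
# Toy model of non-cancellable commitment exposure: committed spend
# continues whether or not the capacity is used. Numbers are invented.
committed_per_hour = 100.0      # $/hr of reserved capacity, non-cancellable
hours_in_month = 730

def monthly_exposure(actual_usage_per_hour: float) -> float:
    """Dollars paid for capacity that sat idle this month."""
    unused = max(0.0, committed_per_hour - actual_usage_per_hour)
    return unused * hours_in_month

print(monthly_exposure(100.0))  # fully utilized: 0.0 wasted
print(monthly_exposure(70.0))   # 30% usage dip: 21900.0 paid for idle capacity
```

This is the arithmetic behind the "if cloud usage dips or pivots, they will be exposed" point: without reallocation by a vendor, the unused delta lands directly on the buyer.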


The new C-Suite ally: Generative AI

While traditional GenAI applications focus on structured datasets, a significant frontier remains largely untapped — the vast swathes of unstructured "dark data" sitting in contracts, credit memos, regulatory reports, and risk assessments. Aashish Mehta, Founder and CEO of nRoad, emphasizes this critical gap.
"Most strategic decisions rely on data, but the reality is that a lot of that data sits in unstructured formats," he explained. nRoad’s platform, CONVUS, addresses this by transforming unstructured content into structured, contextual insights. ... Beyond risk management, OpsGPT automates time-intensive compliance tasks, offers multilingual capabilities, and eliminates the need for coding through intuitive design. Importantly, Broadridge has embedded a robust governance framework around all AI initiatives, ensuring security, regulatory compliance, and transparency. Trustworthiness is central to Broadridge’s approach. "We adopt a multi-layered governance framework grounded in data protection, informed consent, model accuracy, and regulatory compliance," Seshagiri explained. ... Despite the enthusiasm, CxOs remain cautious about overreliance on GenAI outputs. Concerns around model bias, data hallucination, and explainability persist. Many leaders are putting guardrails in place: enforcing human-in-the-loop systems, regular model audits, and ethical AI use policies.


Building a Proactive Defence Through Industry Collaboration

Trusted collaboration, whether through Information Sharing and Analysis Centres (ISACs), government agencies, or private-sector partnerships, is a highly effective way to enhance the defensive posture of all participating organisations. For this to work, however, organisations will need to establish operationally secure real-time communication channels that support the rapid sharing of threat and defence intelligence. In parallel, the community will also need to establish processes to enable them to efficiently disseminate indicators of compromise (IoCs) and tactics, techniques and procedures (TTPs), backed up with best practice information and incident reports. These collective defence communities can also leverage the centralised cyber fusion centre model that brings together all relevant security functions – threat intelligence, security automation, threat response, security orchestration and incident response – in a truly cohesive way. Providing an integrated sharing platform for exchanging information among multiple security functions, today’s next-generation cyber fusion centres enable organisations to leverage threat intelligence, identify threats in real-time, and take advantage of automated intelligence sharing within and beyond organisational boundaries. 


3 Powerful Ways AI is Supercharging Cloud Threat Detection

AI’s strength lies in pattern recognition across vast datasets. By analysing historical and real-time data, AI can differentiate between benign anomalies and true threats, improving the signal-to-noise ratio for security teams. This means fewer false positives and more confidence when an alert does sound. ... When a security incident strikes, every second counts. Historically, responding to an incident involves significant human effort – analysts must comb through alerts, correlate logs, identify the root cause, and manually contain the threat. This approach is slow, prone to errors, and doesn’t scale well. It’s not uncommon for incident investigations to stretch hours or days when done manually. Meanwhile, the damage (data theft, service disruption) continues to accrue. Human responders also face cognitive overload during crises, juggling tasks like notifying stakeholders, documenting events, and actually fixing the problem. ... It’s important to note that AI isn’t about eliminating the need for human experts but rather augmenting their capabilities. By taking over initial investigation steps and mundane tasks, AI frees up human analysts to focus on strategic decision-making and complex threats. Security teams can then spend time on thorough analysis of significant incidents, threat hunting, and improving security posture, instead of constant firefighting.
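The pattern-recognition point can be shown with a deliberately tiny example: a baseline of normal activity lets a z-score test separate a benign fluctuation from a genuine outlier. Production systems use far richer features and models, but the statistical intuition is the same:

```python
# Toy illustration of "signal vs noise": score an observation against a
# rolling baseline and flag only large deviations.
from statistics import mean, stdev

def is_anomalous(history: list[float], observation: float, threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

logins_per_min = [12, 15, 11, 14, 13, 12, 16, 14]   # normal baseline
print(is_anomalous(logins_per_min, 15))              # benign fluctuation -> False
print(is_anomalous(logins_per_min, 90))              # genuine outlier   -> True
```

Raising or lowering `threshold` is exactly the false-positive trade-off the paragraph describes: a tighter threshold surfaces more alerts, a looser one suppresses noise.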


The hidden gaps in your asset inventory, and how to close them

The biggest blind spot isn’t a specific asset. It is trusting that what’s on paper is actually live and in production. Many organizations focus solely on known assets within their documented environments, but this can create a false sense of security. Blind spots are not always the result of malicious intent, but rather of decentralized decision-making, forgotten infrastructure, or evolving technology that hasn’t been brought under central control. External applications, legacy technologies and abandoned cloud infrastructure, such as temporary test environments, may remain vulnerable long after their intended use. These assets pose a risk, particularly when they are unintentionally exposed due to misconfiguration or overly broad permissions. Third-party and supply chain integrations present another layer of complexity. ... Traditional discovery often misses anything that doesn’t leave a clear, traceable footprint inside the network perimeter. That includes subdomains spun up during campaigns or product launches; public-facing APIs without formal registration or change control; third-party login portals or assets tied to your brand and code repositories; or misconfigured services exposed via DNS. These assets live on the edge, connected to the organization but not owned in a traditional sense.
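At its core, closing the gap is a reconciliation exercise: diff what the inventory documents against what external discovery actually observes. The hostnames below are invented; real discovery would draw on DNS records, certificate transparency logs, and cloud provider APIs:

```python
# Sketch of the core inventory-gap check: set difference between the
# documented inventory and externally discovered assets.
documented = {"www.example.com", "api.example.com", "vpn.example.com"}
discovered = {"www.example.com", "api.example.com",
              "staging.example.com",        # forgotten test environment
              "promo2023.example.com"}      # campaign subdomain, never registered

shadow_assets = discovered - documented     # live but untracked: the blind spot
stale_records = documented - discovered     # on paper but not observed live
print(sorted(shadow_assets))                # ['promo2023.example.com', 'staging.example.com']
print(sorted(stale_records))                # ['vpn.example.com']
```

Both sets matter: shadow assets are unmonitored attack surface, while stale records are exactly the "trusting what's on paper" problem the paragraph opens with.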

Daily Tech Digest - May 21, 2025


Quote for the day:

"A true dreamer is one who knows how to navigate in the dark." -- John Paul Warren


How Microsoft wants AI agents to use your PC for you

Microsoft’s concept revolves around the Model Context Protocol (MCP), which was created by Anthropic (the company behind the Claude chatbot) last year. That’s an open-source protocol that AI apps can use to talk to other apps and web services. Soon, Microsoft says, you’ll be able to let a chatbot — or “AI agent” — connect to apps running on your PC and manipulate them on your behalf. ... Compared to what Microsoft is proposing, past “agentic” AI solutions that promised to use your computer for you aren’t quite as compelling. They’ve relied on looking at your computer’s screen and using that input to determine what to click and type. This new setup, in contrast, is neat — if it works as promised — because it lets an AI chatbot interact directly with any old traditional Windows PC app. But the Model Context Protocol solution is even more advanced and streamlined than that. Rather than a chatbot having to put together a Spotify playlist by dragging and dropping songs in the old-fashioned way, it would give the AI the ability to give instructions to the Spotify app in a more simplified form. On a more technical level, Microsoft will let application developers make their applications function as MCP servers — a fancy way of saying they’d act like a bridge between the AI models and the tasks they perform. 
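A schematic sketch of what "acting as an MCP server" amounts to (class and method names here are invented, not the actual MCP SDK): the app registers named tools, and the agent invokes them through a uniform request interface instead of simulating clicks and drags:

```python
# Conceptual toy, not the real MCP wire protocol: an app exposes named
# tools that an AI agent can invoke with structured arguments.
import json

class ToyMcpServer:
    def __init__(self, app_name: str):
        self.app_name = app_name
        self.tools = {}

    def tool(self, name: str):
        """Decorator registering a function as an agent-invokable tool."""
        def register(fn):
            self.tools[name] = fn
            return fn
        return register

    def handle(self, request_json: str) -> str:
        # Agent sends {"tool": ..., "args": {...}}; app runs it and replies.
        req = json.loads(request_json)
        result = self.tools[req["tool"]](**req["args"])
        return json.dumps({"result": result})

music_app = ToyMcpServer("music-app")

@music_app.tool("add_to_playlist")
def add_to_playlist(playlist: str, track: str) -> str:
    return f"added {track} to {playlist}"

print(music_app.handle(
    '{"tool": "add_to_playlist", "args": {"playlist": "Focus", "track": "Song A"}}'))
```

This is the contrast the article draws: the agent issues one structured instruction rather than reverse-engineering the UI from screenshots.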


How vulnerable are undersea cables?

The only way to effectively protect a cable against sabotage is to bury the entire cable, says Liwång, and that is not economically justifiable. In the Baltic Sea, it is easier and more sensible to repair the cables when they break, and it is more important to lay more cables than to try to protect a few.
Burying all transoceanic cables is hardly feasible in practice either. ... “Cable breaks are relatively common even under normal circumstances. In terrestrial networks, they can be caused by various factors, such as excavators working near the fiber installation and accidentally cutting it. In submarine cables, cuts can occur, for example, due to irresponsible use of anchors, as we have seen in recent reports,” says Furdek Prekratic. Network operators ensure that individual cable breaks do not lead to widespread disruptions, she notes: “Optical fiber networks rely on two main mechanisms to handle such events without causing a noticeable disruption to network traffic. The first is called protection. The moment an optical connection is established over a physical path between two endpoints, resources are also allocated to another connection that takes a completely different path between the same endpoints. If a failure occurs on any link along the primary path, the transmission quickly switches to the secondary path. The second mechanism is called failover. Here, the secondary path is not reserved in advance, but is determined after the primary path has suffered a failure.”
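The two mechanisms Furdek Prekratic describes can be sketched on a toy four-node topology: protection computes a link-disjoint secondary path at connection setup, while failover searches for a path only after a link has broken:

```python
# Toy contrast of protection (pre-reserved disjoint path) vs failover
# (path computed after the failure), using BFS on a tiny topology.
from collections import deque

def shortest_path(graph, src, dst, banned_links=frozenset()):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            link = frozenset((path[-1], nxt))
            if nxt not in seen and link not in banned_links:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

# Protection: both paths allocated at connection setup time.
primary = shortest_path(graph, "A", "D")                             # A-B-D
links_used = {frozenset(p) for p in zip(primary, primary[1:])}
secondary = shortest_path(graph, "A", "D", banned_links=links_used)  # disjoint A-C-D

# Failover: only after the A-B link breaks do we search for a new path.
after_failure = shortest_path(graph, "A", "D", banned_links={frozenset(("A", "B"))})
print(primary, secondary, after_failure)
```

Protection trades reserved (idle) capacity for instant switchover; failover trades recovery time for better utilization, which is why both mechanisms coexist in operator networks.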


Driving business growth through effective productivity strategies

In times of economic uncertainty, it is to be expected that businesses grow more cautious with their spending. However, this can result in missed opportunities to improve productivity in favour of cost reductions. While cutting costs can seem an attractive option in light of economic doubts, it is merely a short-term solution. When businesses hold back from knee-jerk reactions and maintain a focus on sustainable productivity gains, they will find themselves reaping rewards in the long term. Strategic investments in technology solutions are essential to support businesses in driving their productivity strategies forward. With new technology constantly being introduced, there are a lot of options for business decision makers to consider. Most obviously, there are technology features in our ERP systems, and in our project management and collaboration tools, that can be used to facilitate significant flexibility or performance advantages compared to legacy approaches and processes. ... While technology is a vital part of any innovative productivity model, it’s just one piece of the puzzle. It is no use installing modern technology if internal processes remain outdated. Businesses must also look to weed out inefficient practices to improve and streamline resource management. 


Synthetic data’s fine line between reward and disaster

Generating large volumes of training data on demand is appealing compared to slow, expensive gathering of real-world data, which can be fraught with privacy concerns, or just not available. Synthetic data ought to help preserve privacy, speed up development, and be more cost effective for long-tail scenarios enterprises couldn’t otherwise tackle, she adds. It can even be used for controlled experimentation, assuming you can make it accurate enough. Purpose-built data is ideal for scenario planning and running intelligent simulations, and synthetic data detailed enough to cover entire scenarios could predict future behavior of assets, processes, and customers, which would be invaluable for business planning. ... Created properly, synthetic data mimics statistical properties and patterns of real-world data without containing actual records from the original dataset, says Jarrod Vawdrey, field chief data scientist at Domino Data Lab. And David Cox, VP of AI Models at IBM Research, suggests viewing it as amplifying rather than creating data. “Real data can be extremely expensive to produce, but if you have a little bit of it, you can multiply it,” he says. “In some cases, you can make synthetic data that’s much higher quality than the original. The real data is a sample. It doesn’t cover all the different variations and permutations you might encounter in the real world.”


AI Interventions to Reduce Cycle Time in Legacy Modernization

As the software becomes difficult to change, businesses may choose to tolerate conceptual drift or compensate for it through their operations. When the difficulty of modifying the software poses a significant enough business risk, a legacy modernization effort is undertaken. Legacy modernization efforts showcase the problem of concept recovery. In these circumstances, recovering a software system’s underlying concept is the labor-intensive bottleneck step to any change. Without it, the business risks a failed modernization or losing customers that depend on unknown or under-considered functionality. ... The goal of a software modernization’s design phase is to perform enough validation of the approach to be able to start planning and development while minimizing the amount of rework that could result due to missed information. Traditionally, substantial lead time is spent in the design phase inspecting legacy source code, producing a target architecture, and collecting business requirements. These activities are time-intensive, mutually interdependent, and usually the bottleneck step in modernization. While exploring how to use LLMs for concept recovery, we encountered three challenges to effectively serving teams performing legacy modernizations: which context was needed and how to obtain it, how to organize context so humans and LLMs can both make use of it, and how to support iterative improvement of requirements documents. 


OWASP proposes a way for enterprises to automatically identify AI agents

“The confusion about ANS versus protocols like MCP, A2A, ACP, and Microsoft Entra is understandable, but there’s an important distinction to make: ANS is a discovery service, not a communication protocol,” Narajala said. “MCP, A2A and ACP define how agents talk to each other once connected, like HTTP for web. ANS defines how agents find and verify each other before communication, like DNS for web. Microsoft Entra provides identity services, but primarily within Microsoft’s ecosystem.” ... “We’re fast approaching the point where the need for a standard to identify AI agents becomes painfully obvious. Right now, it’s a mess. Companies are spinning up agents left and right, with no trusted way to know what they are, what they do, or who built them,” Tvrdik said. “The Wild West might feel exciting, but we all know how most of those stories end. And it’s not secure.” As for ANS, he said, “It makes sense in theory. Treat agents like domains. Give them names, credentials, and a way to verify who’s talking to what. That helps with security, sure, but also with keeping things organized. Without it, we’re heading into chaos.” But Tvrdik stressed that the deployment mechanisms will ultimately determine if ANS works.
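Conceptually, the "DNS for agents" analogy might look like the toy registry below (a sketch of the idea only, not the proposed ANS wire format): a name resolves to an endpoint plus a credential fingerprint, and resolution fails for unknown or impostor agents:

```python
# Illustrative "DNS for agents" toy: resolve-and-verify before talking.
# All names, endpoints, and credentials are invented.
from __future__ import annotations
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    endpoint: str
    cert_fingerprint: str     # hash of the agent's public credential

class AgentNameService:
    def __init__(self):
        self._records: dict[str, AgentRecord] = {}

    def register(self, name: str, endpoint: str, credential: bytes):
        fp = hashlib.sha256(credential).hexdigest()
        self._records[name] = AgentRecord(endpoint, fp)

    def resolve(self, name: str, presented_credential: bytes) -> str | None:
        rec = self._records.get(name)
        if rec is None:
            return None                                   # unknown agent
        if hashlib.sha256(presented_credential).hexdigest() != rec.cert_fingerprint:
            return None                                   # impostor
        return rec.endpoint

ans = AgentNameService()
ans.register("billing-agent.acme", "https://agents.acme.example/billing", b"acme-cert")
print(ans.resolve("billing-agent.acme", b"acme-cert"))    # endpoint returned
print(ans.resolve("billing-agent.acme", b"forged-cert"))  # None: verification fails
```

Note what this is not: once `resolve` succeeds, the agents still need a communication protocol such as MCP or A2A, which is exactly Narajala's discovery-versus-communication distinction.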


Driving DevOps With Smart, Scalable Testing

Testing apps manually isn’t easy and consumes a lot of time and money. Testing complex ones with frequent releases requires an enormous number of human hours when attempted manually. This will affect the release cycle, results will take longer to appear, and if shown to be a failure, you’ll need to conduct another round of testing. What’s more, the chances of doing it correctly, repeatedly, and without any human error are slim. Those factors have driven the development of automation throughout all phases of the testing process, ranging from infrastructure builds to actual testing of code and applications. As for who should write which tests, as a general rule of thumb, it’s a task best suited to software engineers. They should create unit and integration tests as well as UI E2E tests. QA analysts should also be tasked with writing UI E2E test scenarios together with individual product owners. QA teams collaborating with business owners enhance product quality by aligning testing scenarios with real-world user experiences and business objectives. ... AWS CodePipeline can provide completely managed continuous delivery that creates pipelines, orchestrates and updates infrastructure and apps. It also works well with other crucial AWS DevOps services, while integrating with third-party action providers like Jenkins and GitHub.
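For the unit-test layer described above, a minimal framework-free example (the discount function is an invented stand-in for any small unit of business logic):

```python
# Minimal unit tests for a piece of business logic. A CI pipeline would
# run these via a test runner such as pytest; they are invoked inline
# here so the sketch is self-contained.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(200.0, 15) == 170.0

def test_zero_discount_is_identity():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_rejected():
    try:
        apply_discount(50.0, 120)
    except ValueError:
        return
    raise AssertionError("expected ValueError for out-of-range percent")

for t in (test_typical_discount, test_zero_discount_is_identity, test_invalid_percent_rejected):
    t()
print("all unit tests passed")
```

Fast, deterministic checks like these are what make repeated, error-free regression runs possible at every release, which is precisely where manual testing breaks down.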


Bridging the Digital Divide: Understanding APIs

While both Event-Driven Architecture (EDA) and Data-Driven Architecture (DDA) are crucial for modern enterprises, they serve distinct purposes, operate on different core principles, and manifest through different architectural characteristics. Understanding these differences is key for enterprise architects to effectively leverage their individual strengths and potential synergies. While EDA is often highly operational and tactical, facilitating immediate responses to specific triggers, DDA can span both operational and strategic domains. A key differentiator between the two lies in the “granularity of trigger.” EDA is typically triggered by fine-grained, individual events—a single mouse click, a specific sensor reading, a new message arrival. Each event is a distinct signal that can initiate a process. DDA, on the other hand, often initiates its processes or derives its triggers from aggregated data, identified patterns, or the outcomes of analytical models that have processed numerous data points. For example, an analytical process in DDA might be triggered by the availability of a complete daily sales dataset, or an alert might be generated when a predictive model identifies an anomaly based on a complex evaluation of multiple data streams over time. This distinction in trigger granularity directly influences the design of processing logic, the selection of underlying technologies, and the expected immediacy and nature of the system’s response.
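The trigger-granularity distinction can be shown in a few lines: the EDA handler fires once per individual reading, while the DDA trigger fires once per aggregated dataset, based on the outcome of an analytical check. The sensor values and threshold are invented for illustration:

```python
# Toy contrast of trigger granularity: per-event (EDA) vs per-aggregate (DDA).
fired = []

def on_sensor_event(reading: float):            # EDA: fine-grained, one trigger per event
    fired.append(f"event-handler({reading})")

def on_daily_dataset(readings: list[float]):    # DDA: triggered by the aggregate
    avg = sum(readings) / len(readings)
    if avg > 50:                                # outcome of an analytical check
        fired.append(f"anomaly-alert(avg={avg:.1f})")

day = [48.0, 55.0, 60.0]
for r in day:
    on_sensor_event(r)          # three EDA activations, one per reading
on_daily_dataset(day)           # one DDA activation for the whole dataset
print(fired)
```

The per-event path must be cheap and immediate; the per-aggregate path can afford heavier analysis, which is the design consequence the paragraph draws about processing logic and technology selection.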


What good threat intelligence looks like in practice

The biggest shortcoming is often in the last mile, connecting intelligence to real-time detection, response, and risk mitigation. Another challenge is organizational silos. In many environments, the CTI team operates separately from SecOps, incident response, or threat hunting teams. Without seamless collaboration between those functions, threat intelligence remains a standalone capability rather than a force multiplier. This is often where threat intelligence teams struggle to demonstrate their value to security operations. ... Rather than picking one over the other, CISOs should focus on blending these sources and correlating them with internal telemetry. The goal is to reduce noise, enhance relevance, and produce enriched insights that reflect the organization’s actual threat surface. Feed selection should also consider integration capabilities — intelligence is only as useful as the systems and people that can act on it. When threat intelligence is operationalized, a clear picture can be formed from the variety of available threat feeds. ... The threat intel team should be seen not as another security function, but as a strategic partner in risk reduction and decision support. CISOs can encourage cross-functional alignment by embedding CTI into security operations workflows, incident response playbooks, risk registers, and reporting frameworks.


4 ways to safeguard CISO communications from legal liabilities

“Words matter incredibly in any legal proceeding,” Brown agreed. “The first thing that will happen will be discovery. And in discovery, they will collect all emails, all Teams, all Slacks, all communication mechanisms, and then run queries against that information.” Speaking with professionalism is not only a good practice in building an effective cybersecurity program, but it can go a long way to warding off legal and regulatory repercussions, according to Scott Jones, senior counsel at Johnson & Johnson. “The seriousness and the impact of your words and all other aspects of how you conduct yourself as a security professional can have impacts not only on substantive cybersecurity, but also what harms might befall your company either through an enforcement action or litigation,” he said. ... CISOs also need to pay attention to what they say based on the medium in which they are communicating. Pay attention to “how we communicate, who we’re communicating with, what platforms we’re communicating on, and whether it’s oral or written,” Angela Mauceri, corporate director and assistant general counsel for cyber and privacy at Northrop Grumman, said at RSA. “There’s a lasting effect to written communications.” She added, “To that point, you need to understand the data governance and, more importantly, the data retention policy of those electronic communication platforms, whether it exists for 60 days, 90 days, or six months.”

Daily Tech Digest - May 20, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Scalability and Flexibility: Every Software Architect's Challenge

Building successful business applications involves addressing practical challenges and strategic trade-offs. Cloud computing offers flexibility, but poor resource management can lead to ballooning costs. Organizations often face dilemmas when weighing feature richness against budget constraints. Engaging stakeholders early in the development process ensures alignment with priorities. ... Right-sizing cloud resources is essential for software architects, who can leverage tools to monitor usage and scale resources automatically based on demand. Serverless computing models, which charge only for execution time, are ideal for unpredictable workloads and seasonal fluctuations, ensuring organizations only use what they need when needed. ... The next decade will usher in unprecedented opportunities for innovation in business applications. Regularly reviewing market trends and user feedback ensures applications remain relevant. Features like voice commands and advanced analytics are becoming standard as users demand more intuitive interfaces, boosting overall performance and creating new avenues for innovation. Software architects can stay alert and flexible by regularly assessing application performance, user feedback, and market trends to guarantee that systems remain relevant.


Navigating the Future of Network Security with Secure Access Service Edge (SASE)

As businesses expand their digital footprint, cyber attackers increasingly target unsecured cloud resources and remote endpoints. Traditional perimeter-based network and security architectures are not capable of protecting distributed environments. Therefore, organizations must adopt a holistic, future-proof network and cybersecurity architecture to succeed in this rapidly changing business landscape. The Challenges: Perimeter-based security revolves around defending the network’s boundary. It assumes that anyone who has gained access to the network is trusted and that everything outside the network is a potential threat. While this model worked well when applications, data, and users were contained within corporate walls, it is not adequate in a world where cloud applications and hybrid work are the norm. ... SASE is an architecture comprising a broad spectrum of technologies, including Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Firewall as a Service (FWaaS), Cloud Access Security Broker (CASB), Data Loss Prevention (DLP), and Software-Defined Wide Area Networking (SD-WAN). Everything is embodied into a single, cloud-native platform that provides advanced cyber protection and seamless network performance for highly distributed applications and users.


Whether AI is a bubble or revolution, how does software survive?

Bubble or not, AI has certainly made some waves, and everyone is looking to find the right strategy. It’s already caused a great deal of disruption—good and bad—among software companies large and small. The speed at which the technology has moved since its coming out party has been stunning; costs have dropped, hardware and software have improved, and the mediocre version of many jobs can be replicated in a chat window. It’s only going to continue. “AI is positioned to continuously disrupt itself,” said McConnell. “It's going to be a constant disruption. If that's true, then all of the dollars going to companies today are at risk because those companies may be disrupted by some new technology that's just around the corner.” First up on the list of disruption targets: startups. If you’re looking to get from zero to market fit, you don’t need to build the same kind of team you used to. “Think about the ratios between how many engineers there are to salespeople,” said Tunguz. “We knew what those were for 10 or 15 years, and now none of those ratios actually hold anymore. If we really are in a position where a single person can have the productivity of 25, management teams look very different. Hiring looks extremely different.” That’s not to say there won’t be a need for real human coders. We’ve seen how badly the vibe coding entrepreneurs get dunked on when they put their shoddy apps in front of a merciless internet.


The AI security gap no one sees—until it’s too late

The most serious—and least visible—gaps stem from the “Jenga-style” layering of managed AI services, where cloud providers stack one service on another and ship them with user-friendly but overly permissive defaults. Tenable’s 2025 Cloud AI Risk Report shows that 77 percent of organisations running Google Cloud’s Vertex AI Workbench leave the notebook’s default Compute Engine service account untouched; that account is an all-powerful identity which, if hijacked, lets an attacker reach every other dependent service. ... CIOs should treat every dataset in the AI pipeline as a high-value asset. Begin with automated discovery and classification across all clouds so you know exactly where proprietary corpora or customer PII live, then encrypt them in transit and at rest in private, version-controlled buckets. Enforce least-privilege access through short-lived service-account tokens and just-in-time elevation, and isolate training workloads on segmented networks that cannot reach production stores or the public internet. Feed telemetry from storage, IAM and workload layers into a Cloud-Native Application Protection Platform that includes Data Security Posture Management; this continuously flags exposed buckets, over-privileged identities and vulnerable compute images, and pushes fixes into CI/CD pipelines before data can leak.
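The least-privilege point above can be made concrete with a small sketch. This is illustrative pseudo-tooling, not any vendor's API: it scans an exported IAM policy (as a plain dict) for bindings that grant broad roles to a default compute service account. The role names and account suffix follow Google Cloud conventions, but the helper itself is hypothetical.

```python
# Hypothetical audit helper: flag bindings that grant broad roles to a
# default compute service account in an exported IAM policy document.
BROAD_ROLES = {"roles/editor", "roles/owner"}
DEFAULT_SA_SUFFIX = "-compute@developer.gserviceaccount.com"

def flag_risky_bindings(policy: dict) -> list[tuple[str, str]]:
    """Return (role, member) pairs where a default service account
    holds an overly broad role."""
    findings = []
    for binding in policy.get("bindings", []):
        role = binding.get("role", "")
        for member in binding.get("members", []):
            if member.endswith(DEFAULT_SA_SUFFIX) and role in BROAD_ROLES:
                findings.append((role, member))
    return findings
```

In practice a CNAPP or DSPM product does this continuously across clouds; the sketch only shows why the check is mechanical enough to automate.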


5 questions defining the CIO agenda today

CIOs along with their executive colleagues and board members “realize that hacks and disruptions by bad actors are an inevitability,” SIM’s Taylor says. That realization has shifted security programs from being mostly defensive measures to ones that continuously evolve the organization’s ability to identify breaches quickly, respond rapidly, and return to operations as fast as possible, Taylor says. The goal today is ensuring resiliency — even as the bad actors and their attack strategies evolve. ... Building a tech stack that can grow and retract with business needs, and that can evolve quickly to capitalize on an ever-shifting technology landscape, is no easy feat, Phelps and other IT leaders readily admit. “In modernizing, it’s such a moving target, because once you got it modernized, something new can come out that’s better and more automated. The entire infrastructure is evolving so quickly,” says Diane Gutiw ... “CIOs should be asking, ‘How do I change or adapt what I do now to be able to manage a hybrid workforce? What does the future of work look like? How do I manage that in a secure, responsible way and still take advantage of the efficiencies? And how do I let my staff be innovative without violating regulation?’” Gutiw says, noting that today’s managers “are the last generation of people who will only manage people.”


Microsoft just taught its AI agents to talk to each other—and it could transform how we work

Microsoft is giving organizations more flexibility with their AI models by enabling them to bring custom models from Azure AI Foundry into Copilot Studio. This includes access to over 1,900 models, including the latest from OpenAI (such as GPT-4.1), Llama, and DeepSeek. “Start with off-the-shelf models because they’re already fantastic and continuously improving,” Smith said. “Companies typically choose to fine-tune these models when they need to incorporate specific domain language, unique use cases, historical data, or customer requirements. This customization ultimately drives either greater efficiency or improved accuracy.” The company is also adding a code interpreter feature that brings Python capabilities to Copilot Studio agents, enabling data analysis, visualization, and complex calculations without leaving the Copilot Studio environment. Smith highlighted financial applications as a particular strength: “In financial analysis and services, we’ve seen a remarkable breakthrough over the past six months,” Smith said. “Deep reasoning models, powered by reinforcement learning, can effectively self-verify any process that produces quantifiable outputs.” He added that these capabilities excel at “complex financial analysis where users need to generate code for creating graphs, producing specific outputs, or conducting detailed financial assessments.”


Culture fit is a lie: It’s time we prioritised culture add

The idea of culture fit originated with the noble intent of fostering team cohesion. But over time, it has become an excuse to hire people who are familiar, comfortable and easy to manage. In doing so, companies inadvertently create echo chambers—workforces that lack diverse perspectives, struggle to challenge the status quo and fail to innovate. Ankur Sharma, Co-Founder & Head of People at Rebel Foods, understands this well. Speaking at the TechHR Pulse Mumbai 2025 conference, Sharma explained how Rebel Foods moved beyond hiring for cultural likeness. “We are not building a family; we are building a winning team,” he said, emphasising that what truly matters is competency, accountability and adaptability. The problem with culture fit is not just about homogeneity—it’s about stagnation. When teams are made up of individuals who think alike, they lose the ability to see challenges from multiple angles. Companies that prioritise cultural uniformity often struggle to pivot in response to industry shifts. ... Leading organisations are abandoning the notion of culture fit and shifting towards ‘culture add’—hiring employees who bring fresh ideas, challenge existing norms, and contribute new perspectives. Instead of asking, ‘Will this person fit in?’ hiring managers are asking, ‘What unique value does this person bring?’


Closing security gaps in multi-cloud and SaaS environments

Many organizations are underestimating the risk — especially as the nature of attacks evolves. Traditional behavioral detection methods often fall short in spotting modern threats such as account hijacking, phishing, ransomware, data exfiltration, and denial of service attacks. Detecting these types of attacks requires correlation and traceability across different sources including runtime events with eBPF, cloud audit logs, and APIs across both cloud infrastructure and SaaS. ... As attackers adopt stealthier tactics — from GenAI-generated malware to supply chain compromises — traditional signature- and rule-based methods fall short. ... A unified cloud and SaaS security strategy means moving away from treating infrastructure, applications, and SaaS as isolated security domains. Instead, it focuses on delivering seamless visibility, risk prioritization, and automated response across the full spectrum of enterprise environments — from legacy on-premises to dynamic cloud workloads to business-critical SaaS platforms and applications. ... Native CSP and SaaS telemetry is essential, but it’s not enough on its own. Continuous inventory and monitoring across identity, network, compute, and AI is critical — especially to detect misconfigurations and drift. 


AI-Driven Test Automation Techniques for Multimodal Systems

Traditional testing frameworks struggle to meet these demands, particularly as multimodal systems continuously evolve through real-time updates and training. Consequently, AI-powered test automation has emerged as a promising paradigm to ensure scalable and reliable testing processes for multimodal systems. ... Natural Language Processing (NLP)-powered AI tools can parse requirements into a more elaborate, well-defined structure, detecting ambiguities and gaps. For example, given the requirement “System should display message quickly,” an AI tool will identify the need for a precise definition of “quickly.” It looks simple, but if missed, it could lead to serious performance issues in production. ... Based on AI-generated requirements and business scenarios, AI-based tools can generate test strategy documents by identifying resources, constraints, and dependencies between systems. All this can be achieved with NLP AI tools ... AI-driven test automation solutions can improve shift-left testing even more by generating automated test scripts faster. Testers can run automation at an early stage when the code is ready to test. AI tools like ChatGPT provide script code in any language, like Java or Python, based on simple text input, using an NLP model to generate code for automation scripts.
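The ambiguity check described above can be caricatured in a few lines. A real NLP tool does far more than keyword matching; this hypothetical sketch only shows the shape of the idea of flagging vague, untestable terms so they can be replaced with measurable criteria:

```python
# Toy illustration only: flag vague, untestable terms in a requirement.
# A production NLP tool would use semantic analysis, not a word list.
VAGUE_TERMS = {"quickly", "fast", "soon", "user-friendly", "robust", "scalable"}

def find_ambiguities(requirement: str) -> list[str]:
    """Return the vague terms found in a requirement sentence."""
    words = [w.strip(".,;:!?").lower() for w in requirement.split()]
    return [w for w in words if w in VAGUE_TERMS]
```

A requirement like “Response time under 200 ms” passes cleanly, while “System should display message quickly” gets flagged for its unmeasurable adverb.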


IGA: What Is It, and How Can SMBs Use It?

The first step in a total IGA strategy has nothing to do with software. It actually starts with IT and business leaders determining what the rules of identity governance and behavior should be. The benefit of having a smaller organization is that there are not quite as many stakeholders as in an enterprise. The challenge, of course, is that people, time and resources are limited. IT may have to assume the role of facilitator and earn buy-in. Nevertheless, this is a worthwhile exercise, as it can help establish a platform for secure growth in the future. And again, for SMBs in regulatory-heavy industries — especially finance, healthcare and government contractors — IGA should be a top priority. ... To do this, CIOs should first procure support from key stakeholders by meeting with them individually to explain the need for IGA as an overarching security technology and policy platform for digital security. In these discussions, CIOs can present the long-term benefits of an IGA program that can streamline user identity verification across services while easing audits and automating compliance. ... A strategic roadmap for IGA should involve minimally disruptive business and user adoption and quick technology implementation. One way to do this is to create a phased implementation approach that tackles the most mission-critical and sensitive systems first before extending to other areas of IT.

Daily Tech Digest - May 19, 2025


Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree


Adopting agentic AI? Build AI fluency, redesign workflows, don’t neglect supervision

AI upskilling is still badly under-prioritized across organizations. Did you know that less than one-third of companies have trained even a quarter of their staff to use AI? How can leaders expect employees to feel empowered to use AI if education isn’t presented as a priority? Maintaining a nimble and knowledgeable workforce is critical, as is fostering a culture that embraces technological change. Team collaboration in this sense could take the form of regular training about agentic AI, highlighting its strengths and weaknesses and focusing on successful human-AI collaborations. For more established companies, role-based training courses could show employees in different capacities and roles how to use generative AI appropriately. ... Although gen AI will not substantially affect organizations’ workforce sizes in the short term, we should still expect an evolution of role titles and responsibilities: for example, from service operations and product development to AI ethics and AI model validation positions. For this shift to happen successfully, executive-level buy-in is paramount. Senior leaders need a clearly defined, organization-wide strategy, including a dedicated team to drive gen AI adoption. We’ve seen that when senior leaders delegate AI integration solely to IT or digital technology teams, the business context can be neglected. 


Half of tech execs are ready to let AI take the wheel

“AI is not just an incremental change from digital business. AI is a step change in how business and society work,” he said. “A significant implication is that, if savviness across the C-suite is not rapidly improved, competitiveness will suffer, and corporate survival will be at stake.” CEOs perceived even the CIO, chief information security officer (CISO), and chief data officer (CDO) as lacking AI savviness. Respondents said the top two factors limiting AI’s deployment and use are the inability to hire adequate numbers of skilled people and an inability to calculate value or outcomes. “CEOs have shifted their view of AI from just a tool to a transformative way of working,” said Jennifer Carter, a principal analyst at Gartner. “This change has highlighted the importance of upskilling. As leaders recognize AI’s potential and its impact on their organizations, they understand that success isn’t just about hiring new talent. Instead, it’s about equipping their current employees with the skills needed to seamlessly incorporate AI into everyday tasks.” This focus on upskilling is a strategic response to AI’s evolving role in business, ensuring that the entire organization can adapt and thrive in this new paradigm. Sixty-six percent of CEOs said their business models are not fit for AI purposes, according to Gartner’s survey. 


What comes after Stack Overflow?

The most obvious option is the one that is already happening whether we like it or not: LLMs are the new Q&A platforms. In the immediate term, ChatGPT and similar tools have become the go-to source for many. They provide the convenience of natural language queries with immediate answers. It’s possible we’ll see official “Stack Overflow GPT” bots or domain-specific LLMs trained on curated programming knowledge. In fact, Stack Overflow’s own team has been experimenting with using AI to draft preliminary answers to questions, while linking back to the original human posts for context. This kind of hybrid approach leverages AI’s speed but still draws on the library of verified solutions the community has built over years. ... Additionally, it’s still possible that the social Q&A sites will save the experience through data partnerships. For example, Stack Overflow, Reddit, and others have moved toward paid licensing agreements for their data. The idea is to both control how AI companies use community content and to funnel some value back to the content creators. We may see new incentives for experienced developers to contribute knowledge. One proposal is that if an AI answer draws from your Stack Overflow post, you could earn reputation points or even a cut of the licensing fee.


8 security risks overlooked in the rush to implement AI

AI models are frequently deployed as part of larger application pipelines, such as through APIs, plugins, or retrieval-augmented generation (RAG) architectures. “Insufficient testing at this level can lead to insecure handling of model inputs and outputs, injection pathways through serialized data formats, and privilege escalation within the hosting environment,” Mindgard’s Garraghan says. “These integration points are frequently overlooked in conventional AppSec [application security] workflows.” ... AI systems may exhibit emergent behaviors only during deployment, especially when operating under dynamic input conditions or interacting with other services. “Vulnerabilities such as logic corruption, context overflow, or output reflection often appear only during runtime and require operational red-teaming or live traffic simulation to detect,” according to Garraghan. ... The rush to implement AI puts CISOs in a stressful bind, but James Lei, chief operating officer at application security testing firm Sparrow, advises CISOs to push back on the unchecked enthusiasm to introduce fundamental security practices into the deployment process. “To reduce these risks, organizations should be testing AI tools in the same way they would any high-risk software, running simulated attacks, checking for misuse scenarios, validating input and output flows, and ensuring that any data processed is appropriately protected,” he says.
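The advice to validate input and output flows might look something like the minimal sketch below. The patterns and the `validate_input`/`sanitize_output` helpers are illustrative assumptions, not any product's API; real deployments layer far more sophisticated controls.

```python
import re

# Hypothetical guardrails around an LLM pipeline: reject inputs carrying
# obvious injection markers, and redact credential-shaped strings in output.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"system prompt"]
SECRET_PATTERN = re.compile(
    r"(AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----)"
)

def validate_input(prompt: str) -> bool:
    """Return False if the prompt matches a known injection marker."""
    low = prompt.lower()
    return not any(re.search(p, low) for p in INJECTION_PATTERNS)

def sanitize_output(text: str) -> str:
    """Redact credential-shaped substrings before the output leaves the app."""
    return SECRET_PATTERN.sub("[REDACTED]", text)
```

Blocklists like this are trivially bypassable on their own; the point is that input and output checkpoints exist at all, so simulated attacks have something to test against.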


A Brief History of Data Stewardship

Today, in leading-edge organizations, data stewardship is at the heart of data-driven transformation initiatives, such as DataOps, AI governance, and improved metadata management, which have evolved data stewardship beyond traditional data quality control. Data stewards can be found in every industry and in organizations of any size. Modern data stewards interact with: automated data quality tools that identify and resolve data issues at scale; data catalogs and data lineage applications that organize business and technical metadata and provide searchable inventories of data assets; and AI/ML models that require extensive monitoring to ensure they are trained on unbiased, accurate datasets. The scope of data stewardship has expanded to include ethical considerations, particularly concerning data privacy, algorithmic bias, and responsible AI. Data stewards are increasingly seen as the conscience of data within organizations, championing not only compliance but also fairness, transparency, and accountability. New organizational models, such as federated data stewardship – in which data stewardship responsibilities are distributed across teams – can promote improved collaboration and enable scaling data stewardship efforts alongside agile and decentralized business units.


Introducing Strands Agents, an Open Source AI Agents SDK

In Strands’ model-driven approach, tools are key to how you customize the behavior of your agents. For example, tools can retrieve relevant documents from a knowledge base, call APIs, run Python logic, or just simply return a static string that contains additional model instructions. Tools also help you achieve complex use cases in a model-driven approach, such as with these Strands Agents example pre-built tools: Retrieve tool: This tool implements semantic search using Amazon Bedrock Knowledge Bases. Beyond retrieving documents, the retrieve tool can also help the model plan and reason by retrieving other tools using semantic search. For example, one internal agent at AWS has over 6,000 tools to select from! Models today aren’t capable of accurately selecting from quite that many tools. Instead of describing all 6,000 tools to the model, the agent uses semantic search to find the most relevant tools for the current task and describes only those tools to the model. ... Thinking tool: This tool prompts the model to do deep analytical thinking through multiple cycles, enabling sophisticated thought processing and self-reflection as part of the agent. In the model-driven approach, modeling thinking as a tool enables the model to reason about if and when a task needs deep analysis.
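The retrieve-tool idea above, ranking a large tool catalog against the current task and describing only the top matches to the model, can be sketched with a stand-in similarity measure. Token overlap substitutes for real embedding similarity here, and the function names are hypothetical, not part of the Strands SDK:

```python
# Sketch of semantic tool selection: rank tool descriptions against a task
# and expose only the top-k to the model. Jaccard token overlap stands in
# for the embedding-based similarity a real retrieve tool would use.
def similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_tools(task: str, tools: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k tools whose descriptions best match the task."""
    ranked = sorted(tools, key=lambda name: similarity(task, tools[name]),
                    reverse=True)
    return ranked[:k]
```

With 6,000 tools, describing only the handful that survive this filter keeps the model's context small while preserving its ability to pick the right one.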


AI hallucinations and their risk to cybersecurity operations

“AI hallucinations are an expected byproduct of probabilistic models,” explains Chetan Conikee, CTO at Qwiet AI, emphasizing that the focus shouldn’t be on eliminating them entirely but on minimizing operational disruption. “The CISO’s priority should be limiting operational impact through design, monitoring, and policy.” That starts with intentional architecture. Conikee recommends implementing a structured trust framework around AI systems, an approach that includes practical middleware to vet inputs and outputs through deterministic checks and domain-specific filters. This step ensures that models don’t operate in isolation but within clearly defined bounds that reflect enterprise needs and security postures. Traceability is another cornerstone. “All AI-generated responses must carry metadata including source context, model version, prompt structure, and timestamp,” Conikee notes. Such metadata enables faster audits and root cause analysis when inaccuracies occur, a critical safeguard when AI output is integrated into business operations or customer-facing tools. For enterprises deploying LLMs, Conikee advises steering clear of open-ended generation unless necessary. Instead, organizations should lean on RAG grounded in curated, internal knowledge bases. 
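The traceability requirement Conikee describes can be sketched as a thin wrapper around every model call. The field set and the `trace_response` helper are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

# Sketch of response traceability: every model output carries metadata
# (model version, prompt fingerprint, timestamp) to support later audits
# and root cause analysis when an inaccuracy surfaces.
@dataclass
class TracedResponse:
    text: str
    model_version: str
    prompt_sha256: str
    created_at: str

def trace_response(text: str, prompt: str, model_version: str) -> TracedResponse:
    """Attach audit metadata to a model response."""
    return TracedResponse(
        text=text,
        model_version=model_version,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        created_at=datetime.now(timezone.utc).isoformat(),
    )
```

Hashing the prompt rather than storing it verbatim is one way to keep an audit trail without persisting potentially sensitive input, at the cost of not being able to replay it.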


Can Data Governance Set Us Free?

Internally, an important lesson has been to view data management as a federated service. This entails a shift from data management being a ‘governance’ activity – something people did because we pushed them to do it – to a service-driven activity – something people do because they want to. We worked with our User-Centred Service Design team to agree an underpinning set of principles to get buy-in across the organisation on the purpose of, and facets to, good data management. The overarching principle is that data are valuable, shared assets. We can maximise value by making data widely available, easy to use and understand, whilst ensuring data are protected and not misused. Bringing the service to life means getting four things right: First, a proportionate vision for service maturity. All data need to have basic information registered. But where data are widely used or feed into critical processes, it becomes instrumental to dedicate resources to supporting ease of access, use and quality for our users. We are increasingly tending toward managing these assets centrally. Second, the assignment of clear responsibilities across the federation. We are working through which datasets will be managed centrally and which will be managed by teams across the Bank that are expert in them. 


To Fix Platform Engineering, Build What Users Actually Want

If it takes developers and engineers months to become productive, your platform isn’t helping — it’s hindering. A great platform should be as frictionless and intuitive as a consumer-grade product. Internal platforms must empower instant productivity. If your platform offers compute, it shouldn’t just be raw power — it should be integrated, easy to adopt, and evolve seamlessly in the background. Let’s not create unnecessary cognitive load. Developers are adapting quickly to generative AI and new tech. The real value lies in solving real, tangible problems — not fictional ones. This brings us to a deeper look at what’s not working — and why so many efforts fail despite the best intentions. ... Most enterprises are hybrid by nature — legacy systems, siloed processes and complex workflows are the norm. The real challenge isn’t just technological; it’s integrating platform engineering into these messy realities without making it worse. Today, no single product solves this end-to-end. We’re still lacking a holistic solution that manages internal workflows, governance and hybrid complexity without adding friction. What’s needed is a shift in mindset — from assembling open source tools to building integrated, adoption-focused, business-aligned platforms. And that shift must be guided by clear trends in tooling and team structure.


Liquid cooling becoming essential as AI servers proliferate

“A lot of the carbon emissions of the data center happen in the build of it, in laying down the slab,” says Josh Claman, CEO at Accelsius, a liquid cooling company. “I hope that companies won’t just throw all that away and start over.” In addition to the environmental benefits, upgrading an air-cooled data center into a hybrid, liquid and air system has other advantages, says Herb Hogue, CTO at Myriad360, a global systems integrator. Liquid cooling is more effective than air alone, he says, and when used in combination with air cooling, the temperature of the air cooling systems can be increased slightly without impacting performance. “This reduces overall energy consumption and lowers utility bills,” he says. Liquid cooling also allows for not just lower but also more consistent operating temperatures, Hogue says. That leads to less wear on IT equipment, and, without fans, fewer moving parts per server. The downsides, however, include the cost of installing the hybrid system and needed specialized operations and maintenance skills. There might also be space constraints and other challenges. Still, it can be a smart approach for handling high-density server setups, he says. And there’s one more potential benefit, says Gerald Kleyn, vice president of customer solutions for HPC and AI at Hewlett Packard Enterprise. 

Daily Tech Digest - May 18, 2025


Quote for the day:

“We are all failures - at least the best of us are.” -- J.M. Barrie


Extra Qubits Slash Measurement Time Without Losing Precision

Fast and accurate quantum measurements are essential for future quantum devices. However, quantum systems are extremely fragile; even small disturbances during measurement can cause significant errors. Until now, scientists faced a fundamental trade-off: they could either improve the accuracy of quantum measurements or make them faster, but not both at once. Now, a team of quantum physicists, led by the University of Bristol and published in Physical Review Letters, has found a way to break this trade-off. The team’s approach involves using additional qubits, the fundamental units of information in quantum computing, to “trade space for time.” Unlike the simple binary bits in classical computers, qubits can exist in multiple states simultaneously, a phenomenon known as superposition. In quantum computing, measuring a qubit typically requires probing it for a relatively long time to achieve a high level of certainty. ... Remarkably, the team’s process allows the quality of a measurement to be maintained, or even enhanced, even as it is sped up. The method could be applicable to a broad range of leading quantum hardware platforms. As the global race to build the highest-performance quantum technologies continues, the scheme has the potential to become a standard part of the quantum read-out process.


The leadership legacy: How family shapes the leaders we become

We’ve built leadership around performance metrics, dashboards and influence. Yet the traits that truly sustain teams — empathy, accountability, consistency — are often born not in corporate training but in the everyday rituals of family life. On this International Day of Families, it’s time to reevaluate leadership models that have long been defined by clarity, charisma and control, and redefine them around something deeper: care, connection and community. ... Here are five principles drawn from healthy family systems that can reframe leadership models: Consistency over chaos: Families thrive on routines and reliability. Leaders who bring emotional consistency, set clear expectations and avoid reactionary decisions foster psychological safety. Presence over performance: In families, presence often matters more than fixing the problem. Leaders who truly listen, offer time and engage with empathy build trust that performance alone cannot buy. Accountability with care: Families call out mistakes, but with the intent to support, not shame. Leaders who combine feedback with care build growth mindsets without fear. Shared purpose over solo glory: Families move together. In workplaces, this means shifting from individual heroism to collaborative wins. Leaders must champion shared success. Adaptability with anchoring: Just like families adjust to life stages, leaders need to flex without losing values. Adapt strategy, but anchor culture.


IPv4 was meant to be dead within a decade; what's happening with IPv6?

Globally, IPv6 is now approaching the halfway mark of Internet traffic. Google, which tracks the percentage of its users that reach it via IPv6, reports that around 46% of users worldwide access Google over IPv6 as of mid-May 2025. In other words, given the ubiquity of Google's usage, nearly half of Internet users have IPv6 capability today. While that’s a significant milestone, IPv4 still carries about half of the traffic, even though it was long expected to be retired by now. The growth has not been exponential, but it is persistent. ... The first, and arguably largest hurdle is that IPv6 was not designed to be backward-compatible with IPv4, a big criticism of IPv6 in general and largely blamed for its slow adoption. An IPv6-only device cannot directly communicate with an IPv4-only device without the help of a complex translation gateway, such as NAT64. This means networks usually run dual-stack support for both protocols, and IPv4 can't just be "switched off." This has major downsides, though; dual-stack operation doubles certain aspects of network management, requiring two address configurations, two sets of firewall rules, and more, which increases operational complexity for businesses and home users alike. This complexity causes a significant slowdown in deployment, as network engineers and software developers must ensure everything works on IPv6 in addition to IPv4. Any lack of feature parity or small misconfigurations can cause major issues.
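The translation problem is easiest to see in NAT64's address mapping: under the well-known prefix `64:ff9b::/96` (RFC 6052), an IPv4 address is embedded in the low 32 bits of an IPv6 address, which is what lets an IPv6-only client address an IPv4-only server through the gateway. A minimal sketch using Python's standard `ipaddress` module:

```python
import ipaddress

# NAT64 well-known prefix mapping (RFC 6052): the IPv4 address occupies
# the low 32 bits of an IPv6 address under 64:ff9b::/96.
def nat64_map(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address into the NAT64 well-known prefix."""
    prefix = int(ipaddress.IPv6Address("64:ff9b::"))
    return ipaddress.IPv6Address(prefix | int(ipaddress.IPv4Address(ipv4)))
```

The gateway doing this mapping must also rewrite headers and maintain state, which is exactly the operational complexity dual-stack operators are weighing against running both protocols everywhere.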


Agentic mesh: The future of enterprise agent ecosystems

Many companies describe agents as “science experiments” that never leave the lab. Others complain about suffering the pain of “a thousand proof-of-concepts” with agents. The root cause of this pain? Most agents today aren’t designed to meet enterprise-grade standards. ... As enterprises adopt more agents, a familiar problem is emerging: silos. Different teams deploy agents in CRMs, data warehouses, or knowledge systems, but these agents operate independently, with no awareness of each other. ... An agentic mesh is a way to turn fragmented agents into a connected, reliable ecosystem. But it does more: It lets enterprise-grade agents operate in an enterprise-grade agent ecosystem. It allows agents to find each other and to safely and securely collaborate, interact, and even transact. The agentic mesh is a unified runtime, control plane, and trust framework that makes enterprise-grade agent ecosystems possible. ... Agentic mesh fulfills two major architectural goals: It lets you build enterprise-grade agents and it gives you an enterprise-grade run-time environment to support these agents. To support secure, scalable, and collaborative agents, an agentic mesh needs a set of foundational components. These capabilities ensure that agents don’t just run, but run in a way that meets enterprise requirements for control, trust, and performance.


OpenAI launches research preview of Codex AI software engineering agent for developers

The new Codex goes far beyond its predecessor. Now built to act autonomously over longer durations, Codex can write features, fix bugs, answer codebase-specific questions, run tests, and propose pull requests—each task running in a secure, isolated cloud sandbox. The design reflects OpenAI’s broader ambition to move beyond quick answers and into collaborative work. Josh Tobin, who leads the Agents Research Team at OpenAI, said during a recent briefing: “We think of agents as AI systems that can operate on your behalf for a longer period of time to accomplish big chunks of work by interacting with the real world.” Codex fits squarely into this definition. ... Codex executes tasks without internet access, drawing only on user-provided code and dependencies. This design ensures secure operation and minimizes potential misuse. “This is more than just a model API,” said Embiricos. “Because it runs in an air-gapped environment with human review, we can give the model a lot more freedom safely.” OpenAI also reports early external use cases. Cisco is evaluating Codex for accelerating engineering work across its product lines. Temporal uses it to run background tasks like debugging and test writing. Superhuman leverages Codex to improve test coverage and enable non-engineers to suggest lightweight code changes. 


AI-Driven Software: Why a Strong CI/CD Foundation Is Essential

While AI can significantly boost speed, it also drives higher throughput, increasing the demand for testing, QA monitoring, and infrastructure investment. More code means development teams need to find ways to shorten feedback loops, build times, and other key elements of the development process to keep pace. Without a solid DevOps framework and CI/CD engine to manage this, AI can create noise and distractions that drain engineers’ attention, slowing them down instead of freeing them to focus on what truly matters: delivering quality software at the right pace. ... By investing in a CI/CD platform with these capabilities, you’re not just buying a tool — you’re establishing the foundation that will determine whether AI becomes a force multiplier for your team or simply creates more noise in an already complex system. The right platform turns your CI/CD pipeline from a bottleneck into a strategic advantage, allowing your team to harness AI’s potential while maintaining quality, security, and reliability. To harness the speed and efficiency gains of AI-driven development, you need a CI/CD platform capable of handling high throughput, rapid iteration, and complex testing cycles while keeping infrastructure and cloud costs in check. ... It is easy to get caught up in the excitement of powerful technologies like AI and dive straight into experimentation without laying the right groundwork for success.


Quantum Algorithm Outpaces Classical Solvers in Optimization Tasks, Study Indicates

The study focuses on a class of problems known as higher-order unconstrained binary optimization (HUBO), which model real-world tasks like portfolio selection, network routing, or molecule design. These problems are computationally intensive because the number of possible solutions grows exponentially with problem size. On paper, those are exactly the types of problems that most quantum theorists believe quantum computers, once robust enough, would excel at solving. The researchers evaluated how well different solvers — both classical and quantum — could find approximate solutions to these HUBO problems. The quantum system used a technique called bias-field digitized counterdiabatic quantum optimization (BF-DCQO). The method builds on known quantum strategies by evolving a quantum system under special guiding fields that help it stay on track toward low-energy states. ... It is probably important to note that the researchers didn’t just rely on the quantum component and that the hybrid approach was essential in securing the quantum edge. Their BF-DCQO pipeline includes classical preprocessing and postprocessing, such as initializing the quantum system with good guesses from fast simulated annealing runs and cleaning up final results with simple local searches.
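The classical halves of the hybrid pipeline described above — seeding the optimizer with fast simulated annealing runs and cleaning up final results with simple local searches — can be sketched as follows. This is a toy illustration only: the HUBO instance, parameters, and functions are hypothetical, and the quantum BF-DCQO stage between the two steps is omitted entirely.

```python
import math
import random

# Toy higher-order unconstrained binary optimization (HUBO) instance:
# minimize an energy with single-, two-, and three-body terms over bits
# in {0, 1}. The terms and problem size are illustrative, not from the study.
TERMS = [
    ((0,), 1.0), ((1,), -2.0),
    ((0, 1), 1.5), ((1, 2), -1.0),
    ((0, 1, 2), 2.0), ((2, 3, 4), -1.5),
]
N = 5

def energy(bits):
    # Each term contributes its coefficient when all of its bits are 1.
    return sum(c for idx, c in TERMS if all(bits[i] for i in idx))

def simulated_annealing(steps=2000, t0=2.0, seed=0):
    """Classical preprocessing: a fast annealing run to produce a good seed."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(N)]
    e = energy(bits)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i = rng.randrange(N)
        bits[i] ^= 1                          # propose a single-bit flip
        e_new = energy(bits)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new                         # accept the flip
        else:
            bits[i] ^= 1                      # reject: undo the flip
    return bits

def local_search(bits):
    """Classical postprocessing: greedy single-bit flips to a local minimum."""
    e = energy(bits)
    improved = True
    while improved:
        improved = False
        for i in range(N):
            bits[i] ^= 1
            e_new = energy(bits)
            if e_new < e:
                e, improved = e_new, True
            else:
                bits[i] ^= 1
    return bits, e

# In the paper's pipeline the annealed seed would initialize the quantum
# BF-DCQO evolution; here the two classical stages are simply chained.
seed_bits = simulated_annealing()
best_bits, best_e = local_search(seed_bits)
```

The local search guarantees that no single bit flip can lower the energy further — exactly the kind of cheap cleanup that makes the raw samples from a noisy optimizer usable.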


How human connection drives innovation in the age of AI

When we are working toward a shared goal, there are core values and shared aspirations that bind us. By actively seeking out this common ground and fostering positive interactions, we can all bridge divides, both in our personal lives and within our organizations.  Feeling connection is not just good for our own wellbeing, it is also crucial for business outcomes. According to research, 94% of employees say that feeling connected to their colleagues makes them more productive at work; connected employees are also over four times as likely to feel job satisfaction and half as likely to leave their jobs within the next year.  ... As we integrate AI deeper into our workflows, we should be deliberate in cultivating environments that prioritize genuine human connection and the development of these essential human skills.  This means creating intentional spaces—both physical and virtual—that encourage open dialogue, active listening, and the respectful exchange of diverse perspectives. Leaders should champion empathy and relationship-building skill development within their teams, actively working to promote thoughtful opportunities for human connection in our AI-driven environment. Ultimately, the future of innovation and progress will be shaped by our ability to harness the power of AI in a way that amplifies our uniquely human capacities, especially our innate drive to connect with one another.


Enterprise Intelligence: Why AI Data Strategy Is A New Advantage

Forward-thinking enterprises are embracing cloud-native data platforms that abstract infrastructure complexity and enable a new class of intelligent, responsive applications. These platforms unify data access across object, file, and block formats while enforcing enterprise-grade governance and policy. They incorporate intelligent tiering and KV caching strategies that learn from access patterns to prioritize hot data, accelerating inference and reducing overhead. They support multimodal AI workloads by seamlessly managing petabyte-scale datasets across edge, core, and cloud locations—without burdening teams with manual tuning. And they scale elastically, adapting to growing demand without disruptive re-architecture. ... AI-driven businesses are no longer defined by how much compute power they can deploy but by how efficiently they can manage, access, and utilize data. The enterprises that rethink their data strategy—eliminating friction, reducing latency, and ensuring seamless integration across AI pipelines—will gain a decisive competitive edge. For CIOs, the message is clear: AI success isn’t just about faster algorithms or bigger models; it’s about creating a smarter, more agile data architecture. Organizations that embrace real-time, scalable data platforms will not only unlock AI’s full potential but also future-proof their operations in an increasingly data-driven world.
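The intelligent-tiering idea mentioned above — learning from access patterns to promote hot data into a fast tier — can be illustrated with a minimal sketch. The class, thresholds, and tier names here are hypothetical, not taken from any particular platform:

```python
from collections import Counter

class TieringCache:
    """Toy hot/cold tiering policy: promote an object to the fast tier once
    its access count crosses a threshold, and demote the least-accessed
    object when the fast tier exceeds capacity. All parameters are
    illustrative, not from any product."""

    def __init__(self, hot_capacity=2, promote_after=3):
        self.counts = Counter()          # observed access pattern
        self.hot = set()                 # fast tier (e.g., NVMe or KV cache)
        self.hot_capacity = hot_capacity
        self.promote_after = promote_after

    def access(self, key):
        self.counts[key] += 1
        if self.counts[key] >= self.promote_after:
            self.hot.add(key)
            self._evict_if_needed()
        return "hot" if key in self.hot else "cold"

    def _evict_if_needed(self):
        # Demote the least-accessed hot object when the fast tier is full.
        while len(self.hot) > self.hot_capacity:
            coldest = min(self.hot, key=lambda k: self.counts[k])
            self.hot.discard(coldest)

cache = TieringCache()
for key in ["a", "a", "a", "b", "c", "a"]:
    cache.access(key)
# "a" has crossed the promotion threshold; "b" and "c" remain cold.
```

A production tiering engine would weigh recency, object size, and cost per tier as well, but the core loop — observe accesses, promote hot data, demote cold data — is the same.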


The future of the modern data stack: Trends and predictions

AI and ML are also key drivers of the modern data stack, because they are creating new (or greatly amplifying existing) demands on data infrastructure. Suddenly, the provenance and lineage of information is taking on new importance, as enterprises fight against “hallucinations” and accidental exposure of PII or PHI through AI mechanisms. Data sharing is also more important than ever, because no single organization is likely to host all the information GenAI models need on its own; enterprises will intrinsically rely on others’ data when augmenting models through RAG, prompt engineering, and other approaches to building AI-based solutions. ... The goal of simplifying data management and giving more users more access to data has been around since long before computers were invented. But recent improvements in GenAI and data sharing have vastly accelerated these trends — suddenly, the idea that non-technical professionals can transform, combine, analyze, and utilize complex datasets from inside and outside an organization feels not just achievable, but probable. ... Advances in data sharing, especially heterogeneous data sharing, through common formats like Iceberg, governance approaches like Polaris, and safety and security mechanisms like Vendia IceBlock are quickly removing the historical challenges to data product distribution.
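The RAG flow referenced above — augmenting a model with shared data the organization does not itself host — reduces to a retrieve-then-prompt loop. A minimal sketch, with toy keyword-overlap retrieval standing in for vector search and entirely hypothetical documents:

```python
def retrieve(query, documents, k=2):
    """Toy retrieval: rank documents by word overlap with the query.
    A real system would use vector embeddings, but the flow is the same."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Retrieval-augmented prompt: graft external context onto the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical datasets shared by partners -- the external sources the
# article says organizations will rely on when building GenAI solutions.
docs = [
    "Q3 shipping delays were caused by port congestion in Rotterdam.",
    "The loyalty program launched in 2021 with three tiers.",
    "Port congestion eased after new customs software was deployed.",
]
prompt = build_prompt("What caused the shipping delays?", docs)
```

This is also where provenance matters: each retrieved snippet can carry its source table or share, so an answer can be traced back through lineage rather than trusted blindly.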