
Daily Tech Digest - February 08, 2026


Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi



Why agentic AI and unified commerce will define ecommerce in 2026

Agentic AI and unified commerce are set to shape ecommerce in 2026 because the foundations are now in place: consumers are increasingly comfortable using AI tools, and retailers are under pressure to operate seamlessly across channels. ... When inventory, orders, pricing, and customer context live in disconnected systems, both humans and AI struggle to deliver consistent experiences. When those systems are unified, retailers can enable more reliable automation, better availability promises, and more resilient fulfillment, especially at peak. ... Unified commerce platforms matter because they provide a single operational framework for inventory, orders, pricing, and customer context. That coordination is increasingly critical as more interactions become automated or AI-assisted. ... The shift toward “agentic” happens when AI can safely take actions, like resolving a customer service step, updating a product feed, or proposing a replenishment recommendation, based on reliable data and explicit rules. That’s why unified commerce matters: it reduces the risk of automation acting on partial truth. Because ROI varies dramatically by category, maturity, and data quality, it’s safer to avoid generic percentage claims. The defensible message is: companies that pair AI with clean operational data and clear governance will unlock automation faster and with fewer reputational risks. ... Ultimately, success in 2026 will not be defined by how many AI features a retailer deploys, but by how well their systems can interpret context, act reliably, and scale under pressure.
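The "explicit rules" idea above can be made concrete with a small sketch: a hypothetical replenishment agent that only acts when its inventory data is trustworthy and escalates to a human otherwise. The function name, fields, and thresholds below are illustrative, not from the article.

```python
def propose_replenishment(sku: str, on_hand: int, reorder_point: int,
                          data_fresh: bool) -> dict:
    """Agentic step with an explicit guardrail: only act on unified,
    fresh inventory data; otherwise defer to a human reviewer."""
    if not data_fresh:
        # Automation acting on partial truth is the risk unified
        # commerce is meant to reduce, so stale data means no action.
        return {"action": "escalate", "reason": "stale inventory data"}
    if on_hand < reorder_point:
        return {"action": "reorder", "sku": sku}
    return {"action": "none", "sku": sku}
```

The guard clause is the point: the agent's authority to act is conditioned on data quality, not just on the model's confidence.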


EU's Digital Sovereignty Depends On Investment In Open-Source And Talent

We argue that Europe must think differently and invest where it matters, leveraging its strengths, and open technologies are the place to look. While Europe does not have the tech giants of the US and China, it possesses a huge pool of innovation and human capital, as well as a small army of capable and efficient technology service providers, start-ups, and SMEs. ... Recent data shows that while Europe accounts for a substantial share of global open source developers, its contribution to open source-derived infrastructure remains fragmented, with development concentrated in a small number of countries. ... Europe may not have a Silicon Valley, but it has something better: a robust open source workforce. We are beginning to recognize this through fora such as the recent European Open Source Awards, which celebrated European citizens and residents working on things ranging from the Linux kernel and open office suites to open hardware and software preservation. ... Europe has a chance of succeeding. Historically, Europe has done a good job in making open source and open standards a matter of public policy. For example, the European Commission's DG DIGIT has an open source software strategy which is being renewed this year, and Europe possesses three European Standards Organizations, namely CEN, CENELEC, and ETSI. While China has an open source software strategy, Europe is arguably leading the US in harnessing the potential of open technologies as a matter of public and industrial policy, and it has a strong foundation for catching up to China.


Is artificial general intelligence already here? A new case that today's LLMs meet key tests

Approaching the AGI question from different disciplinary perspectives—philosophy, machine learning, linguistics, and cognitive science—the four scholars converged on a controversial conclusion: by reasonable standards, current large language models (LLMs) already constitute AGI. Their argument addresses three key questions: What is general intelligence? Why does this conclusion provoke such strong reactions? And what does it mean for ... "There is a common misconception that AGI must be perfect—knowing everything, solving every problem—but no individual human can do that," explains Chen, who is lead author. "The debate often conflates general intelligence with superintelligence. The real question is whether LLMs display the flexible, general competence characteristic of human thought. Our conclusion: insofar as individual humans possess general intelligence, current LLMs do too." ... "This is an emotionally charged topic because it challenges human exceptionalism and our standing as being uniquely intelligent," says Belkin. "Copernicus displaced humans from the center of the universe, Darwin displaced humans from a privileged place in nature; now we are contending with the prospect that there are more kinds of minds than we had previously entertained." ... "We're developing AI systems that can dramatically impact the world without being mediated through a human and this raises a host of challenging ethical, societal, and psychological questions," explains Danks.


Biometrics deployments at scale need transparency to help businesses, gain trust

As adoption invites scrutiny, more biometrics evaluations, completed assessments and testing options become available. Communication is part of the same issue, with major projects like EES, U.S. immigration and protest enforcement, and more pedestrian applications like access control and mDLs all taking off. ... Biometric physical access control is growing everywhere, but with some key sectorial and regional differences, Goode Intelligence Chief Analyst Alan Goode explains in a preview of his firm’s latest market research report on the latest episode of the Biometric Update Podcast. Imprivata could soon be on the market, with PE owner Thoma Bravo working with JPMorgan and Evercore to begin exploring its options. ... A panel at the “Identity, Authentication, and the Road Ahead 2026” event looked at NIST’s work on a playbook to help businesses implement mDLs. Representatives from the NCCoE, Better Identity Coalition, PNC Bank and AAMVA discussed the emerging situation, in which digital verifiable credentials are available, but people don’t know how to use them. ... DHS S&T found 5 of 16 selfie biometrics providers met the performance goals of its Remote Identity Validation Rally, Shufti and Paravision among them. RIVR’s first phase showed that demographically similar imposters still pose a significant problem for many face biometrics developers.


The Invisible Labor Force Powering AI

A low-cost labor force is essential to how today’s AI models function. Human workers are needed at every stage of AI production for tasks like creating and annotating data, reinforcing models, and moderating content. “Today’s frontier models are not self-made. They’re socio-technical systems whose quality and safety hinge on human labor,” said Mark Graham, a professor at the University of Oxford Internet Institute and a director of the Fairwork project, which evaluates digital labor platforms. In his book Feeding the Machine: the Hidden Human Labor Powering AI (Bloomsbury, 2024), Graham and his co-authors illustrate that this global workforce is essential to making these systems usable. “Without an ongoing, large human-in-the-loop layer, current capabilities would be far more brittle and misaligned, especially on safety-critical or culturally sensitive tasks,” Graham said. ... The industry’s reliance on a distributed, gig-work model goes back years. Hung points to the creation of the ImageNet database around 2007 as the moment that set the referential data practices and work organization for modern AI training. ... However, cost is not the only factor. Graham noted that cost arbitrage plays a role, but it is not the whole explanation. AI labs, he said, need extreme scale and elasticity, meaning millions of small, episodic tasks that can be staffed up or down at short notice, as well as broad linguistic and cultural coverage that no single in-house team can reproduce.


Code smells for AI agents: Q&A with Eno Reyes of Factory

In order to build a good agent, you have to have one that's model agnostic. It needs to be deployable in any environment, any OS, any IDE. A lot of the tools out there force you to make a hard trade-off that we felt wasn't necessary: you either have to vendor-lock yourself to one LLM or ask everyone at your company to switch IDEs. To build a true model-agnostic, vendor-agnostic coding agent, you put in a bunch of time and effort to figure out all the harness engineering that's necessary to make that succeed, which we think is a fairly different skillset from building models. And so that's why we think companies like us actually are able to build agents that outperform on most evaluations from our lab. ... All LLMs have context limits, so you have to manage that as the agent progresses through tasks that may take as long as eight to ten hours of continuous work. There are things like how you choose to instruct or inject environment information. It's how you handle tool calls. The sum of all of these things requires attention to detail. There really is no individual secret, which is also why we think companies like us can actually do this. It's the sum of hundreds of little optimizations. The industrial process of building these harnesses is what we think is interesting or differentiated. ... Of course, end-to-end and unit tests. There are auto formatters that you can bring in, and SAST (static application security testing) scanners: your Snyks of the world.
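One of the "hundreds of little optimizations" mentioned is managing the context window as an agent runs for hours. A minimal sketch of one common approach, keeping the system prompt and dropping the oldest messages first under a crude token estimate. This is illustrative of the general technique, not Factory's actual harness; the four-characters-per-token heuristic is an assumption.

```python
def fit_context(messages, budget_tokens, count_tokens=lambda m: len(m) // 4):
    """Keep the system prompt plus the most recent messages that fit
    within the model's context budget, dropping the oldest first."""
    system, rest = messages[0], messages[1:]
    used = count_tokens(system)
    kept = []
    for msg in reversed(rest):  # walk newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

Real harnesses layer more on top (summarizing dropped turns, pinning tool schemas), but the budget-and-trim loop is the core mechanic.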


Software-Defined Vehicles Transform Auto Industry With Four-Stage Maturity Framework For Engineers

More refined software architectures in both edge and cloud enable the interpretation of real-time data for predictive maintenance, adaptive user interfaces, and autonomous driving functions, while cloud-based AI virtualized development systems enable continuous learning and updates. Electrification has only further accelerated this evolution as it opened the door for tech players from other industries to enter the automotive market. This represents an unstoppable trend as customers now expect the same seamless digital experiences they enjoy on other devices. ... Legacy vehicle systems rely on dozens of electronic control units (ECUs), each managing isolated functions, such as powertrain or infotainment systems. SDVs consolidate these functions into centralized compute domains connected by high-speed networks. This architecture provides hardware and software abstraction, enabling OTA updates, seamless cross-domain feature integration, and real-time data sharing, all of which are essential for continuous innovation. ... Processing sensor data at the edge – directly within the vehicle – enables highly personalized experiences for drivers and passengers. It also supports predictive maintenance, allowing vehicles to anticipate mechanical issues before they occur and proactively schedule service to minimize downtime and improve reliability. Equally important are abstraction layers that decouple software applications from underlying hardware.
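The abstraction-layer point can be sketched in a few lines: application logic depends on an interface, so swapping the sensor or ECU vendor never touches the application. The class and function names below are hypothetical illustrations, not automotive APIs.

```python
from abc import ABC, abstractmethod

class TemperatureSensor(ABC):
    """Abstraction layer: applications code against this interface,
    not against any specific ECU or chip vendor's driver."""
    @abstractmethod
    def read_celsius(self) -> float: ...

class VendorASensor(TemperatureSensor):
    def read_celsius(self) -> float:
        return 21.5  # a real implementation would call vendor A's driver

def cabin_climate_ok(sensor: TemperatureSensor, target: float = 22.0) -> bool:
    """Application logic stays unchanged if the hardware is swapped."""
    return abs(sensor.read_celsius() - target) < 2.0
```

An OTA update can then replace `VendorASensor` with a new implementation without redeploying `cabin_climate_ok`, which is the decoupling the article describes.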


Cybersecurity and Privacy Risks in Brain-Computer Interfaces and Neurotechnology

Neuromorphic computing is developing faster than predicted by replicating the human brain's neural architecture for efficient, low-power AI computation. As highlighted in talks around brain-inspired chips and meshing, these systems are blurring distinctions between biological and silicon-based computation. Meanwhile, bidirectional communication is made possible by BCIs, such as those being developed by businesses and research facilities, which can read brain activity for feedback or control and possibly write signals back to affect cognition. ... Neural data is essentially personal. Breaches could expose memories, emotions, or subconscious biases. Adversaries may reverse-engineer intentions for coercion, fraud, or espionage as AI decodes brain scans for "mind captioning" or talent uploading. ... Compromised BCIs blur cyber-physical boundaries further than OT-IT convergence already has. A malevolent actor might damage medical implants, alter augmented reality overlays, or weaponize neurotech in national security scenarios. ... Implantable devices rely on worldwide supply chains prone to tampering. Neuromorphic hardware, while efficient, provides additional attack surfaces if not designed with zero-trust principles. Using AI to process neural signals can introduce biases, which may result in unfair treatment in brain-augmented systems.


Designing for Failure: Chaos Engineering Principles in System Design

To design for failure, we must understand how the system behaves when failure inevitably happens. What is the cost? What is the impact? How do we mitigate it? How do we still maintain over 99% uptime? This requires treating failure as a default state, not an exception. ... The first step is defining steady-state behavior. Without this, there is no baseline to measure against. ... Chaos experiments are most valuable in production. This is where real traffic patterns, real user behavior, and real data shapes exist. That said, experiments must be controlled. ... Chaos Engineering is not a one-off exercise. Systems evolve. Dependencies change. Teams rotate. Experiments should be automated, repeatable, and run continuously, either as scheduled jobs or integrated into CI/CD pipelines. Over time, experiments can be expanded to test higher-impact scenarios. ... Additional considerations include health checks, failover timing, and data consistency. Strong consistency simplifies reasoning but reduces availability. Eventual consistency improves availability but introduces complexity and potential inconsistency windows. ... Network failures are unavoidable in distributed systems. Latency spikes, packets get dropped, DNS fails, and sometimes the network splits entirely. Many system outages are not caused by servers crashing, but by slow or unreliable communication between otherwise healthy components. This is where several of the classic fallacies of distributed computing show up, especially the assumption that the network is reliable and has zero latency.
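The loop described above (define steady state, inject a fault, check the hypothesis) can be sketched minimally. The thresholds and the fault-injection wrapper below are placeholders; a real experiment would read error rates and tail latency from production monitoring rather than compute them in-process.

```python
import random
import time

def steady_state_ok(error_rate: float, p99_latency_ms: float) -> bool:
    """Baseline hypothesis: healthy means low errors and bounded tail
    latency. Without this baseline there is nothing to measure against."""
    return error_rate < 0.01 and p99_latency_ms < 300

def call_with_chaos(call, latency_ms: float = 0, failure_rate: float = 0.0):
    """Wrap a dependency call, injecting artificial latency and faults,
    the way a chaos tool degrades a network link or dependency."""
    if random.random() < failure_rate:
        raise ConnectionError("injected network fault")
    time.sleep(latency_ms / 1000)
    return call()

def run_experiment(trials: int = 100) -> bool:
    """Does the service hold steady state when a dependency misbehaves?"""
    failures, latencies = 0, []
    for _ in range(trials):
        start = time.monotonic()
        try:
            call_with_chaos(lambda: "ok", latency_ms=5, failure_rate=0.1)
        except ConnectionError:
            failures += 1
        latencies.append((time.monotonic() - start) * 1000)
    p99 = sorted(latencies)[int(trials * 0.99) - 1]
    return steady_state_ok(failures / trials, p99)
```

Run as a scheduled job or a CI/CD stage, a failing `run_experiment` becomes the signal that a resilience assumption no longer holds.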


Why SMBs Need Strong Data Governance Practices

Good data governance for small businesses is about building trust, control and scalability into your data from day one. Governance should be built into the data foundation, not bolted on later. Small businesses move fast, and governance works best when it’s native to how data is managed. That means choosing platforms that apply security, access controls and compliance consistently across all data, without requiring manual oversight or specialized teams. Additionally, clear visibility and control over what data exists and who can access it is essential. Even at a smaller scale, businesses handle sensitive information ranging from customer and financial data to operational insights. ... Governance also future-proofs the business. Regulations are becoming more complex, customer expectations for data protection are rising, and AI systems must have high-quality, well-governed data to perform reliably. Small businesses that treat governance as a foundation are better positioned to adopt AI and safely expand into new use cases, markets and regulatory environments without needing to rearchitect later. At the same time, strong data governance improves day-to-day efficiency. When data is well governed, teams can spend more time acting on insights and less time questioning data quality, managing access manually or duplicating work. ... From a cybersecurity perspective, governance provides the controls and visibility needed to reduce attack surfaces and detect misuse.

Daily Tech Digest - October 27, 2025


Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford


AWS Outage Is Just the Latest Internet Glitch Banks Must Insulate Against

If clouds fail or succumb to cyberattacks, the damage can be enormous, measured only by the maliciousness and creativity of the hacker and the redundancy and resilience of the defenses that users have in place. ... As I describe in The Unhackable Internet, we are already way down the rabbit hole of cyber insecurity. It would take a massive coordinated global effort to secure the current internet. That is unlikely to happen. Therefore, the most realistic business strategy is to assume the inevitable: A glitch, human error or a successful breach or cloud failure will occur. That means systems must be in place to distribute patches, resume operations, reconstruct networks, and recover lost data. Redundancy is a necessary component to get back online, but how much redundancy is feasible or economically sustainable? And will those backstops actually work? ... Given these ever-increasing challenges and cyber incursions in the financial services business, I have argued for a fundamental change in regulation — one that will keep regulators on the cutting edge of digital and cybersecurity developments. To accomplish that, regulation should be a more collaborative experience that invests the financial industry in its own oversight and systemic security. This effort should include industry executives and their staffs. Their expertise in the oversight process would enrich the quality of regulation, particularly from the perspective of strengthening the cyber defenses of the industry.


The 10 biggest issues CISOs and cyber teams face today

“It’s not finger-pointing; we’re all learning,” Lee says. “Business is now expected to embrace and move quickly with AI. Boards and C-level executives are saying, ‘We have to lean into this more’ and then they turn to security teams to support AI. But security doesn’t fully understand the risk. No one has this down because it’s moving so fast.” As a result, many organizations skip security hardening in their rush to embrace AI. But CISOs are catching up. ... Moreover, Todd Moore, global vice president of data security at Thales, says CISOs are facing a torrent of AI-generated data — generally unstructured data such as chat logs — that needs to be secured. “In some aspects, AI is becoming the new insider threat in organizations,” he says. “The reason why I say it’s a new insider threat is because there’s a lot of information that’s being put in places you never expected. CISOs need to identify and find that data and be able to see if that data is critical and then be able to protect it.” ... “We’re now getting to the stage where no one is off-limits,” says Simon Backwell, head of information security at tech company Benifex and a member of ISACA’s Emerging Trends Working Group. “Attack groups are getting bolder, and they don’t care about the consequences. They want to cause mass destruction.”


The AI Inflection Point Isn’t in the Cloud, It’s at the Edge

Beyond the screen, there is a need for agentic applications that specifically reduce latency and improve throughput. “You need an agentic architecture with several things going on,” Shelby said about using models to analyze the packaging of pharmaceuticals, for instance. “You might need to analyze the defects. Then you might need an LLM with a RAG behind it to do manual lookup. That’s very complex. It might need a lot of data behind it. It might need to be very large. You might need 100 billion parameters.” The analysis, he noted, may require integration with a backend system to perform another task, necessitating collaboration among several agents. AI appliances are then necessary to manage multiagent workflows and larger models. ... The nature of LLMs, Shelby said, requires a person to tell you if the LLM’s output is correct, which in turn impacts how to judge the relevancy of LLMs in edge environments. It’s not like you can rely on an LLM to provide an answer to a prompt. Consider a camera in the Texas landscape, focusing on an oil pump, Shelby said. “The LLM is like, ‘Oh, there are some campers cooking some food,’ when really there’s a fire” at the oil pump. So, how do you make the process testable in a way that engineers expect, Shelby asked. It requires end-to-end guard rails. And that’s why random, cloud-based LLMs do not yet apply to industrial environments.


Scaling Identity Security in Cloud Environments

One significant challenge organizations face is the disconnect between security and research and development (R&D) teams. This gap can lead to vulnerabilities being overlooked during the development phase, resulting in potential security risks once new systems are operational in cloud environments. To bridge this gap, a collaborative approach involving both teams is essential. Creating a secure cloud environment necessitates an understanding of the specific needs and challenges faced by each department. ... The journey to achieving scalable identity security in cloud environments is ongoing and requires constant vigilance. By integrating NHI management into their cybersecurity strategies, organizations can reduce risks, increase efficiencies, and ensure compliance with regulatory requirements. As security continues to evolve, staying informed and adaptable remains key. To gain further insights into cybersecurity, you might want to read about some cybersecurity predictions for 2025 and how they may influence your strategies surrounding NHI management. The integration of effective NHI and secrets management into cloud security controls is not just recommended but necessary for safeguarding data. It’s an invaluable part of a broader cybersecurity strategy aimed at minimizing risk and ensuring seamless, secure operations across all sectors.


Owning the Fallout: Inside Blameless Culture

For an organization to truly own the fallout after an incident, there must be a cultural shift from blame to inquiry. A ‘blameless culture’ doesn’t mean it’s a free-for-all, with no accountability. Instead, it’s a circumstance where the first question after an incident isn’t “Who screwed up?” it’s “What failed — and why?” As Gustavo Razzetti describes, “blame is a sign of an unhealthy culture,” and the goal is to replace it with curiosity. In a blameless postmortem, you break down what happened, map the contributing systemic factors, and focus on where processes, tooling, or assumptions broke down. This mindset aligns with the concept of just culture, which balances accountability and systems thinking. After an incident, the focus is to ask how things went wrong, not whom to punish — unless egregious misconduct is involved. ... The most powerful learning happens in the moment when incident patterns redirect strategic priorities. For example, during post-mortems, a team could discover that under-monitored dependencies cause high-severity incidents. With a resilience mindset, that insight can become an objective: “Build automated dependency-health dashboards by Q2.” When feedback and insights flow into OKRs, teams internalize resilience as part of delivery, not an afterthought. Resilient teams move beyond damage control to institutional learning. 


Can your earbuds recognize you? Researchers are working on it

Each person’s ear canal produces a distinct acoustic signature, so the researchers behind EarID designed a method that allows earbuds to identify their wearer by using sound. The earbuds emit acoustic signals into the user’s ear canal, and the reflections from that sound reveal patterns shaped by the ear’s structure. What makes this study stand out is that the authentication process happens entirely on the earbuds themselves. The device extracts a unique binary key based on the user’s ear canal shape and then verifies that key on the paired mobile device. By working with binary keys instead of raw biometric data, the system avoids sending sensitive information over Bluetooth. This helps prevent interception or replay attacks that could expose biometric data. ... A key part of the research is showing that earbuds can handle biometric processing without large hardware or cloud support. EarID runs on a small microcontroller comparable to those found in commercial earbuds. The researchers measured performance on an Arduino platform with an 80 MHz chip and found that it could perform the key extraction in under a third of a second. For comparison, traditional machine learning classifiers took three to ninety times longer to train and process data. This difference could make a real impact if ear canal authentication ever reaches consumer devices, since users expect quick and seamless authentication.
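The article does not spell out EarID's matching algorithm, but verifying a noisy binary biometric key is commonly done by tolerating a small Hamming distance between the enrolled key and the freshly extracted one, since the acoustic measurement varies slightly between sessions. A sketch of that general idea; the bit-flip budget is an illustrative assumption, not a figure from the paper.

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length binary keys."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def verify_key(enrolled: bytes, presented: bytes, max_flips: int = 8) -> bool:
    """Accept if the presented key lies within a small Hamming radius
    of the enrolled key, absorbing sensor noise between sessions while
    rejecting a different ear canal's signature."""
    if len(enrolled) != len(presented):
        return False
    return hamming_distance(enrolled, presented) <= max_flips
```

Because only the derived key crosses the Bluetooth link, a comparison like this never exposes the raw acoustic measurements, which matches the privacy property the researchers emphasize.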


What It 'Techs' to Run Real-Time Payments at Scale

Beyond hosting applications, the architecture is designed for scale, reuse and rapid provisioning. APIs and services support multiple verticals including lending, insurance, investments and even quick commerce through a shared infrastructure-as-a-service model. "Every vertical uses the same underlying infra, and we constantly evaluate whether something can be commoditized for the group and then scaled centrally. It's easier to build and scale one accounting stack than reinvent it every time," Nigam said. Early investments in real-time compute systems and edge analytics enable rapid anomaly detection and insights, cutting operational downtime by 30% and improving response times to under 50 milliseconds. A recent McKinsey report on financial infrastructure in emerging economies underscores the importance of edge computation and near-real-time monitoring for high-volume payments networks - a model increasingly being adopted by global fintech leaders to ensure both speed and reliability. ... Handling spikes and unexpected surges is another critical consideration. India's payments ecosystem experiences predictable peaks - including festival seasons or IPL weekends - and unpredictable surges triggered by government announcements or regulatory deadlines. When a payments platform is built for population scale, any single merchant or use case does not create a surge at this level. 


Who’s right — the AI zoomers or doomers?

Earlier this week, the Emory Wheel editorial board published an opinion column claiming that without regulation, AI will soon outpace humanity’s ability to control it. The post said AI’s uncontrolled evolution threatens human autonomy, free expression, and democracy, stressing that the technical development is faster than what lawmakers can handle. ... Both zoomers and doomers agree that humanity’s fate will be decided when the industry releases AGI or superintelligent AI. But there’s strong disagreement on when that will happen. From OpenAI’s Sam Altman to Elon Musk, Eric Schmidt, Demis Hassabis, Dario Amodei, Masayoshi Son, Jensen Huang, Ray Kurzweil, Louis Rosenberg, Geoffrey Hinton, Mark Zuckerberg, Ajeya Cotra, and Jürgen Schmidhuber — all predict AGI arriving somewhere between later this year and the end of this decade. ... Some say we need strict global rules, maybe like those for nuclear weapons. Others say strong laws would slow progress, stop new ideas, and give the benefits of AI to China. ... AI is already causing harms. It contributes to privacy invasion, disinformation and deepfakes, surveillance overreach, job displacement, cybersecurity threats, child and psychological harms, environmental damage, erosion of human creativity and autonomy, economic and political instability, manipulation and loss of trust in media, unjust criminal justice outcomes, and other problems.


Powering Data in the Age of AI: Part 3 – Inside the AI Data Center Rebuild

You can’t design around AI the way data centers used to handle general compute. The loads are heavier, the heat is higher, and the pace is relentless. You start with racks that pull more power than entire server rooms did a decade ago, and everything around them has to adapt. New builds now work from the inside out. Engineers start with workload profiles, then shape airflow, cooling paths, cable runs, and even structural supports based on what those clusters will actually demand. In some cases, different types of jobs get their own electrical zones. That means separate cooling loops, shorter throw cabling, dedicated switchgear — multiple systems, all working under the same roof. Power delivery is changing, too. In a conversation with BigDATAwire, David Beach, Market Segment Manager at Anderson Power, explained, “Equipment is taking advantage of much higher voltages and simultaneously increasing current to achieve the rack densities that are necessary. This is also necessitating the development of components and infrastructure to properly carry that power.” ... We know that hardware alone doesn’t move the needle anymore. The real advantage comes from pushing it online quickly, without getting bogged down by power, permits, and other obstacles. That’s where the cracks are beginning to open.


Strategic Domain-Driven Design: The Forgotten Foundation of Great Software

The strategic aspect of DDD is often overlooked because many people do not recognize its importance. This is a significant mistake when applying DDD. Strategic design provides context for the model, establishes clear boundaries, and fosters a shared understanding between business and technology. Without this foundation, developers may focus on modeling data rather than behavior, create isolated microservices that do not represent the domain accurately, or implement design patterns without a clear purpose. ... The first step in strategic modeling is to define your domain, which refers to the scope of knowledge and activities that your software intends to address. Next, we apply the age-old strategy of "divide and conquer," a principle used by the Romans that remains relevant in modern software development. We break down the larger domain into smaller, focused areas known as subdomains. ... Once the language is aligned, the next step is to define bounded contexts. These are explicit boundaries that indicate where a particular model and language apply. Each bounded context encapsulates a subset of the ubiquitous language and establishes clear borders around meaning and responsibilities. Although the term is often used in discussions about microservices, it actually predates that movement. 
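Bounded contexts can be illustrated directly in code: the same business term carries a different model, and different behavior, inside each boundary. The attribute names and SLA tiers below are hypothetical examples, not from the article.

```python
from dataclasses import dataclass

# Sales bounded context: "Customer" means a buyer with credit standing.
@dataclass
class SalesCustomer:
    customer_id: str
    name: str
    credit_limit: float

    def can_place_order(self, order_total: float) -> bool:
        return order_total <= self.credit_limit

# Support bounded context: "Customer" means a ticket holder with an SLA.
@dataclass
class SupportCustomer:
    customer_id: str
    sla_tier: str

    def response_time_hours(self) -> int:
        return {"gold": 1, "silver": 4}.get(self.sla_tier, 24)
```

Neither class is "the" Customer; each model is valid only within its context, and translation between contexts happens explicitly at the boundary rather than through a shared, bloated entity.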

Daily Tech Digest - September 03, 2025


Quote for the day:

“The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things.” -- Ronald Reagan



Understanding Problems in the Data Supply Chain: A Q&A with R Systems’ AI Director Samiksha Mishra

Think of data as moving through a supply chain: it’s sourced, labeled, cleaned, transformed, and then fed into models. If bias enters early – through underrepresentation in data collection, skewed labeling, or feature engineering – it doesn’t just persist but multiplies as the data moves downstream. By the time the model is trained, bias is deeply entrenched, and fixes can only patch symptoms, not address the root cause. Just like supply chains for physical goods need quality checks at every stage, AI systems need fairness validation points throughout the pipeline to prevent bias from becoming systemic. ... The key issue is that a small representational bias can be significantly amplified across the AI data supply chain due to reusability and interdependencies. When a biased dataset is reused, its initial flaw is propagated to multiple models and contexts. This is further magnified during preprocessing, as methods like feature scaling and augmentation can encode a biased feature into multiple new variables, effectively multiplying its weight. ... One effective way to integrate validation layers and bias filters into AI systems without sacrificing speed is to design them as lightweight checkpoints throughout the pipeline rather than heavy post-hoc add-ons. At the data stage, simple distributional checks such as χ² tests or KL-divergence can flag demographic imbalances at low computational cost. 
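A lightweight distributional checkpoint of the kind described can run in a few lines, here using KL divergence over group proportions (a χ² test would serve the same role). The 0.05 threshold is an illustrative placeholder, not a recommended value.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(P || Q) between two discrete distributions over the same groups;
    eps guards against zero-probability groups."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def demographic_check(counts, reference, threshold=0.05):
    """Pipeline checkpoint: flag a dataset whose group proportions drift
    too far from a reference population. Returns (divergence, passed)."""
    total = sum(counts)
    observed = [c / total for c in counts]
    d = kl_divergence(observed, reference)
    return d, d <= threshold
```

Placed at the data-sourcing stage, a failing check stops a skewed batch before preprocessing can multiply the imbalance downstream, which is exactly the supply-chain amplification the interview warns about.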



Hackers Manipulate Claude AI Chatbot as Part of at Least 17 Cyber Attacks

While AI’s use in hacking has, to date, largely been a case of hype over actual threat, this new development is a concrete indicator that it is at minimum now substantially lowering the threshold for non-technical actors to execute viable cyber attacks. It is also clearly capable of speeding up and automating certain common aspects of attacks for the more polished professional hackers, increasing their output capability during windows in which they have the element of surprise and novelty. While the GTG-2002 activity is the most complex thus far, the threat report notes the Claude AI chatbot has also been successfully used for more individualized components of various cyber attacks. This includes use by suspected North Korean state-sponsored hackers as part of their remote IT worker scams, including not just crafting detailed personas but also taking employment tests and doing day-to-day work once hired. Another highly active party in the UK has been using Claude to develop individual ransomware tools with sophisticated capabilities and sell them on underground forums, at a price of $400 to $1,200 each. ... Anthropic says that it has responded to the cyber attacks by adding a tailored classifier specifically for the observed activity and a new detection method to ensure similar activity is captured by the standard security pipeline. 


Agentic AI: Storage and ‘the biggest tech refresh in IT history’

The interesting thing about agentic infrastructure is that agents can ultimately work across a number of different datasets, and even in different domains. You have kind of two types of agents – workers, and other agents, which are supervisors or supervisory agents. So, maybe I want to do something simple like develop a sales forecast for my product while reviewing all the customer conversations and the different databases or datasets that could inform my forecast. Well, that would take me to having agents that work on and process a number of different independent datasets that may not even be in my datacentre.  ... So, anything that requires analytics requires a data warehouse. Anything that requires an understanding of unstructured data not only requires a file system or an object storage system, but it also requires a vector database to help AI agents understand what’s in those file systems through a process called retrieval-augmented generation (RAG). The first thing that needs to be wrestled down is a reconciliation of this idea that there’s all sorts of different data sources, and all of them need to be modernised or ready for the AI computing that is about to hit these data sources. ... The first thing I would say is that there are best practices out in the market that should definitely be adhered to. 
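The retrieval step behind RAG can be reduced to a toy sketch: rank stored document chunks by cosine similarity to an embedded query. The vectors below are hand-made stand-ins for illustration; a real deployment would use an embedding model and a vector database rather than three-dimensional literals.

```python
# Toy sketch of RAG's retrieval step: rank documents by cosine similarity
# between their embeddings and a query embedding. Vectors are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

index = {
    "q3-sales-forecast.txt": [0.9, 0.1, 0.0],
    "customer-calls-summary.txt": [0.7, 0.6, 0.1],
    "hr-handbook.pdf": [0.0, 0.1, 0.9],
}

# e.g. "build a sales forecast from customer conversations"
query_vec = [0.8, 0.5, 0.0]
ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]),
                reverse=True)
print(ranked[0])
```

The top-ranked chunks would then be handed to the model as context, which is how a vector database helps agents "understand what's in those file systems."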


Tech leaders: Are you balancing AI transformation with employee needs?

On the surface, it might seem naïve for companies to talk about AI building people up and improving jobs when there’s so much negative news about its potential impact on employment. For example, Ford CEO Jim Farley recently predicted that AI will replace half of all white-collar workers in the US. Also, Fiverr CEO Micha Kaufman sent a memo to his team in which he said, “AI is coming for your job. Heck, it’s coming for my job, too. This is a wake-up call. It doesn’t matter if you’re a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a finance person. AI is coming for you.” Several tech companies like Google, Microsoft, Amazon, and Salesforce have also been talking about how much of their work is already being done by AI. Of course, tech executives could just be hyping the technology they sell. But not all AI-related layoffs may actually be due to AI. ... AI, especially agentic AI, is changing the nature of work, and how companies will need to be organized, says Mary Alice Vuicic, chief people officer at Thomson Reuters. “Many companies ripped up their AI plans as agentic AI came to the forefront,” she says, as it’s moved on from being an assistant to being a team that works together to accomplish delegated tasks. This has the potential for unprecedented productivity improvements, but also unprecedented opportunities for augmentation, expansion, and growth. 


When rivals come fishing: What keeps talent from taking the bait

Organisations can and do protect themselves with contracts—non-compete agreements, non-solicitation rules, confidentiality policies. They matter because they protect sensitive knowledge and prevent rivals from taking shortcuts. But they are not the same as retention. An employee with ambition, if disengaged, will eventually walk. ... If money were the sole reason employees left, the problem would be simpler. Counter-offers would solve it, at least temporarily. But every HR leader knows the story: a high performer accepts a lucrative counter-offer, only to resign again six months later. The issue lies elsewhere—career stagnation, lack of recognition, weak culture, or a disconnect with leadership. ... What works instead is open dialogue, competitive but fair rewards, and most importantly, visible career pathways. Employees, she stresses, need to feel that their organisation is invested in their long-term development, not just scrambling to keep them for another year. Tiwari also highlights something companies often neglect: succession planning. By identifying and nurturing future leaders early, organisations create continuity and reduce the shock when someone does leave. Alongside this, clear policies and awareness about confidentiality ensure that intellectual property remains protected even in times of churn. The recent frenzy of AI talent raids among global tech giants is an extreme example of this battle. 



Agentic AI: A CISO’s security nightmare in the making?

CISOs don’t like operating in the dark, and this is one of the risks agentic AI brings. It can be deployed autonomously by teams or even individual users through a variety of applications without proper oversight from security and IT departments. This creates “shadow AI agents” that can operate without controls such as authentication, which makes it difficult to track their actions and behavior. This in turn can pose significant security risks, because unseen agents can introduce vulnerabilities. ... Agentic AI introduces the ability to make independent decisions and act without human oversight. This capability presents its own cybersecurity risk by potentially leaving organizations vulnerable. “Agentic AI systems are goal-driven and capable of making decisions without direct human approval,” Joyce says. “When objectives are poorly scoped or ambiguous, agents may act in ways that are misaligned with enterprise security or ethical standards.” ... Agents often collaborate with other agents to complete tasks, resulting in complex chains of communication and decision-making, PwC’s Joyce says. “These interactions can propagate sensitive data in unintended ways, creating compliance and security risks,” he says. ... Many early stage agents rely on brittle or undocumented APIs or browser automation, Mayham says. “We’ve seen cases where agents leak tokens via poorly scoped integrations, or exfiltrate data through unexpected plugin chains. The more fragmented the vendor stack, the bigger the surface area for something like this to happen,” he says. 


How To Get The Best Out Of People Without Causing Burnout At Work

Comfort zones feel safe, but they also limit growth. Employees who stick with what they know may appear steady, but eventually they stagnate. Leaders who let people stay in their comfort zones for too long risk creating teams that lack adaptability. At the same time, pushing too aggressively can backfire. People who are stretched too far too quickly often feel stress and that drains motivation. This is when burnout at work begins. The real challenge is knowing how to respect comfort zones while creating enough stretch to build confidence. ... Gallup’s research shows that employees who use their strengths daily are six times more likely to be engaged. Tom Rath, co-author of StrengthsFinder, told me that leaning into natural talents is often the fastest path to confidence and performance gains. At the same time, he cautioned me against the idea that we should only focus on strengths. He said it is just as reckless to ignore weaknesses as it is to ignore strengths. His point was that leaders need balance. Too much time spent on weaknesses drains confidence, but avoiding them altogether prevents people from growing. ... It is not always easy to tell if resistance is fear or indifference. Fear usually comes with visible anxiety. The employee avoids the task but also worries about it. Laziness looks more like indifference with no visible discomfort. Leaders can uncover the difference by asking questions. If it is fear, support and small steps can help. If it is indifference, accountability and clear expectations may be the solution. 


IT Leadership Takes on AGI

“We think about AGI in terms of stepwise progress toward machines that can go beyond visual perception and question answering to goal-based decision-making,” says Brian Weiss, chief technology officer at hyperautomation and enterprise AI infrastructure provider Hyperscience, in an email interview. “The real shift comes when systems don’t just read, classify and summarize human-generated document content, but when we entrust them with the ultimate business decisions.” ... OpenAI’s newly released GPT-5 isn’t AGI, though it can purportedly deliver more useful responses across different domains. Tal Lev-Ami, CTO and co-founder of media optimization and visual experience platform provider Cloudinary, says “reliable” is the operative word when it comes to AGI. ... “We may see impressive demonstrations sooner, but building systems that people can depend on for critical decisions requires extensive testing, safety measures, and regulatory frameworks that don't exist yet,” says Bosquez in an email interview. ... Artificial narrow intelligence or ANI (what we’ve been using) still isn’t perfect. Data is often to blame, which is why there’s a huge push toward AI-ready data. Yet, despite the plethora of tools available to manage data and data quality, some enterprises are still struggling. Without AI-ready data, enterprises invite reliability issues with any form of AI. “Today’s systems can hallucinate or take rogue actions, and we’ve all seen the examples. 


How Causal Reasoning Addresses the Limitations of LLMs in Observability

A new class of AI-based observability solutions built on LLMs is gaining traction as they promise to simplify incident management, identify root causes, and automate remediation. These systems sift through high-volume telemetry, generate natural-language summaries based on their findings, and propose configuration or code-level changes. Additionally, with the advent of agentic AI, remediation workflows can be automated to advance the goal of self-healing environments. However, such tools remain fundamentally limited in their ability to perform root-cause analysis for modern applications. ... In observability contexts, LLMs can interpret complex logs and trace messages, summarize high-volume telemetry, translate natural-language queries into structured filters, and synthesize scripts or configuration changes to support remediation. Most LLM solutions rely on proprietary providers such as OpenAI and Anthropic, whose training data is opaque and often poorly aligned with specific codebases or deployment environments. More fundamentally, LLMs can only produce text.  ... Agentic AI shifts observability workflows from passive diagnostics to active response by predicting failure paths, initiating remediations, and executing tasks such as service restarts, configuration rollbacks, and state validation.


The Future of Work Is Human: Insights From Workday and Deloitte Leader

While AI can do many things, Chalwin acknowledges, "it can't replace, especially as a leader, that collaboration with your team, ethical decision making, creativity and strategic thinking.” But what it can do is free up time from more manual tasks, allowing people to focus on more impactful work. When asked about shifting focus from traditional training to creating opportunities for adaptation and innovation, Zucker emphasized the value of balancing empowering people with giving them time and access to new capabilities to develop new skills. She noted, "People need to feel comfortable with trying things.” This requires helping the workforce understand how to make decisions, be creative, and trust the integrity of the tools and data. ... “We’re all on a path of continuous learning.” She remembers a leadership development class where participants were encouraged to "try it, and try it again" with AI tools. This environment fosters understanding and challenges individuals to apply AI in their daily work, enabling the workforce to evolve and continually bolster skills. Chalwin points out that workforce dynamics are constantly changing, with a mix of human and machine collaboration altering each leader's role. Leaders must ensure that they have the right people focusing on the right things and leveraging the power of technology to do some, but not all, of the work.

Daily Tech Digest - June 30, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The first step in modernization: Ditching technical debt

At a high level, determining when it’s time to modernize is about quantifying cost, risk, and complexity. In dollar terms, it may seem as simple as comparing the expense of maintaining legacy systems versus investing in new architecture. But the true calculation includes hidden costs, like the developer hours lost to patching outdated systems, and the opportunity cost of not being able to adapt quickly to business needs. True modernization is not a lift-and-shift — it’s a full-stack transformation. That means breaking apart monolithic applications into scalable microservices, rewriting outdated application code into modern languages, and replacing rigid relational data models with flexible, cloud-native platforms that support real-time data access, global scalability, and developer agility. Many organizations have partnered with MongoDB to achieve this kind of transformation. ... But modernization projects are usually a balancing act, and replacing everything at once can be a gargantuan task. Choosing how to tackle the problem comes down to priorities, determining where pain points exist and where the biggest impacts to the business will be. The cost of doing nothing will outrank the cost of doing something.


Is Your CISO Ready to Flee?

“A well-funded CISO with an under-resourced security team won’t be effective. The focus should be on building organizational capability, not just boosting top salaries.” While Deepwatch CISO Chad Cragle believes any CISO just in the role for the money has “already lost sight of what really matters,” he agrees that “without the right team, tools, or board access, burnout is inevitable.” Real impact, he contends, “only happens when security is valued and you’re empowered to lead.” Perhaps that stands as evidence that SMBs that want to retain their talent or attract others should treat the CISO holistically. “True professional fulfillment and long-term happiness in the CISO role stems from the opportunities for leadership, personal and professional growth, and, most importantly, the success of the cybersecurity program itself,” says Black Duck CISO Bruce Jenkins. “When cyber leaders prioritize the development and execution of a comprehensive, efficient, and effective program that delivers demonstrable value to the business, appropriate compensation typically follows as a natural consequence.” A further concern around budget constraints is that all CISOs at this point (private and public sector) have been through zero-based budget reviews several times. If the CISO feels unsafe and unable to execute, they will be incentivized to find a safer seat with an org more prepared to invest in security programs.


AI is learning to lie, scheme, and threaten its creators

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception." The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." ... "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around." Researchers are exploring various approaches to address these challenges.


The network is indeed trying to become the computer

Think of the scale-up networks such as the NVLink ports and NVLink Switch fabrics that are part and parcel of a GPU-accelerated server node – or, these days, a rackscale system like the DGX NVL72 and its OEM and ODM clones. These memory sharing networks are vital for ever-embiggening AI training and inference workloads. As their parameter counts and token throughput requirements both rise, they need ever-larger memory domains to do their work. Throw in mixture-of-experts models and the need for larger, fatter and faster scale-up networks, as they are now called, is obvious even to an AI model with only 7 billion parameters. ... Then there is the scale-out network, which is used to link nodes in distributed systems to each other to share work in a less tightly coupled way than the scale-up network affords. This is the normal networking we are familiar with in distributed HPC systems, which is normally Ethernet or InfiniBand and sometimes proprietary networks like those from Cray, SGI, Fujitsu, NEC, and others from days gone by. On top of this, we have the normal north-south networking stack that allows people to connect to systems and the east-west networks that allow distributed corporate systems running databases, web infrastructure, and other front-office systems to communicate with each other. 


What Can We Learn From History’s Most Bizarre Software Bugs?

“It’s never just one thing that causes failure in complex systems.” In risk management, this is known as the Swiss cheese model, where flaws that occur in one layer aren’t as dangerous as deeper flaws overlapping through multiple layers. And as the Boeing crash proves, “When all of them align, that’s what made it so deadly.” It is difficult to test for every scenario. After all, the more inputs you have, the more possible outputs — and “this is all assuming that your system is deterministic.” Today’s codebases are massive, with many different contributors and entire stacks of infrastructure. “From writing a piece of code locally to running it on a production server, there are a thousand things that could go wrong.” ... It was obviously a communication failure, “because NASA’s navigation team assumed everything was in metric.” But you also need to check the communication that’s happening between the two systems. “If two systems interact, make sure they agree on formats, units, and overall assumptions!” But there’s another even more important lesson to be learned. “The data had shown inconsistencies weeks before the failure,” Bajić says. “NASA had seen small navigation errors, but they weren’t fully investigated.”
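The units lesson from the Mars Climate Orbiter failure can be made concrete: tag values with their unit and normalize once, at the system boundary, rather than assuming both sides agree. A minimal sketch (the reading format is invented; the conversion factor for pound-force seconds to newton-seconds is 4.44822):

```python
# Sketch: make units explicit at system boundaries instead of assuming them.
# Values carry their unit and are normalized once, at the interface.
NEWTON_SECONDS_PER_POUND_SECOND = 4.44822

def to_si(value, unit):
    """Normalize impulse readings to newton-seconds at the boundary."""
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * NEWTON_SECONDS_PER_POUND_SECOND
    raise ValueError(f"unknown unit: {unit}")  # fail loudly, don't guess

# One side emits pound-seconds; the other expects SI. The explicit tag
# makes the mismatch a conversion, not a silent 4.45x navigation error.
reading = {"impulse": 10.0, "unit": "lbf*s"}
si_value = to_si(reading["impulse"], reading["unit"])
print(round(si_value, 2))
```

Note the `ValueError` on an unrecognized unit: the second lesson of the excerpt, that small inconsistencies must be investigated rather than ignored, argues for failing loudly the moment the two systems stop agreeing on assumptions.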


Europe’s AI strategy: Smart caution or missed opportunity?

Companies in Europe are spending less on AI, cloud platforms, and data infrastructure. In high-tech sectors, productivity growth in the U.S. has far outpaced Europe. The report argues that AI could help close the gap, but only if it is used to redesign how businesses operate. Using AI to automate old processes is not enough. ... Feinberg also notes that many European companies assumed AI apps would be easier to build than traditional software, only to discover they are just as complex, if not more so. This mismatch between expectations and reality has slowed down internal projects. And the problem isn’t unique to Europe. As Oliver Rochford, CEO of Aunoo AI, points out, “AI project failure rates are generally high across the board.” He cites surveys from IBM, Gartner, and others showing that anywhere from 30 to 84 percent of AI projects fail or fall short of expectations. “The most common root causes for AI project failures are also not purely technical, but organizational, misaligned objectives, poor data governance, lack of workforce engagement, and underdeveloped change management processes. Apparently Europe has no monopoly on those.”


A Developer’s Guide to Building Scalable AI: Workflows vs Agents

Sometimes, using an agent is like replacing a microwave with a sous chef — more flexible, but also more expensive, harder to manage, and occasionally makes decisions you didn’t ask for. ... Workflows are orchestrated. You write the logic: maybe retrieve context with a vector store, call a toolchain, then use the LLM to summarize the results. Each step is explicit. It’s like a recipe. If it breaks, you know exactly where it happened — and probably how to fix it. This is what most “RAG pipelines” or prompt chains are. Controlled. Testable. Cost-predictable. The beauty? You can debug them the same way you debug any other software. Stack traces, logs, fallback logic. If the vector search fails, you catch it. If the model response is weird, you reroute it. ... Agents, on the other hand, are built around loops. The LLM gets a goal and starts reasoning about how to achieve it. It picks tools, takes actions, evaluates outcomes, and decides what to do next — all inside a recursive decision-making loop. ... You can’t just set a breakpoint and inspect the stack. The “stack” is inside the model’s context window, and the “variables” are fuzzy thoughts shaped by your prompts. When something goes wrong — and it will — you don’t get a nice red error message. 
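The workflow/agent contrast above can be compressed into a sketch. Everything here is a stand-in: `call_llm` fakes a model API, and the agent's "reasoning" is hard-coded where a real agent would let the model choose the next action. The point is structural, namely that the workflow's steps are explicit and the agent's steps live inside a loop.

```python
# Contrast sketch: an explicit, debuggable workflow vs. a model-driven
# agent loop. call_llm is a stand-in for a real model API call.
def call_llm(prompt):
    return f"summary of: {prompt[:30]}"

# --- Workflow: each step is explicit, testable, and easy to trace ---
def workflow(question, retrieve):
    context = retrieve(question)        # step 1: deterministic retrieval
    if not context:                     # step 2: explicit fallback logic
        return "no context found"
    return call_llm(f"{question}\n{context}")  # step 3: one model call

# --- Agent: a loop in which the model picks the next action ---
def agent(goal, tools, max_steps=5):
    history = []
    for _ in range(max_steps):          # bound the loop: agents can wander
        # Stand-in for model reasoning; a real agent asks the LLM to choose.
        action = "search" if not history else "finish"
        if action == "finish":
            return call_llm(f"{goal}\n{history}")
        history.append(tools[action](goal))
    return "gave up"

result = workflow("why is latency up?", retrieve=lambda q: "metrics dump")
print(result)
```

If the workflow's retrieval fails you catch it at step 2 with a stack trace; if the agent misbehaves, the "stack" is whatever accumulated in `history` and the model's context, which is exactly the debugging asymmetry the excerpt describes.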


Leveraging Credentials As Unique Identifiers: A Pragmatic Approach To NHI Inventories

Most teams struggle with defining NHIs. The canonical definition is simply "anything that is not a human," which is necessarily a wide set of concerns. NHIs manifest differently across cloud providers, container orchestrators, legacy systems, and edge deployments. A Kubernetes service account tied to a pod has distinct characteristics compared to an Azure managed identity or a Windows service account. Every team has historically managed these as separate concerns. This patchwork approach makes it nearly impossible to create a consistent policy, let alone automate governance across environments. ... Most commonly, this takes the form of secrets, which look like API keys, certificates, or tokens. These are all inherently unique and can act as cryptographic fingerprints across distributed systems. When used in this way, secrets used for authentication become traceable artifacts tied directly to the systems that generated them. This allows for a level of attribution and auditing that's difficult to achieve with traditional service accounts. For example, a short-lived token can be directly linked to a specific CI job, Git commit, or workload, allowing teams to answer not just what is acting, but why, where, and on whose behalf.
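The traceability idea at the end of the excerpt can be sketched as code. This is a hypothetical illustration: HMAC signing with a hard-coded key stands in for a real token service, and the claim names are invented. The point is that the credential itself carries who is acting, why, and for how long.

```python
# Hypothetical sketch: mint a short-lived workload credential whose claims
# bind it to a specific CI job and commit, making the secret a traceable
# identifier. HMAC signing stands in for a real token service.
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-not-for-production"

def mint_token(ci_job, git_commit, ttl_seconds=300):
    claims = {
        "sub": ci_job,        # what is acting
        "commit": git_commit, # why: the change that triggered it
        "exp": int(time.time()) + ttl_seconds,  # short-lived by design
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def attribute(token):
    """Audit path: recover which job/commit a credential belongs to."""
    body, _sig = token.rsplit(".", 1)
    return json.loads(base64.urlsafe_b64decode(body))

token = mint_token("deploy-pipeline-1234", "a1b2c3d")
print(attribute(token)["sub"])
```

Because every token is unique and self-describing, an audit log keyed on credentials can answer "what acted, why, and on whose behalf" without first reconciling Kubernetes service accounts, Azure managed identities, and Windows service accounts into one taxonomy.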


How Is AI Really Impacting Jobs In 2025?

Pessimists warn of potential mass unemployment leading to societal collapse. Optimists predict a new age of augmented working, making us more productive and freeing us to focus on creativity and human interactions. There are plenty of big-picture forecasts. One widely-cited WEF prediction claims AI will eliminate 92 million jobs while creating 170 million new, different opportunities. That doesn’t sound too bad. But what if you’ve worked for 30 years in one of the jobs that’s about to vanish and have no idea how to do any of the new ones? Today, we’re seeing headlines about jobs being lost to AI with increasing frequency. And, from my point of view, not much information about what’s being done to prepare society for this potentially colossal change. ... An exacerbating factor is that many of the roles that are threatened are entry-level, such as junior coders or designers, or low-skill, including call center workers and data entry clerks. This means there’s a danger that AI-driven redundancy will disproportionately hit economically disadvantaged groups. There’s little evidence so far that governments are prioritizing their response. There have been few clearly articulated strategies to manage the displacement of jobs or to protect vulnerable workers.


AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of similar applications and products, tailored and designed for specific sectors, are growing fast. ... If the search for AGI is reminiscent of a cash-rich unicorn aiming for growth at all costs, then AAI is more scrappy. Like a bootstrapped startup that requires immediate profitability, it prizes tangible impact over long-term ambitions to take over the world. The aspirations—and perhaps the algorithms themselves—may be more modest. Still, the context makes them potentially transformative: if reliable and widely adopted, such systems could reach millions of users who have until now been on the margins of the digital economy. ... All this points to a potentially unexpected scenario, one in which the lessons of AI flow not along the usual contours of global geopolitics and economic power—but percolate rather upward, from the laboratories and pilot programs of the Global South toward the boardrooms and research campuses of the North. This doesn’t mean that the quest for AGI is necessarily misguided. It’s possible that AI may yet end up redefining intelligence.

Daily Tech Digest - March 17, 2025


Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones


Inching towards AGI: How reasoning and deep research are expanding AI from statistical prediction to structured problem-solving

There are various scenarios that could emerge from the near-term arrival of powerful AI. It is challenging and frightening that we do not really know how this will go. New York Times columnist Ezra Klein addressed this in a recent podcast: “We are rushing toward AGI without really understanding what that is or what that means.” For example, he claims there is little critical thinking or contingency planning going on around the implications and, for example, what this would truly mean for employment. Of course, there is another perspective on this uncertain future and lack of planning, as exemplified by Gary Marcus, who believes deep learning generally (and LLMs specifically) will not lead to AGI. Marcus issued what amounts to a takedown of Klein’s position, citing notable shortcomings in current AI technology and suggesting it is just as likely that we are a long way from AGI. ... While each of these scenarios appears plausible, it is discomforting that we really do not know which are the most likely, especially since the timeline could be short. We can see early signs of each: AI-driven automation increasing productivity, misinformation that spreads at scale, eroding trust and concerns over disingenuous models that resist their guardrails. Each scenario would cause its own adaptations for individuals, businesses, governments and society.


AI in Network Observability: The Dawn of Network Intelligence

ML algorithms, trained on vast datasets of enriched, context-savvy network telemetry, can now detect anomalies in real-time, predict potential outages, foresee cost overruns, and even identify subtle performance degradations that would otherwise go unnoticed. Imagine an AI that can predict a spike in malicious traffic based on historical patterns and automatically trigger mitigations to block the attack and prevent disruption. That’s a straightforward example of the power of AI-driven observability, and it’s already possible today. But AI’s role isn’t limited to number crunching. GenAI is revolutionizing how we interact with network data. Natural language interfaces allow engineers to ask questions like: “What’s causing latency on the East Coast?” and receive concise, insightful answers. ... These aren’t your typical AI algorithms. Agentic AI systems possess a degree of autonomy, allowing them to make decisions and take actions within a defined framework. Think of them as digital network engineers, initially assisting with basic tasks but constantly learning and evolving, making them capable of handling routine assignments, troubleshooting fundamental issues, or optimizing network configurations.
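The anomaly-detection idea in the excerpt can be reduced to a minimal sketch: flag telemetry samples that deviate sharply from the batch baseline using a z-score. Real observability platforms use trained ML models over enriched telemetry; the numbers and threshold here are invented for illustration.

```python
# Minimal anomaly flagging on network telemetry: z-score against the
# batch mean. Threshold and sample values are illustrative only.
import statistics

def find_anomalies(samples, threshold=2.5):
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # flat series: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# Requests/sec with one malicious traffic spike at index 7
traffic = [100, 102, 98, 101, 99, 103, 100, 950, 97, 101]
print(find_anomalies(traffic))
```

In a pipeline like the one described, a flagged index would trigger the next stage, for example a mitigation rule or an alert routed to the natural-language interface, rather than just a print.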


Edge Computing and the Burgeoning IoT Security Threat

A majority of IoT devices come with wide-open default security settings. The IoT industry has been lax in setting and agreeing to device security standards. Additionally, many IoT vendors are small shops that are more interested in rushing their devices to market than in security standards. Another reason for the minimal security settings on IoT devices is that IoT device makers expect corporate IT teams to implement their own device settings. This occurs when IT professionals -- normally part of the networking staff -- manually configure each IoT device with security settings that conform with their enterprise security guidelines. ... Most IoT devices are not enterprise-grade. They might come with weak or outdated internal components that are vulnerable to security breaches or contain sub-components with malicious code. Because IoT devices are built to operate over various communication protocols, there is also an ever-present risk that they aren't upgraded for the latest protocol security. Given the large number of IoT devices from so many different sources, it's difficult to execute a security upgrade across all platforms. ... Part of the senior management education process should be gaining support from management for a centralized RFP process for any new IT, including edge computing and IoT. 


Data Quality Metrics Best Practices

While accuracy, consistency, and timeliness are key data quality metrics, the acceptable thresholds for these metrics to achieve passable data quality can vary from one organization to another, depending on their specific needs and use cases. There are a few other quality metrics, including integrity, relevance, validity, and usability. Depending on the data landscape and use cases, data teams can select the most appropriate quality dimensions to measure. ... Data quality metrics and data quality dimensions are closely related, but aren’t the same. The purpose, usage, and scope of both concepts vary too. Data quality dimensions are attributes or characteristics that define data quality. On the other hand, data quality metrics are values, percentages, or quantitative measurements of how well the data meets the above characteristics. A good analogy to explain the differences between data quality metrics and dimensions would be the following: Consider data quality dimensions as talking about a product’s attributes – it’s durable, long-lasting, or has a simple design. Then, data quality metrics would be how much it weighs, how long it lasts, and the like. ... Every solution starts with a problem. Identify the pressing concerns – missing records, data inconsistencies, format errors, or old records. What is it that you are trying to solve? 
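The dimension-versus-metric distinction above maps cleanly onto code: a dimension such as completeness or validity becomes a metric once it is computed as a percentage over real records. The field names and validity rule below are invented for the example.

```python
# Illustrative sketch: turning quality *dimensions* (completeness,
# validity) into quality *metrics* (percentages) over a batch of records.
import re

records = [
    {"email": "a@example.com", "age": 34},
    {"email": "not-an-email",  "age": 29},
    {"email": None,            "age": 41},
]

def completeness(rows, field):
    """Dimension: is the field populated? Metric: percent filled."""
    filled = sum(1 for r in rows if r.get(field) not in (None, ""))
    return 100 * filled / len(rows)

def validity(rows, field, rule):
    """Dimension: does the value conform? Metric: percent passing."""
    present = [r[field] for r in rows if r.get(field)]
    if not present:
        return 0.0
    return 100 * sum(1 for v in present if rule(v)) / len(present)

def is_email(v):
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None

print(f"completeness: {completeness(records, 'email'):.0f}%")
print(f"validity:     {validity(records, 'email', is_email):.0f}%")
```

Whether 67% completeness is acceptable is exactly the threshold question the excerpt raises: the metric is the same everywhere, but the passing bar varies by organization and use case.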


How to Modernize Legacy Systems with Microservices Architectures

Scalability and agility are two significant benefits of a microservices architecture. With monolithic applications, it's difficult to isolate and scale distinct application functions under variable loads. Even if a monolithic application is scaled to meet increased demand, it could take months and significant capital to reach the end goal. By then, the demand might have changed — or disappeared altogether — and the application will waste resources, bogging down the larger system. ... microservices architectures make applications more resilient. Because monolithic applications function on a single codebase, a single error during an update or maintenance can create large-scale problems. Microservices-based applications, however, work around this issue. Because each function runs on its own codebase, it's easier to isolate and fix problems without disrupting the rest of the application's services. ... Microservices might seem like a one-size-fits-all, no-downsides approach to modernizing legacy systems, but the first step to any major system migration is to understand the pros and cons. No major project comes without challenges, and migrating to microservices is no different. For instance, personnel might be resistant to changes associated with microservices.
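The resilience argument can be sketched in a few lines. In the toy example below (service and function names are invented), a failing dependency behind a service boundary degrades gracefully instead of taking the whole application down, as a shared exception in a monolith might:

```python
# Minimal sketch of fault isolation across a service boundary.
# In a monolith, an exception in recommendations could break checkout;
# behind a boundary, the caller catches the failure and degrades.

def fetch_recommendations(user_id: str) -> list[str]:
    # Stand-in for an HTTP call to a separate recommendations service
    # that is currently down.
    raise TimeoutError("recommendations service unavailable")

def checkout_page(user_id: str) -> dict:
    page = {"user": user_id, "cart": ["sku-123"], "recommendations": []}
    try:
        page["recommendations"] = fetch_recommendations(user_id)
    except Exception:
        # The failing dependency is isolated; checkout still works.
        pass
    return page

print(checkout_page("u-42"))
```

The same boundary is what allows each function to be scaled independently under variable load.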


Elevating Employee Experience: Transforming Recognition with AI

AI’s ability to analyse patterns in behaviour, performance, and preferences enables organisations to offer personalised recognition that resonates with employees. AI-driven platforms provide real-time insights to leaders, ensuring that appreciation is timely, equitable, and free from unconscious biases. ... Burnout remains a critical challenge in today’s workplace, especially as workloads intensify and hybrid models blur work-life boundaries. With 84% of recognised employees being less likely to experience burnout, AI-driven recognition programs offer a proactive approach to employee well-being. Candy pointed out that AI can monitor engagement levels, detect early signs of burnout, and prompt managers to step in with meaningful appreciation. By tracking sentiment analysis, workload patterns, and feedback trends, AI helps HR teams intervene before burnout escalates. “Recognition isn’t just about celebrating big milestones; it’s about appreciating daily efforts that often go unnoticed. AI helps ensure no contribution is left behind, reinforcing a culture of continuous encouragement and support,” remarked Candy Fernandez. Arti Dua expanded on this, explaining that AI can help create customised recognition strategies that align with employees’ stress levels and work patterns, ensuring appreciation is both timely and impactful.


11 surefire ways to fail with AI

“The fastest way to doom an AI initiative? Treat it as a tech project instead of a business transformation,” Pallath says. “AI doesn’t function in isolation — it thrives on human insight, trust, and collaboration.” The assumption that just providing tools will automatically draw users is a costly myth, Pallath says. “It has led to countless failed implementations where AI solutions sit unused, misaligned with actual workflows, or met with skepticism,” he says. ... Without a workforce that embraces AI, “achieving real business impact is challenging,” says Sreekanth Menon, global leader of AI/ML at professional services and solutions firm Genpact. “This necessitates leadership prioritizing a digital-first culture and actively supporting employees through the transition.” To ease employee concerns about AI, leaders should offer comprehensive AI training across departments, Menon says. ... AI isn’t a one-time deployment. “It’s a living system that demands constant monitoring, adaptation, and optimization,” Searce’s Pallath says. “Yet, many organizations treat AI as a plug-and-play tool, only to watch it become obsolete. Without dedicated teams to maintain and refine models, AI quickly loses relevance, accuracy, and business impact.” Market shifts, evolving customer behaviors, and regulatory changes can turn a once-powerful AI tool into a liability, Pallath says.


Now Is the Time to Transform DevOps Security

Traditionally, security was often treated as an afterthought in the software development process, typically placed at the end of the development cycle. This approach worked when development timelines were longer, allowing enough time to tackle security issues. As development speeds have increased, however, this final security phase has become less feasible. Vulnerabilities that arise late in the process now require urgent attention, often resulting in costly and time-intensive fixes. Overlooking security in DevOps can lead to data breaches, reputational damage, and financial loss. Delays increase the likelihood of vulnerabilities being exploited. As a result, companies are rethinking how security should be embedded into their development processes. ... Significant challenges are associated with implementing robust security practices within DevOps workflows. Development teams often resist security automation because they worry it will slow delivery timelines. Meanwhile, security teams get frustrated when developers bypass essential checks in the name of speed. Overcoming these challenges requires more than just new tools and processes. It's critical for organizations to foster genuine collaboration between development and security teams by creating shared goals and metrics. 
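One common form of this embedding is a security gate that runs on every commit rather than once at the end of the cycle. The sketch below (the patterns are simplistic stand-ins, not a real secret scanner) shows the shape of such a check over a diff:

```python
# Hedged sketch of a "shift-left" security gate: a check that runs in
# CI on every change instead of as a final review phase. The regexes
# are toy stand-ins for a real secret-scanning tool.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key id shape
    re.compile(r'password\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
]

def scan_diff(diff_lines: list[str]) -> list[str]:
    """Return offending lines; an empty list means the gate passes."""
    return [line for line in diff_lines
            if any(p.search(line) for p in SECRET_PATTERNS)]

diff = [
    '+ db_password = "hunter2"',
    "+ retries = 3",
]
hits = scan_diff(diff)
print("gate:", "FAIL" if hits else "PASS", hits)
```

Catching the hardcoded credential at commit time is exactly the cheap, early fix that a late-cycle security phase turns into an urgent, expensive one.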


AI development pipeline attacks expand CISOs’ software supply chain risk

Malicious software supply chain campaigns are targeting development infrastructure and code used by developers of AI and large language model (LLM) machine learning applications, the study also found. ... Modern software supply chains rely heavily on open-source, third-party, and AI-generated code, introducing risks beyond the control of software development teams. Better controls over the software the industry builds and deploys are required, according to ReversingLabs. “Traditional AppSec tools miss threats like malware injection, dependency tampering, and cryptographic flaws,” said ReversingLabs’ chief trust officer Saša Zdjelar. “True security requires deep software analysis, automated risk assessment, and continuous verification across the entire development lifecycle.” ... “Staying on top of vulnerable and malicious third-party code requires a comprehensive toolchain, including software composition analysis (SCA) to identify known vulnerabilities in third-party software components, container scanning to identify vulnerabilities in third-party packages within containers, and malicious package threat intelligence that flags compromised components,” Meyer said.
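In miniature, software composition analysis means comparing what an application actually pins against advisory feeds. The sketch below is a toy illustration only: the package names, versions, advisory sets, and lockfile are all fabricated, and real SCA tools work from curated vulnerability databases rather than hardcoded sets.

```python
# Toy illustration of software composition analysis (SCA): check the
# dependencies an app pins against feeds of known-bad packages.
# All package names, versions, and advisories here are fabricated.

KNOWN_MALICIOUS = {("evil-utils", "1.0.1")}
KNOWN_VULNERABLE = {("requests", "2.5.0"): "example advisory id"}

lockfile = {"requests": "2.5.0", "evil-utils": "1.0.1", "numpy": "1.26.4"}

def audit(deps: dict) -> list[str]:
    findings = []
    for name, version in deps.items():
        if (name, version) in KNOWN_MALICIOUS:
            findings.append(f"{name}=={version}: known malicious package")
        elif (name, version) in KNOWN_VULNERABLE:
            findings.append(
                f"{name}=={version}: {KNOWN_VULNERABLE[(name, version)]}")
    return findings

for finding in audit(lockfile):
    print(finding)
```

Running such a check continuously, as Zdjelar argues, is what distinguishes supply-chain verification from a one-time scan.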


Data Governance as an Enabler — How BNY Builds Relationships and Upholds Trust in the AI Era

Governance is like bureaucracy. A lot of us grew up seeing it as something we don’t naturally gravitate toward. It’s not something we want more of. But we take a different view: governance is enabling. I’m responsible for data governance at Bank of New York. We operate in a hundred jurisdictions, with regulators and customers around the world. Our most vital asset is the trust we build with the world around us, and governance is what ensures we uphold that trust. Relationships are our top priority. What does that mean in practice? It means understanding what data can be used for, whose data it is, where it should reside, and when it needs to be obfuscated. It means ensuring data security. What happens to data at rest? What about data in motion? How are entitlements managed? It’s about defining a single source of truth, maintaining data quality, and managing data incidents. All of that is governance. ... Our approach follows a hub-and-spoke model. We have a strong central team managing enterprise assets, but we've also appointed divisional data officers in each line of business to oversee local data sets that drive their specific operations. These divisional data officers report to the enterprise data office. However, they also have the autonomy to support their business units in a decentralized manner.