Showing posts with label audit. Show all posts

Daily Tech Digest - February 24, 2026


Quote for the day:

"Transparent reviews create fairness. Subjective reviews create frustration." -- Gordon Tredgold



AI agents and bad productivity metrics

The great promise of generative artificial intelligence was that it would finally clear our backlogs. Coding agents would churn out boilerplate at superhuman speeds, and teams would finally ship exactly what the business wants. The reality, as we settle into 2026, is far more uncomfortable. Artificial intelligence is not going to save developer productivity because writing code was never the bottleneck in software engineering. ... For decades, one of the most common debugging techniques was entirely social. A production alert goes off. You look at the version control history, find the person who wrote the code, ask them what they were trying to accomplish, and reconstruct the architectural intent. But what happens to that workflow when no one actually wrote the code? What happens when a human merely skimmed a 3,000-line agent-generated pull request, hit merge, and moved on to the next ticket? When an incident happens, where is the deep knowledge that used to live inside the author? ... The metrics that matter are still the boring ones because they measure actual business outcomes. The DORA metrics remain the best sanity check we have because they tie delivery speed directly to system stability. They measure deployment frequency, lead time for changes, change failure rate, and time to restore service. None of those metrics cares about the number of commits your agents produced today. They only care about whether your system can absorb change without breaking.
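As a rough illustration of the DORA measures named above, here is a minimal sketch of computing change failure rate and deployment frequency from a deploy log. The record layout and field names are hypothetical, not from any particular tool:

```python
from datetime import date

# Hypothetical deploy log: (deploy date, whether it caused a production incident)
deploys = [
    (date(2026, 2, 2), False),
    (date(2026, 2, 9), True),
    (date(2026, 2, 16), False),
    (date(2026, 2, 23), False),
]

def change_failure_rate(deploys):
    """Share of deployments that led to a production incident."""
    failures = sum(1 for _, failed in deploys if failed)
    return failures / len(deploys)

def deployment_frequency(deploys, window_days=28):
    """Average deployments per week over the observation window."""
    return len(deploys) / (window_days / 7)

print(change_failure_rate(deploys))   # 0.25
print(deployment_frequency(deploys))  # 1.0
```

Note that neither function looks at commit counts: a thousand agent-generated commits that never deploy, or that deploy and break, do not improve either number.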


How vertical SaaS is redefining enterprise efficiency

For the past decade, horizontal SaaS has been the defining force in enterprise technology. Platforms like CRMs, ERP suites and collaboration tools promised universality, offering a single platform to manage every business function across all industries. The strategy made sense: a large total addressable market, reusable architecture and marketing scale. Vertical SaaS flips that model. It is narrow by design but deep in impact. A report by Strategy& found that B2B vertical software companies are now growing faster than their horizontal peers, thanks to higher retention rates, lower churn rates and better unit economics. When software mirrors how a business already works, people stop treating it like a tool they tolerate and start relying on it like infrastructure. ... In regulated industries, compliance isn’t a feature; it’s the baseline for trust. I learned early that trying to retrofit audit trails or data retention policies after go-live only creates technical debt. Instead, design for compliance as a first-class product layer: immutable logs, permission hierarchies and exportable compliance reports built into the system. ... Vertical products don’t thrive in isolation. Integration with industry hardware, marketplaces and regulatory systems drives adoption. In one case, we partnered with a hardware vendor to automatically sync manifest data from their devices, cutting onboarding time in half and unlocking co-marketing opportunities.
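The "immutable logs" the author mentions can be sketched with a simple hash chain, where each entry commits to the one before it, so any retroactive edit is detectable. This is an illustrative toy, not a production audit subsystem:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry hashes the previous entry's hash,
    so any retroactive edit breaks the chain on verification."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": digest})

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if expected != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = AuditLog()
log.append({"actor": "ops", "action": "export_report"})
log.append({"actor": "admin", "action": "change_permission"})
print(log.verify())  # True
log.entries[0]["event"]["actor"] = "intruder"  # tampering with history
print(log.verify())  # False
```

Designing this in from day one is exactly the "first-class product layer" point: retrofitting the chain after go-live cannot protect entries that were already mutable.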


API Security Standards: 10 Essentials to Get You Started

Most API security flaws are created during the design phase. You're too late if you're waiting until deployment to think about threats. Shift-left principles mean integrating security early, especially at the design phase, where flawed assumptions become future exploits. Start by mapping out each endpoint's purpose, what data it touches, and who should access it. Identify where trust is assumed (not earned), roles blur, and inputs aren't validated. ... Every API has a breaking point. If you don't define it, attackers will. Rate limiting and throttling prevent denial-of-service (DoS) attacks, and they're also your first defense against scraping, brute-forcing, enumeration, and even accidental misuse by poorly built integrations. APIs, by nature, invite automation. Without guardrails, that openness turns into a floodgate. And in some cases, unchecked abuse opens the door to far worse issues, like remote code execution, where improperly scoped input or lack of throttling leads directly to exploitation. ... APIs are built to accept input. Attackers find ways to exploit it. The core rule is this: if you didn't expect it, don't process it. If you didn't define it, don't send it. Define request and response schemas explicitly using tools like OpenAPI or JSON Schema, as recommended by leading API security standards. Then enforce them — at the gateway, app layer, or both. Don't just use validation as linting; treat it as a runtime contract. If the payload doesn't match the spec, reject it.
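The "if you didn't expect it, don't process it" rule can be made concrete with a strict validator that rejects any field not in the schema, rather than silently ignoring it. This hand-rolled sketch stands in for what OpenAPI or JSON Schema tooling would do at the gateway; the schema shape here is invented for illustration:

```python
def validate(payload: dict, schema: dict) -> list:
    """Return a list of violations; an empty list means the payload matches.
    schema maps field name -> (expected type, required?). Unknown fields
    are rejected outright: if you didn't expect it, don't process it."""
    errors = []
    for field in payload:
        if field not in schema:
            errors.append(f"unexpected field: {field}")
    for field, (ftype, required) in schema.items():
        if field not in payload:
            if required:
                errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"wrong type for {field}")
    return errors

schema = {"username": (str, True), "limit": (int, False)}
print(validate({"username": "ada", "limit": 10}, schema))       # []
print(validate({"username": "ada", "is_admin": True}, schema))  # ['unexpected field: is_admin']
```

Treating the empty-list result as the only pass condition is what turns validation from linting into a runtime contract: anything else gets a rejection, not a warning.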


Why AI Urgency Is Forcing a Data Governance Reset

The cost of weak governance shows up in familiar ways: teams can’t find data, requirements arrive late in the process, and launches stall when compliance realities collide with product timelines. Without governance, McQuillan argues, organizations “ultimately suffer from higher cost basis,” with downstream consequences that “impact the bottom line.” ... McQuillan sees a clear step-change in executive urgency since generative AI (GenAI) became mainstream. “There’s been a rapid adoption, particularly since the advent of GenAI and the type of generative and agentic technologies that a lot of C-suites are taking on,” he says. But he also describes a common leadership gap: many executives feel pressure to become “AI-enabled” without a clear definition of what that means or how to build it sustainably. “There’s very much a well-understood need across all companies to become AI-enabled in some way,” he says. “But the problem is a lot of folks don’t necessarily know how to define that.” In the absence of clarity, organizations often fall into scattershot experimentation. What concerns McQuillan the most is how the pace of the “race” shapes priorities. ... When asked whether the long-running mantra “data is the new oil” still holds in the era of large language models and agentic workflows, McQuillan is direct. “It holds true now more than ever,” he says. He acknowledges why attention drifts: “It’s natural for people to gravitate toward things that are shiny,” and “AI in and of itself is an absolutely magnificent space.”


Building a Least-Privilege AI Agent Gateway for Infrastructure Automation with MCP, OPA, and Ephemeral Runners

An agent misinterpreting an instruction can initiate destructive infrastructure changes, such as tearing down environments or modifying production resources. A compromised agent identity can be abused to exfiltrate secrets, create unauthorized workloads, or consume resources at scale. In practice, teams often discover these issues late, because traditional logs record what happened, but not why an agent decided to act in the first place. For organizations, this exposure creates operational and governance challenges. Incidents become harder to investigate, change approvals are bypassed unintentionally, and security teams are left with incomplete audit trails. Over time, this problem erodes trust in automation itself, forcing teams to either roll back agent usage or accept increasing levels of unmanaged risk. ... A more sustainable approach is to introduce an explicit control layer between agents and the systems they operate on. In this article, we focus on an AI Agent Gateway, a dedicated boundary that validates intent, enforces policy as code, and isolates execution before any infrastructure or service API is invoked. Rather than treating agents as privileged actors, this model treats them as untrusted requesters whose actions must be authorized, constrained, observed, and contained. ... In the context of AI-driven automation, defense in depth means that no single component (neither the agent, the gateway, nor the execution environment) has enough authority on its own to cause damage.
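A deny-by-default authorization check of the kind such a gateway enforces can be sketched in a few lines. Real deployments would express this as policy-as-code in an engine like OPA; the agent names, actions, and constraint fields below are all illustrative:

```python
# Explicit allowlist of (agent, action) pairs with context constraints.
# Anything not listed is denied -- agents are untrusted requesters.
POLICY = {
    ("deploy-agent", "restart_service"): {"env": {"staging"}},
    ("deploy-agent", "scale_workload"):  {"env": {"staging", "prod"}},
}

def authorize(agent: str, action: str, context: dict) -> bool:
    """Allow only explicitly whitelisted (agent, action) pairs whose
    request context satisfies every constraint; deny everything else."""
    constraints = POLICY.get((agent, action))
    if constraints is None:
        return False  # not whitelisted: deny by default
    return all(context.get(key) in allowed
               for key, allowed in constraints.items())

print(authorize("deploy-agent", "restart_service", {"env": "staging"}))  # True
print(authorize("deploy-agent", "restart_service", {"env": "prod"}))     # False
print(authorize("deploy-agent", "delete_cluster", {"env": "staging"}))   # False
```

Because the default branch returns False, a brand-new destructive action an agent hallucinates (like `delete_cluster` above) is refused without anyone having to anticipate it.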


Demystifying CERT‑In’s Elemental Cyber Defense Controls: A Guide for MSMEs

For India’s Micro, Small, and Medium Enterprises (MSMEs), cybersecurity is no longer a “big company problem.” With digital payments, SaaS adoption, cloud-first operations, and supply‑chain integrations becoming the norm, MSMEs are now prime targets for cyberattacks. To help these organizations build a strong foundational security posture, the Indian Computer Emergency Response Team (CERT-In) has released CIGU-2025-0003, a baseline of Cyber Defense Controls prescribing 15 Elemental Cyber Security Controls—a pragmatic set of safeguards designed to uplift the nation’s cyber hygiene. ... These controls, mapped to 45 recommendations, enable essential digital hygiene, protect against ransomware, ensure regulatory compliance, and are required for annual audits. CERT‑In’s Elemental Controls are designed as minimum essential practices that every Indian organization—regardless of size—should implement. ... The CERT-In guidelines offer a simplified, actionable starting point for MSMEs to benchmark their security. These controls are intentionally prescriptive, unlike ISO or NIST, which are more framework‑oriented. ... Because threats constantly evolve and MSMEs face unique risks depending on their industry and data sensitivity, organizations should view this framework not as an endpoint, but as the first critical step toward building a comprehensive security program akin to ISO 27001 or NIST CSF 2.0.


AI-fuelled cyber attacks hit in minutes, warns CrowdStrike

CrowdStrike reports a sharp acceleration in cyber intrusions, with attackers moving from initial access to lateral movement in less than half an hour on average as widely available artificial intelligence tools become embedded in criminal workflows. Its latest Global Threat Report puts average eCrime "breakout time" at 29 minutes in 2025, a 65% improvement on the prior year. ... Alongside generative AI use in preparation and execution, the report describes attempts to exploit AI systems directly. Adversaries injected malicious prompts into GenAI tools at more than 90 organisations, using them to generate commands associated with credential theft and cryptocurrency theft. ... Incidents linked to North Korea rose more than 130%, while activity by the group CrowdStrike tracks as FAMOUS CHOLLIMA more than doubled. The report says DPRK-nexus actors used AI-generated personas to scale insider operations. It also cites a large cryptocurrency theft attributed to the actor it calls PRESSURE CHOLLIMA, valued at USD $1.46 billion and described as the largest single financial heist ever reported. The report also references AI-linked tooling used by other state and criminal groups. Russia-nexus FANCY BEAR deployed LLM-enabled malware, which it named LAMEHUG, for automated reconnaissance and document collection. The eCrime actor tracked as PUNK SPIDER used AI-generated scripts to speed up credential dumping and erase forensic evidence.


Shadow mode, drift alerts and audit logs: Inside the modern audit loop

When systems moved at the speed of people, it made sense to do compliance checks every so often. But AI doesn't wait for the next review meeting. The change to an inline audit loop means audits will no longer occur just once in a while; they happen all the time. Compliance and risk management should be "baked in" to the AI lifecycle from development to production, rather than just post-deployment. This means establishing live metrics and guardrails that monitor AI behavior as it occurs and raise red flags as soon as something seems off. ... Cultural shift is equally important: Compliance teams must act less like after-the-fact auditors and more like AI co-pilots. In practice, this might mean compliance and AI engineers working together to define policy guardrails and continuously monitor key indicators. With the right tools and mindset, real-time AI governance can “nudge” and intervene early, helping teams course-correct without slowing down innovation. In fact, when done well, continuous governance builds trust rather than friction, providing shared visibility into AI operations for both builders and regulators, instead of unpleasant surprises after deployment. ... Shadow mode is a way to check compliance in real time: It ensures that the model handles inputs correctly and meets policy standards before it is fully released. One AI security framework showed how this method worked: Teams first ran AI in shadow mode, then compared AI outputs with human decisions to determine trust.
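The shadow-mode comparison described above reduces to a simple agreement metric: run the model alongside the human process, compare decisions case by case, and only promote the model once agreement clears a threshold. A minimal sketch, with decision labels and the 95% gate chosen purely for illustration:

```python
def shadow_agreement(human_decisions, shadow_decisions):
    """Fraction of cases where the shadowed model matched the human
    decision on the same input. A gate for promotion out of shadow mode."""
    matches = sum(h == s for h, s in zip(human_decisions, shadow_decisions))
    return matches / len(human_decisions)

# Same five cases, decided by humans (production) and the model (shadow)
human  = ["approve", "reject", "approve", "approve", "reject"]
shadow = ["approve", "reject", "reject", "approve", "reject"]

rate = shadow_agreement(human, shadow)
print(rate)          # 0.8
print(rate >= 0.95)  # False -> keep the model in shadow mode
```

The same loop doubles as a drift alert after release: a falling agreement (or policy-conformance) rate on recent traffic is exactly the "red flag as soon as something seems off" signal.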


Making AI Compliance Practical: A Guide for Data Teams Navigating Risk, Regulation, and Reality

As AI tools become more embedded in enterprise workflows, data teams are encountering a growing reality: compliance isn’t only a legal concern but also a design constraint, a quality signal, and, often, a competitive differentiator. But navigating compliance can feel complex, especially for teams focused on building and shipping. What is the good news? It doesn’t have to be. When approached intentionally, compliance becomes a pathway to better decisions, not a barrier. ... Automation can help with regulations, but only if it's used correctly. I once evaluated a tool that used algorithms to detect private information. It worked well with English, but when tested with material in more than one language, it missed a few personal identifiers. The team thought it was "smart enough." It wasn't. We kept the automation, but we added human review for rare cases, confidence thresholds to trigger additional checks, and alerts for uncommon input formats. The automation stayed, but with built-in checks and balances. ... The biggest compliance failures don’t come from bad people. They come from good teams moving fast, skipping hard questions, and assuming nothing will go wrong. But compliance isn’t a blocker. It’s a product quality signal. People will trust you more if they are aware that your team has carefully considered the details.
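The checks-and-balances pattern described above (confidence thresholds plus alerts on uncommon inputs, with a human-review fallback) can be sketched as a small routing function. The field names, format codes, and 0.9 threshold are all hypothetical:

```python
def route(finding: dict, threshold: float = 0.9) -> str:
    """Route an automated PII detection: low-confidence results and
    uncommon input formats go to human review instead of being
    trusted automatically."""
    if finding["format"] not in {"en", "latin"}:
        return "human_review"   # uncommon input format: alert and escalate
    if finding["confidence"] < threshold:
        return "human_review"   # below the confidence threshold
    return "auto_accept"

print(route({"format": "en", "confidence": 0.97}))        # auto_accept
print(route({"format": "en", "confidence": 0.62}))        # human_review
print(route({"format": "cyrillic", "confidence": 0.99}))  # human_review
```

The key design choice is that the escalation paths are explicit code, not an afterthought: the automation keeps running at full speed on the cases it is known to handle well, and everything else is routed to a person.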


Tata Communications’ Andrew Winney on why SASE is now non-negotiable

Zero Trust is often discussed as a product decision, but in reality it is a journey. Many enterprises start with a few use cases, such as securing internet access or enabling remote access to private applications. But they do not always extend those principles across contractors, third-party users, software-as-a-service applications and hybrid environments. Practical Zero Trust requires enterprises to rethink access fundamentally. Every request must be evaluated based on who the user is, the context from which they are accessing, the device they are using and the resource they are requesting. Access must then be granted only to that specific resource. ... Secure Access Service Edge represents a structural convergence of networking and security rather than a simple technology swap. What are the most critical architectural and change-management considerations enterprises must address during this transition? SASE is not a one-time technology change. It represents the convergence of networking and security under unified orchestration and policy management. That transition takes time and must be managed carefully. We typically work with enterprises through phased transition plans. If an organisation’s immediate priority is securing internet access or private application access for remote users, we begin there and expand to additional use cases over time. Integration is critical. Enterprises have existing investments in cloud platforms, local area networks and security tools.
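The per-request evaluation Winney describes (user, context, device, resource, with access granted only to that specific resource) can be sketched as a single decision function. Every attribute and policy value here is invented for illustration; real deployments would pull device posture and geo policy from identity and endpoint systems:

```python
def resource_policy(resource: str) -> dict:
    """Illustrative per-resource policy store."""
    return {"allowed_geos": {"IN", "SG"}}

def grant_access(user: dict, device: dict, context: dict, resource: str) -> bool:
    """Evaluate a single request on who the user is, their device
    posture, the access context, and the specific resource requested.
    Every check must pass; there is no network-wide grant."""
    checks = [
        user["authenticated"],
        resource in user["entitlements"],        # only that specific resource
        device["managed"] and device["patched"], # device posture
        context["geo"] in resource_policy(resource)["allowed_geos"],
    ]
    return all(checks)

user = {"authenticated": True, "entitlements": {"payroll-app"}}
device = {"managed": True, "patched": True}
print(grant_access(user, device, {"geo": "IN"}, "payroll-app"))  # True
print(grant_access(user, device, {"geo": "US"}, "payroll-app"))  # False
```

Extending Zero Trust to contractors and third parties then becomes a matter of giving them entries in the same entitlement and posture model, rather than carving out exceptions.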

Daily Tech Digest - July 26, 2025


Quote for the day:

"Small daily improvements over time lead to stunning results." -- Robin Sharma


Data Engineering in the Age of AI: Skills To Master Now

Streaming requires a new mindset. You must reason about event time compared to processing time, manage watermarking and windowing and guarantee exactly-once semantics even when things change midstream. These design patterns must be built into your pipelines from the beginning. ... Agentic AI stretches the typical data engineer’s streaming data skill set because it is no longer about a single model running in isolation. Today, we see networks of perception agents, reasoning agents and execution agents working together, each handling tasks and passing insights to the next in real time. If you know only how to schedule batch ETL jobs or deploy an inference server, you’re missing a core skill: how to build high-throughput, low-latency pipelines that keep these agents reliable and responsive in production. ... A single slow or broken stream can cause cascading failures in multiagent systems. Use schema registries, enforce data contracts and apply exactly-once semantics to maintain trust in your streaming infrastructure. ... Communication presents another challenge. Data scientists often discuss “precision” as a metric that data engineers must translate into reality. Implement evaluation scores like factual consistency checks, entity precision comparisons and human-in-the-loop review pipelines.
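The event-time-versus-processing-time distinction above can be made concrete with a toy tumbling-window assigner that tracks a watermark and drops late arrivals. Window size and allowed lateness are arbitrary; real engines like Flink or Beam also offer side outputs for the dropped events:

```python
from collections import defaultdict

def tumbling_windows(events, window=60, allowed_lateness=30):
    """Assign events to event-time tumbling windows, dropping events
    that arrive after the watermark (the max event time seen so far,
    minus the allowed lateness)."""
    windows = defaultdict(list)
    watermark = float("-inf")
    for event_time, value in events:  # events in processing (arrival) order
        watermark = max(watermark, event_time - allowed_lateness)
        if event_time < watermark:
            continue                  # too late: dropped (or side-output)
        windows[event_time // window * window].append(value)
    return dict(windows)

# (event_time_seconds, value), arriving out of order
events = [(5, "a"), (70, "b"), (20, "c"), (130, "d"), (10, "e")]
print(tumbling_windows(events))  # {0: ['a'], 60: ['b'], 120: ['d']}
```

Note that "c" and "e" carry early event times but arrive after the watermark has advanced past them, so they are excluded; deciding what to do with such stragglers is exactly the design work that must be built into pipelines from the beginning.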


Anthropic unveils ‘auditing agents’ to test for AI misalignment

Anthropic said the first environment it developed tests an agent’s ability to complete an alignment audit for an intentionally misaligned model. For this test, the researchers brought the investigator agent to the environment and equipped it with similar tools that human teams use. The researchers instructed the agent to identify the root cause of the misalignment and determine the specific behaviors that were misaligned. According to the researchers, the investigator agent successfully found the root cause of the issues 10-13% of the time. Still, this performance improved to 42% “when using a super-agent approach that aggregates findings across multiple investigations.” ... The next step in most alignment audits is typically a behavioral evaluation, which determines whether the model consistently misbehaves in various settings. To test its evaluation agent, the researchers designed 10 models “given system prompt instructions to exhibit a specific behavior, such as excessive deference to the user.” They ran the agent five times per model and saw that the agent correctly found and flagged at least one quirk of the model. However, the agent sometimes failed to identify specific quirks consistently. It had trouble evaluating subtle quirks, such as self-promotion and research-sandbagging, as well as quirks that are difficult to elicit, like the Hardcode Test Cases quirk.


The agentic experience: Is MCP the right tool for your AI future?

As enterprises race to operationalize AI, the challenge isn't only about building and deploying large language models (LLMs), it's also about integrating them seamlessly into existing API ecosystems while maintaining enterprise level security, governance, and compliance. Apigee is committed to leading you on this journey. Apigee streamlines the integration of gen AI agents into applications by bolstering their security, scalability, and governance. While the Model Context Protocol (MCP) has emerged as a de facto method of integrating discrete APIs as tools, the journey of turning your APIs into these agentic tools is broader than a single protocol. This post highlights the critical role of your existing API programs in this evolution and how ... Leveraging MCP services across a network requires specific security constraints. Perhaps you would like to add authentication to your MCP server itself. Once you’ve authenticated calls to the MCP server you may want to authorize access to certain tools depending on the consuming application. You may want to provide first class observability information to track which tools are being used and by whom. Finally, you may want to ensure that whatever downstream APIs your MCP server is supplying tools for also has minimum guarantees of security, like those already outlined above.
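The per-consumer tool authorization and observability described above can be sketched as a thin dispatch layer in front of an MCP-style server. The application names, tool names, and grant table are hypothetical, not Apigee or MCP APIs:

```python
# Which consuming applications may call which tools (illustrative grants)
TOOL_GRANTS = {
    "support-bot":   {"search_tickets", "get_order_status"},
    "finance-agent": {"get_order_status", "issue_refund"},
}

audit_trail = []  # first-class observability: who used which tool

def call_tool(app: str, tool: str, args: dict):
    """Authorize the consuming application for this specific tool,
    record the attempt either way, and dispatch only if allowed."""
    allowed = tool in TOOL_GRANTS.get(app, set())
    audit_trail.append({"app": app, "tool": tool, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{app} may not call {tool}")
    return {"tool": tool, "args": args, "status": "dispatched"}

print(call_tool("support-bot", "search_tickets", {"q": "late delivery"}))
try:
    call_tool("support-bot", "issue_refund", {"order": 42})
except PermissionError as e:
    print(e)  # support-bot may not call issue_refund
```

Because denied attempts are logged as well as allowed ones, the audit trail answers both "which tools are used and by whom" and "which apps are probing beyond their grants".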


AI Innovation: 4 Steps For Enterprises To Gain Competitive Advantage

A skill is a single ability, such as the ability to write a message or analyze a spreadsheet and trigger actions from that analysis. An agent independently handles complex, multi-step processes to produce a measurable outcome. We recently announced an expanded network of Joule Agents to help foster autonomous collaboration across systems and lines of business. This includes out-of-the-box agents for HR, finance, supply chain, and other functions that companies can deploy quickly to help automate critical workflows. AI front-runners, such as Ericsson, Team Liquid, and Cirque du Soleil, also create customized agents that can tackle specific opportunities for process improvement. Now you can build them with Joule Studio, which provides a low-code workspace to help design, orchestrate, and manage custom agents using pre-defined skills, models, and data connections. This can give you the power to extend and tailor your agent network to your exact needs and business context. ... Another way to become an AI front-runner is to tackle fragmented tools and solutions by putting in place an open, interoperable ecosystem. After all, what good is an innovative AI tool if it runs into blockers when it encounters your other first- and third-party solutions? 


Hard lessons from a chaotic transformation

The most difficult part of this transformation wasn’t the technology but getting people to collaborate in new ways, which required a greater focus on stakeholder alignment and change management. So my colleague first established a strong governance structure. A steering committee with leaders from key functions like IT, operations, finance, and merchandising met biweekly to review progress and resolve conflicts. This wasn’t a token committee, but a body with authority. If there were any issues with data exchange between marketing and supply chain, they were addressed and resolved during the meetings. By bringing all stakeholders together, we were also able to identify discrepancies early on. For example, when we discovered a new feature in the inventory system could slow down employee workflows, the operations manager reported it, and we immediately adjusted the rollout plan. Previously, such issues might not have been identified until after the full rollout and subsequent finger-pointing between IT and business departments. The next step was to focus on communication and culture. From previous failed projects, we knew that sending a few emails wasn’t enough, so we tried a more personal approach. We identified influential employees in each department and recruited them as change champions.


Benchmarks for AI in Software Engineering

HumanEval and SWE-bench have taken hold in the ML community, and yet, as indicated above, neither is necessarily reflective of LLMs’ competence in everyday software engineering tasks. I conjecture one of the reasons is the differences in points of view of the two communities! The ML community prefers large-scale, automatically scored benchmarks, as long as there is a “hill climbing” signal to improve LLMs. The business imperative for LLM makers to compete on popular leaderboards can relegate the broader user experience to a secondary concern. On the other hand, the software engineering community needs benchmarks that capture specific product experiences closely. Because curation is expensive, the scale of these benchmarks is sufficient only to get a reasonable offline signal for the decision at hand (A/B testing is always carried out before a launch). Such benchmarks may also require a complex setup to run, and sometimes are not automated in scoring; but these shortcomings can be acceptable considering a smaller scale. For exactly these reasons, these are not useful to the ML community. Much is lost due to these different points of view. It is an interesting question as to how these communities could collaborate to bridge the gap between scale and meaningfulness and create evals that work well for both communities.


Scientists Use Cryptography To Unlock Secrets of Quantum Advantage

When a quantum computer successfully handles a task that would be practically impossible for current computers, this achievement is referred to as quantum advantage. However, this advantage does not apply to all types of problems, which has led scientists to explore the precise conditions under which it can actually be achieved. While earlier research has outlined several conditions that might allow for quantum advantage, it has remained unclear whether those conditions are truly essential. To help clarify this, researchers at Kyoto University launched a study aimed at identifying both the necessary and sufficient conditions for achieving quantum advantage. Their method draws on tools from both quantum computing and cryptography, creating a bridge between two fields that are often viewed separately. ... “We were able to identify the necessary and sufficient conditions for quantum advantage by proving an equivalence between the existence of quantum advantage and the security of certain quantum cryptographic primitives,” says corresponding author Yuki Shirakawa. The results imply that when quantum advantage does not exist, then the security of almost all cryptographic primitives — previously believed to be secure — is broken. Importantly, these primitives are not limited to quantum cryptography but also include widely-used conventional cryptographic primitives as well as post-quantum ones that are rapidly evolving.


It’s time to stop letting our carbon fear kill tech progress

With increasing social and regulatory pressure, reluctance by a company to reveal emissions is ill-received. For example, in Europe the Corporate Sustainability Reporting Directive (CSRD) currently requires large businesses to publish their emissions and other sustainability datapoints. Opaque sustainability reporting undermines environmental commitments and distorts the reference points necessary for net zero progress. How can organisations work toward a low-carbon future when their measurement tools are incomplete or unreliable? The issue is particularly acute regarding Scope 3 emissions. Scope 3 emissions often account for the largest share of a company’s carbon footprint and are those generated indirectly along the supply chain by a company’s vendors, including emissions from technology infrastructure like data centres. ... It sounds grim, but there is some cause for optimism. Most companies are in a better position than they were five years ago and acknowledge that their measurement capabilities have improved. We need to accelerate the momentum of this progress to ensure real action. Earth Overshoot Day is a reminder that climate reporting for the sake of accountability and compliance only covers the basics. The next step is to use emissions data as benchmarks for real-world progress.


Why Supply Chain Resilience Starts with a Common Data Language

Building resilience isn’t just about buying more tech, it’s about making data more trustworthy, shareable, and actionable. That’s where global data standards play a critical role. The most agile supply chains are built on a shared framework for identifying, capturing, and sharing data. When organizations use consistent product and location identifiers, such as GTINs (Global Trade Item Numbers) and GLNs (Global Location Numbers) respectively, they reduce ambiguity, improve traceability, and eliminate the need for manual data reconciliation. With a common data language in place, businesses can cut through the noise of siloed systems and make faster, more confident decisions. ... Companies further along in their digital transformation can also explore advanced data-sharing standards like EPCIS (Electronic Product Code Information Services) or RFID (radio frequency identification) tagging, particularly in high-volume or high-risk environments. These technologies offer even greater visibility at the item level, enhancing traceability and automation. And the benefits of this kind of visibility extend far beyond trade compliance. Companies that adopt global data standards are significantly more agile. In fact, 58% of companies with full standards adoption say they manage supply chain agility “very well” compared to just 14% among those with no plans to adopt standards, studies show.
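One concrete piece of the GS1 standards mentioned above is the GTIN check digit, computed with the standard mod-10 algorithm: weight the body digits 3, 1, 3, 1, ... from the rightmost position, then take whatever is needed to reach a multiple of 10. A small sketch:

```python
def gtin_check_digit(body: str) -> int:
    """GS1 mod-10 check digit for a GTIN body (all digits except the
    last): weight 3, 1, 3, 1, ... starting from the rightmost digit,
    sum, and return the amount needed to reach a multiple of 10."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10

def is_valid_gtin(gtin: str) -> bool:
    """Validate a full GTIN (e.g. GTIN-13) against its check digit."""
    return gtin.isdigit() and gtin_check_digit(gtin[:-1]) == int(gtin[-1])

print(gtin_check_digit("400638133393"))  # 1
print(is_valid_gtin("4006381333931"))    # True
print(is_valid_gtin("4006381333932"))    # False
```

A shared, verifiable identifier like this is what makes cross-company reconciliation automatic: a transposed digit is caught at ingestion rather than surfacing later as a mismatched shipment record.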


Opinion: The AI bias problem hasn’t gone away you know

When we build autonomous systems and allow them to make decisions for us, we enter a strange world of ethical limbo. A self-driving car forced to make a similar decision to protect the driver or a pedestrian in a case of a potentially fatal crash will have much more time than a human to make its choice. But what factors influence that choice? ... It’s not just the AI systems shaping the narrative, raising some voices while quieting others. Organisations made up of ordinary flesh-and-blood people are doing it too. Irish cognitive scientist Abeba Birhane, a highly-regarded researcher of human behaviour, social systems and responsible and ethical artificial intelligence was asked to give a keynote recently for the AI for Good Global Summit. According to her own reports on Bluesky, a meeting was requested just hours before presenting her keynote: “I went through an intense negotiation with the organisers (for over an hour) where we went through my slides and had to remove anything that mentions ‘Palestine’ ‘Israel’ and replace ‘genocide’ with ‘war crimes’…and a slide that explains illegal data torrenting by Meta, I also had to remove. In the end, it was either remove everything that names names (Big Tech particularly) and remove logos, or cancel my talk.” 

Daily Tech Digest - March 13, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein


Becoming an AI-First Organization: What CIOs Must Get Right

"The three pillars of an AI-first organization are data, infrastructure and people. Data must be treated as a strategic asset with robust quality, privacy and security standards," Simha said. Along with responsible AI, responsible data management is equally crucial. When implemented effectively, data privacy, regulatory compliance, bias and security do not pose issues to an AI-first organization. Yeo described the AI-first approach as both a journey and a destination. "Just using AI tools doesn't make you AI-first. Organizations must explore AI's full potential." He compared today's AI evolution to the early days of the internet. "Decades ago, businesses knew they had to go online but didn't know how. Now, if you're not online, you're obsolete. AI is following the same trajectory - it will soon be indispensable for business success." ... Simha stressed the importance of enterprise architecture in AI deployment. "AI success depends on how well data flows across an organization. Organizations must select the right architecture patterns - real-time data processing requires a Kappa architecture, while periodic reporting benefits from a Lambda approach. A well-designed data foundation is crucial," Simha said. As AI adoption grows, ethical concerns and regulatory compliance remain critical considerations. 


From Box-Ticking to Risk-Tackling: Evolving Your GRC Beyond Audits

The problem, though, is that merely passing an audit does not necessarily mean a business is doing all it can to mitigate its risks. On their own, audits can fall short of driving full GRC maturity for several reasons ... Auditors are generally outsiders to the businesses they audit — which is good in the sense that it makes them objective evaluators. But it can also lead to situations where they have a limited understanding of what's really going on within a company's GRC practices and are beholden to the information provided by the company's team members on the other side of the assessment table. They may not ask the questions needed to gain adequate understanding to assess and find gaps, ultimately overlooking pitfalls that only insiders know about, and which would become obvious only following a higher degree of scrutiny than a standardized audit. ... But for companies that have made advanced GRC investments, such as automations that pull data from across a diverse set of disparate systems, deeper scrutiny will help validate the value that these investments have created. It may also uncover risk management weak points that the business is overlooking, allowing it to strengthen its GRC program even further. It's generally OK, by the way, if your business submits itself to a high degree of risk management scrutiny, only to fail the assessment because its controls are not as robust as it expected. 


How to use ChatGPT to write code - and my favorite trick to debug what it generates

After repeated tests, it became clear that if you ask ChatGPT to deliver a complete application, the tool will fail. A corollary to this observation is that if you know nothing about coding and want ChatGPT to build something, it will fail. Where ChatGPT succeeds -- and does so very well -- is in helping someone who already knows how to code to build specific routines and get tasks done. Don't ask for an app that runs on the menu bar. But if you ask ChatGPT for a routine to put a menu on the menu bar, and paste that into your project, the tool will do quite well. Also, remember that, while ChatGPT appears to have a tremendous amount of domain-specific knowledge (and often does), it lacks wisdom. As such, the tool may be able to write code, but it won't be able to write code containing the nuances for specific or complex problems that require deep experience. Use ChatGPT to demo techniques, write small algorithms, and produce subroutines. You can even get ChatGPT to help you break down a bigger project into chunks, and then you can ask it to help you code those chunks. ... But you can do several things to help refine your code, debug problems, and anticipate errors that might crop up. My favorite new AI-enabled trick is to feed code to a different ChatGPT session (or a different chatbot entirely) and ask, "What's wrong with this code?"


How AI-enabled ‘bossware’ is being used to track and evaluate your work

Employee monitoring tools can increase efficiency with features such as facial recognition, predictive analytics, and real-time feedback for workers, allowing them to better prioritize tasks and even prevent burnout. When AI is added, the software can be used to track activity patterns, flag unusual behavior, and analyze communication for signs of stress or dissatisfaction, according to analysts and industry experts. It also generates productivity reports, classifies activities, and detects policy violations. ... LLMs are often used in predicting employee behaviors, including the risk of quitting, unionizing, or other actions, Moradi said. However, their role is mostly in analyzing personal communications, such as emails or messages. That can be tricky, because interpreting messages across different people can lead to incorrect inferences about someone’s job performance. “If an algorithm causes someone to be laid off, legal recourse for bias or other issues with the decision-making process is unclear, and it raises important questions about accountability in algorithmic decisions,” she said. The problem, Moradi explained, is that while AI can make bossware more efficient and insightful, the data being collected by LLMs is obfuscated. “So, knowing the way that these decisions [like layoffs] are made are obscured by these, like, black boxes,” Moradi said.


Attackers Can Manipulate AI Memory to Spread Lies

By crafting a series of seemingly innocuous prompts, an attacker can insert misleading data into an AI agent's memory bank, which the model later relies on to answer unrelated queries from other users. Researchers tested Minja on three AI agents developed on top of OpenAI's GPT-4 and GPT-4o models. These include RAP, a ReAct agent with retrieval-augmented generation that integrates past interactions into future decision-making for web shops; EHRAgent, a medical AI assistant designed to answer healthcare queries; and QA Agent, a custom-built question-answering model that reasons using Chain of Thought and is augmented by memory. A Minja attack on the EHRAgent caused the model to misattribute patient records, associating one patient's data with another. In the RAP web shop experiment, a Minja attack tricked the AI into recommending the wrong product, steering users searching for toothbrushes to a purchase page for floss picks. The QA Agent fell victim to manipulated memory prompts, producing incorrect answers to multiple-choice questions based on poisoned context. Minja operates in stages. An attacker interacts with an AI agent by submitting prompts that contain misleading contextual information. Referred to as indication prompts, they appear to be legitimate but contain subtle memory-altering instructions. 


CISOs, are your medical devices secure? Attackers are watching closely

“To truly manage and prioritize risks, organizations need to look beyond technical scores and consider contextual risk factors that impact operations related to patient care. This can include identifying devices in critical care areas, legacy devices close to or past their end-of-life status, where any insecure communication protocols are, and how sensitive personal information is being stored,” Greenhalgh added. ... “For CISOs, the priority should be proactive engagement. First, implement real-time vulnerability tracking and ensure security patches can be deployed quickly without disrupting device functionality. Medical device security must be continuous—not just a checkpoint during development or regulatory submission. Second, regulatory alignment isn’t a one-time effort. The FDA now expects ongoing vulnerability monitoring, coordinated disclosure policies, and robust software patching strategies. Automating security processes—whether for SBOM (Software Bill of Materials) management, dependency tracking, or compliance reporting—reduces human error and improves response times. An SBOM is valuable not just for compliance but as a tool for tracking and mitigating vulnerabilities throughout a device’s lifecycle,” Ken Zalevsky, CEO of Vigilant Ops explained.


Can AI Teach You Empathy?

By leveraging AI-driven insights, banks can tailor their training programs to address specific skill gaps and enhance employee development. However, AI isn’t infallible, and it’s crucial for banks to implement tools that not only support learning but also foster a reliable and effective training environment. Striking the right balance between AI-driven training and human oversight ensures that these tools enhance employee growth without compromising accuracy or effectiveness. ... Experiential learning has long been a cornerstone of learning and development. Students, for example, who participate in experiential learning often develop a deeper understanding of the material and achieve statistically better outcomes than those who do not. While AI may not perfectly replicate a customer’s response, it provides new employees with a valuable opportunity to practice handling complex issues before interacting with real customers. AI-powered versions of these trainings can make it more accessible, allowing more employees to benefit. ... Many employees find it challenging to incorporate AI into their daily tasks and may need guidance to understand its value, especially in managing customer interactions. Some may also be resistant, fearing that AI could eventually replace their jobs, Huang says.


The Missing Piece in Platform Engineering: Recognizing Producers

The evolution of technology has shown us time and again that those who innovate are the ones who shape the future. Alan Kay’s words resonate strongly in the modern era, where software, artificial intelligence, and digital transformation continue to drive change across industries. ... “A Platform is a curated experience for engineers (the platform’s customers)” is a quote from the Team Topologies book. It is excellent and doesn’t contradict the platform business way of thinking, but it only calls out one side of the producer/consumer model. This is precisely the trap I fell into. When I worked with platform builders, we focused almost entirely on the application teams that consumed platform services. We rapidly became the blocker to those teams, just like the SRE and DevOps teams that came before us. We couldn’t onboard capabilities and features fast enough, meaning we were supporting the old ways while trying to build the new. ... Chris Plank, Enterprise Architect at NatWest, discusses this in our interview for his Platform Engineering Day talk: “We have since been set four challenges by leadership that I talk about: do things faster, do things simpler, enable inner sourcing, and deliver centralized capabilities in a self-service way… Our inner sourcing model will allow us to have multiple teams working on our platform… They are empowered to start contributing changes.”


Data Centers in Space: Separating Fact from Science Fiction

Among the many reasons for interest in orbital data centers is the potential for improved sustainability. However, the definition of a data center in space remains fluid, shaped by current technological limitations and evolving industry perspectives. Lonestar Data Holdings chairman and CEO Christopher Stott told Data Center Knowledge that his firm works from the definitions of a data center from industry standards bodies including the Uptime Institute and the Building Industry Consulting Service International (BICSI). ... Axiom Space plans to deploy larger ODC infrastructure in the coming years that is more similar to terrestrial data centers in terms of utility and capacity. The goal is to develop and operationalize terrestrial-grade cloud regions in low-Earth orbit (LEO). ... James noted that space presents the ultimate edge computing challenge – limited bandwidth, extreme conditions, and no room for failure. “To ensure resilience and autonomy, the platform incorporates automated rollbacks and self-healing capabilities through delta updates and health monitoring,” James said. ... With the Axiom Space deployment, the initial workloads will be small but scalable to the much larger ODC infrastructure that the company plans to deploy in the coming years. “Red Hat Device Edge enables secure, low-latency data processing directly on the ISS, allowing applications to run where the data is being generated,” James said.


CISA cybersecurity workforce faces cuts amid shifting US strategy

Analysts suggest these layoffs and funding cuts indicate a broader strategic shift in the U.S. government’s cybersecurity approach. Neil Shah, VP at Counterpoint Research, sees both risks and opportunities in the restructuring. “In the near to mid-term, this could weaken the US cybersecurity infrastructure. However, with AI proliferating, the US government likely has a Plan B — potentially shifting toward privatized cybersecurity infrastructure projects, similar to what we’re seeing with Project Stargate for AI,” Shah said. “If these gaps aren’t filled with viable alternatives, vulnerabilities could escalate from small-scale exploits to large-scale cyber incidents at state or federal levels. Signs point to a broader cybersecurity strategy reboot, with funding likely being redirected toward more efficient and sophisticated players rather than a purely vertical, government-led approach.” While some fear heightened risks, others argue the shift could lead to more tech-driven solutions. Faisal Kawoosa, founder and lead analyst at Techarc, views the move as part of a larger digital transformation. “Elon Musk’s role is not just about cost-cutting but also about leveraging technology to create more efficient systems,” Kawoosa said. “DOGE operates as a digital transformation program for US governance, exploring tech-first approaches to achieving similar or better results.”

Daily Tech Digest - March 05, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel


Zero-knowledge cryptography is bigger than web3

Zero-knowledge proofs have existed since the 1980s, long before the advent of web3. So why limit their potential to blockchain applications? Traditional companies can—and should—adopt ZK technology without fully embracing web3 infrastructure. At a basic level, ZKPs unlock the ability to prove something is true without revealing the underlying data behind that statement. Ideally, a prover creates the proof, a verifier verifies it, and these two parties are completely isolated from each other in order to ensure fairness. That’s really it. There’s no reason this concept has to be trapped behind the learning curve of web3. ... AI’s potential for deception is well-established. However, there are ways we can harness AI’s creativity while still trusting its output. As artificial intelligence pervades every aspect of our lives, it becomes increasingly important that we know the models training the AIs we rely on are legitimate because if they aren’t, we could literally be changing history and not even realize it. With ZKML, or zero-knowledge machine learning, we avoid those potential pitfalls, and the benefits can still be harnessed by web2 projects that have zero interest in going onchain. Recently, the University of Southern California partnered with the Shoah Foundation to create something called IWitness, where users are able to speak or type directly to holograms of Holocaust survivors.
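The prover/verifier split described above can be illustrated with a classic toy protocol. The sketch below is a Schnorr-style identification scheme over a deliberately tiny (and completely insecure) group; it illustrates the ZKP idea in general, not any particular production system mentioned in the article.

```python
import random

# Toy Schnorr-style interactive proof: the prover convinces the verifier
# it knows x such that y = g^x (mod p) without ever revealing x.
# Parameters are tiny and insecure -- purely illustrative.
p, q, g = 23, 22, 5           # modulus, group order of g, generator

def prove(x, c, r):
    """Prover's response to challenge c, using commitment nonce r."""
    return (r + c * x) % q

def verify(y, t, c, s):
    """Verifier checks g^s == t * y^c (mod p)."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = 7                         # prover's secret
y = pow(g, x, p)              # public value

r = random.randrange(q)       # prover picks a nonce and commits
t = pow(g, r, p)
c = random.randrange(q)       # verifier issues a random challenge
s = prove(x, c, r)

assert verify(y, t, c, s)     # verifier accepts, yet learns nothing about x
```

The response `s` blinds the secret behind the random nonce `r`, which is exactly why the verifier gains confidence in the statement without gaining knowledge of the data behind it.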


How to Make Security Auditing an Important Part of Your DevOps Processes

There's a difference between a security audit and a simple vulnerability scan, however. Security auditing is a much more comprehensive evaluation of various elements that make up an organization's cybersecurity posture. Because of the sheer amount of data that most businesses store and use on a daily basis, it's critical to ensure that it stays protected. Failure to do this can lead to costly data compliance issues as well as significant financial losses. ... Quick development and rapid deployment are the primary focus of most DevOps practices. However, security has also become an equally, if not more important, component of modern-day software development. It's critical that security finds its way into every stage of the development lifecycle. Changing this narrative does, however, require everyone in the organization to place security higher up on their priority lists. This means the organization as a whole needs to develop a security-conscious business culture that helps to shape all the decisions made. ... Another way that automation can be used in software development is continuous security monitoring. In this scenario, specialized monitoring tools are used to regularly monitor an organization's system in real time.


The Critical Role of CISOs in Managing IAM, Including NHIs

As regulators catch up to the reality that NHIs pose the same (or greater) risks, organizations will be held accountable for securing all identities. This means enforcing least privilege for NHIs — just as with human users. It also means tracking the full lifecycle of machine identities, from creation to decommissioning, as well as auditing and monitoring API keys, tokens, and service accounts with the same rigor as employee credentials. Waiting for regulatory pressure after a breach is too late. CISOs must act proactively to get ahead of the curve on these coming changes. ... A modern IAM strategy must begin with comprehensive discovery and mapping of all identities across the enterprise. This includes understanding not just where the associated secrets are stored but also their origins, permissions, and relationships with other systems. Organizations need to implement robust secrets management platforms that can serve as a single source of truth, ensuring all credentials are encrypted and monitored. The lifecycle management of NHIs requires particular attention. Unlike human identities, which follow predictable patterns of employment and human lifestyles, machine identities require automated processes for creation, rotation, and decommissioning. 


Preparing the Workforce for an AI-Driven Economy: Skills of the Future

As part of creating awareness about AI, the opportunities that come with it, and its role in shaping our future, I speak at several global forums and conferences. This is the question I am frequently asked: How did you start your AI journey? Unlike the “hidden secret” that most would expect, my response is fairly simple: data. I had worked with data long enough that moving into AI felt like a natural transition. Data is the core of AI, so it is important to build data literacy first. It involves the ability to read, work with, analyze, and communicate data. In other words, interpreting data insights and using them to drive decision-making is an absolute must for everyone from junior employees to senior executives. No matter what your role is within an organization, honing this skill will serve you well in this AI-driven economy. Those who say that data is the new currency or the new oil are not entirely overstating its importance. ... AI is a highly collaborative field. No one person can build a high-performing, robust AI; it requires seamless collaboration across diverse teams. Because those teams bring diverse skills and backgrounds, a strong AI professional must possess the ability to communicate the results, the process, and the algorithms. If you want to ace a career in AI, be the person who can tailor the talk to the right audience and speak at the right altitude.


Prioritizing data and identity security in 2025

First, it’s important to get the basics right. Yes, new security threats are emerging on an almost daily basis, along with solutions designed to combat them. Security and business leaders can get caught up in chasing the “shiny objects” making headlines, but the truth is that most organizations haven’t even addressed the known vulnerabilities in their existing environments. Major news headline-generating hacks were launched on the backs of knowable, solvable technological weaknesses. As tempting as it can be to focus on the latest threats, organizations need to get the basics squared away. Many organizations don’t even have multifactor authentication (MFA) enabled ... It’s not just businesses racing to adopt AI—cybercriminals are already leveraging AI tools to make their tactics significantly more effective. For example, many are using AI to create persuasive, error-free phishing emails that are much more difficult to spot. One of the biggest concerns is the fact that AI is lowering the barrier to entry for attackers—even novice hackers can now use AI to code dangerous, triple-threat ransomware. On the other end of the spectrum, well-resourced nation-states are using AI to create manipulative deepfake videos that look just like the real thing. Fortunately, strong security fundamentals can help combat AI-enhanced attack tactics, but it’s important to be aware of how the technology is being used.


Study reveals delays in SaaS implementations are costing Indian enterprises in crores

Delayed SaaS implementations create cascading effects, affecting both ongoing and future digital transformation initiatives. As per the study, 92.5% of Indian enterprises recognise that timely implementation is critical, while the remaining consider it somewhat important. The study found that 67% of enterprises reported increased costs due to extended deployment timelines, making implementation overruns a direct financial burden. 53% of the respondents indicated that delays hindered digital transformation progress, slowing down innovation and business growth. Additionally, 48% of enterprises experienced customer dissatisfaction, while 46% faced missed business revenue and opportunities, impacting overall business performance. ... To mitigate these challenges, enterprises are shifting toward a platform-driven approach to SaaS implementation. This model enables faster deployments by leveraging automation, reducing customisation efforts, and ensuring seamless interoperability. The IDC study highlights that 59% of enterprises recognise automation and DevOps practices as key factors in shortening deployment timelines. By leveraging advanced automation, organisations can minimise manual dependencies, reduce errors, and improve implementation speed. 


Quantum Breakthrough: New Study Uncovers Hidden Behavior in Superconductors

To produce an electric current between two points in a normal conductor, one needs to apply a voltage, which acts as the pressure that pushes the charge along. But because of a peculiar quantum tunneling process known as the “Josephson effect,” current can flow between two superconductors without the need for an applied voltage. The FMFs influence this Josephson current in unique ways. In most systems, the current between two superconductors repeats itself at regular intervals. However, FMFs manifest themselves in a pattern of current that oscillates at half the normal rate, creating a unique signature that can help in their detection. ... One of the key findings revealed by Seradjeh and colleagues’ study is that the strength of the Josephson current—the amount of electrical flow—can be tuned using the “chemical potential” of the superconductors. Simply stated, the chemical potential acts as a dial that adjusts the properties of the material, and the researchers found that it could be modified by syncing with the frequency of the external energy source driving the system. This could give scientists a new level of control over quantum materials and opens up possibilities for applications in quantum information processing, where precise manipulation of quantum states is critical.
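The "half the normal rate" signature has a compact textbook form. In a conventional junction the current-phase relation is 2π-periodic, while Majorana modes mediate a 4π-periodic component; these are the standard relations from the literature, not equations taken from the study itself:

```latex
I(\varphi) = I_c \sin\varphi
  \quad \text{(conventional junction: } 2\pi\text{-periodic)}
\qquad
I_{\mathrm{M}}(\varphi) = I_{\mathrm{M}} \sin\!\left(\tfrac{\varphi}{2}\right)
  \quad \text{(Majorana-mediated: } 4\pi\text{-periodic)}
```

Halving the argument of the sine doubles the period in the phase φ, which is why the current pattern appears to oscillate at half the usual rate.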


Data Center Network Topology: A Guide to Optimizing Performance

To understand fully what this means, let’s step back and talk about how network traffic flows within a data center. Typically, traffic ultimately needs to move to and from servers. ... Data center network topology is important for several reasons: Network performance: Network performance hinges on the ability to move packets as quickly as possible and with minimal latency between servers and external endpoints. Poor network topologies may create bottlenecks that reduce network performance. Scalability: The amount of network traffic that flows through a data center may change over time. To accommodate these changes, network topologies must be flexible enough to scale. Cost-efficiency: Networking equipment can be expensive, and switches or routers that are under-utilized are a poor use of money. Ideally, network topology should ensure that switches and routers are used efficiently, but without approaching the point that they become overwhelmed and reduce network performance. Security: Although security is not a primary consideration when designing a network topology because it’s possible to enforce security policies using any common network design, topology does play a role in determining how easy it is to segment servers from the Internet and filter malicious traffic.


Ethics in action: Building trust through responsible AI development

The architecture discipline must continuously evaluate the landscape of emerging compliance directions and synthesize how their overall definition and intent can be translated into actionable architecture and design that best enables compliance. Parallel to this is to ensure their implementations are auditable so that governing bodies can clearly see that regulatory mandates are being met. When applied, various capabilities will enable the necessary flexible designs and architectures with supporting patterns for sustainable agility to ensure the various checks and policies are being enforced. ... The heavy hand of governance can dampen innovation, but this doesn’t need to happen. The same capabilities and patterns used to ensure ethical behaviors and compliance can also be applied to stimulate sensible innovation. As new LLMs, models, agents, and other components emerge, flexible/agile architecture and best practices in responsive engineering can provide the ability to infuse new market entries into a given product, service or offering. Leveraging feature toggles and threshold logic will provide safe inclusion of emerging technologies. ... While managing compliance through agile solution designs and architectures promotes a trustworthy customer experience, it does come with a cost of greater complexity.
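The feature-toggle-plus-threshold pattern mentioned above can be sketched in a few lines. The flag name, threshold value, and model names below are all invented for illustration:

```python
# Hypothetical sketch: a newly introduced model is used only when its
# toggle is on AND a measured quality score clears a threshold;
# otherwise traffic falls back to the stable model.

FLAGS = {"use_new_llm": True}     # flipped off for an instant rollback
QUALITY_THRESHOLD = 0.90          # minimum acceptable evaluation score

def pick_model(new_model_quality_score):
    if FLAGS["use_new_llm"] and new_model_quality_score >= QUALITY_THRESHOLD:
        return "new-llm"
    return "stable-llm"

assert pick_model(0.95) == "new-llm"
assert pick_model(0.80) == "stable-llm"   # below threshold: safe fallback
```

The design point is that the gate combines a human decision (the toggle) with a measured one (the threshold), so an emerging technology can be withdrawn without a redeploy.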


NTT Unveils First Quantum Computing Architecture Separating Memory and Processor

In this study, researchers applied the design concept of the load-store-type architecture used in modern computers to quantum computing. In a load-store architecture, the device is divided into a memory and a processor to perform calculations. By exchanging data using two abstracted instructions, “load” and “store,” programs can be built in a portable way that does not depend on specific processor or memory device structures. Additionally, the memory is only required to hold data, allowing for high memory utilization. Load-store computation is often associated with an increase in computation time due to the limited memory bandwidth between memory and computation spaces. ... Researchers expect these findings to enable the highly efficient utilization of quantum hardware, significantly accelerating the practical application of quantum computation. Additionally, the high program portability of this approach helps to ensure the compatibility between hardware advancement, error correction methods at the lower layer and the development of technology at the higher layer, such as programming languages and compilation optimization. The findings will facilitate the promotion of parallel advanced research in large-scale quantum computer development.


Daily Tech Digest - March 02, 2025


Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Weak cyber defenses are exposing critical infrastructure — how enterprises can proactively thwart cunning attackers to protect us all

Weak cybersecurity isn’t merely a corporate issue — it’s a national security risk. The 2021 Colonial Pipeline attack disrupted energy supplies and exposed vulnerabilities in critical industries. Rising geopolitical tensions, especially with China, amplify these risks. Recent breaches attributed to state-sponsored actors have exploited outdated telecommunications equipment and other legacy systems, revealing how complacency in updating technology can put national security in danger. For instance, last year’s hack of U.S. and international telecommunications companies exposed phone lines used by top officials and compromised data from systems for surveillance requests, threatening national security. Weak cybersecurity at these companies risks long-term costs, allowing state-sponsored actors to access sensitive information, influence political decisions and disrupt intelligence efforts. ... No company can face today’s cyber threats on its own. Collaboration between private businesses and government agencies is more than helpful — it’s imperative. Sharing threat intelligence in real-time allows organizations to respond faster and stay ahead of emerging risks. Public-private partnerships can also level the playing field by offering smaller companies access to resources like funding and advanced security tools they might not otherwise afford.


Evaluating the CISO

Delegation skills are an essential component that should be evaluated separately in this area. Effective delegation prevents the CISO from becoming a bottleneck, as micromanagement is unsuitable for the role. Delegating complex tasks not only lightens your load but also helps foster the team’s overall competence. Without strong delegation skills, CISOs cannot rate themselves highly in their relationship with the internal security team. ... A CISO is hired to lead, manage, and support specific projects or programs such as migrating to a cloud or hybrid infrastructure, implementing zero-trust principles, launching security awareness initiatives, or assessing risks and creating a roadmap for post-quantum cryptography implementation. The success of these initiatives ultimately falls under the CISO’s responsibility. To execute these programs effectively, the CISO relies heavily on their team and internal organizational peers. As such, building strong relationships with both is essential for successfully delivering projects. ... A CISO must have responsibility for the information security budget, which includes funding for the team, tools, and services. Without direct control over the budget, it becomes challenging to rate the relationship with management highly, as budget ownership is a critical aspect of the CISO’s role.


Unraveling Large Language Model Hallucinations

You might have seen model hallucinations. These are instances where LLMs generate incorrect, misleading, or entirely fabricated information that appears plausible. These hallucinations happen because LLMs do not “know” facts in the way humans do; instead, they predict words based on patterns in their training data. ... Supervised Fine-Tuning makes the model capable. However, even a well-trained model can generate misleading, biased, or unhelpful responses. Therefore, Reinforcement Learning with Human Feedback is required to align it with human expectations. We start with the assistant model, trained by SFT. For a given prompt, we generate multiple model outputs. Human labelers then rank or score these outputs based on quality, safety, and alignment with human preferences. We use this data to train a separate neural network that we call a reward model. The reward model imitates human scores; it is a simulator of human preferences. It probably has a transformer architecture, but it is not a language model in the sense that it generates diverse language. It’s just a scoring model.
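The reward model's training objective can be shown concretely. The snippet below is the standard pairwise (Bradley-Terry) loss commonly used to fit a scoring model to human rankings; the scalar scores here stand in for the output of the transformer-based reward network described above:

```python
import math

# Pairwise reward-model loss: given a human-preferred ("chosen") and a
# rejected response, penalize the model unless it scores the chosen
# response higher. loss = -log sigmoid(r_chosen - r_rejected)

def pairwise_loss(r_chosen, r_rejected):
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the scoring margin grows...
assert pairwise_loss(2.0, 0.0) < pairwise_loss(0.5, 0.0)
# ...and equals log(2) when the model is indifferent between the two.
assert abs(pairwise_loss(1.0, 1.0) - math.log(2)) < 1e-9
```

Training simply minimizes this loss over many human-labeled pairs, which is how the reward model comes to imitate human scores without ever generating language itself.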


How to Communicate the Business Value of Master Data Management

In an ideal scenario, MDM is integral to a broader D&A strategy, highlighting how D&A supports the organization's strategic goals. The strategy aligns with these goals, prioritizes the business outcomes it will support, and details what is needed to achieve them. Therefore, leaders must first understand and prioritize the explicit business outcomes that MDM will support before creating an MDM strategy. In other words, "improving decision-making" is not good enough. "Increase customer service levels by 5% by end of December 2025" is the level of detail required. D&A leaders may recognize that master data is causing a problem or limiting an opportunity, which is where they would rely on an MDM. If this is the case, those D&A leaders should consider questions that help identify the problem, KPIs, and key stakeholders in these cases. These questions help identify potential business outcomes that MDM could support. Figure 1 provides a worksheet to build this initial picture and facilitate stakeholder discussions. The worksheet maps high-level goals onto a run-grow-transform framework, which could also be represented by three columns for the primary business value drivers: risk, revenue, and cost.


4 ways to get your business ready for the agentic AI revolution

Agents could be used eventually, but only once a partnership approach identifies the right opportunities. "Agents are becoming a big part of how generative AI and machine learning are used in business today. The way agents will be used in travel will be fascinating to watch. I think this technology will certainly be a part of the mix," he said. "The process for Hyatt will be to find the right technologies -- and we'll do that in close partnership with our business leaders and the technology teams that run the applications. We'll then provide the AI services to drive those transitions for the business." ... Keith Woolley, chief digital and information officer at the University of Bristol, is another digital leader who sees the potential benefits of agents. However, he said these advantages will become manifest over the longer term. "We are looking at agentic AI, but we're not implementing it yet," he said. "We sit as a management team and ask questions like, 'Should we do our admissions process using agentic AI? What would be the advantage?'" Woolley told ZDNET he could envision a situation in which AI and automation help assess and inform candidates worldwide about the status of their applications.


Cloud Giants Collaborate on New Kubernetes Resource Management Tool

The core innovation of kro is the introduction of the ResourceGraphDefinition custom resource. kro encapsulates a Kubernetes deployment and its dependencies into a single API, enabling custom end-user interfaces that expose only the parameters applicable to a non-platform engineer. This masking hides the complexity of Kubernetes and cloud provider API endpoints that are not useful in a deployment context. ... kro works seamlessly with the existing cloud provider Kubernetes extensions that are available to manage cloud resources from Kubernetes: AWS Controllers for Kubernetes (ACK), Google's Config Connector (KCC), and Azure Service Operator (ASO). kro enables standardised, reusable service templates that promote consistency across different projects and environments, with the benefit of being entirely Kubernetes-native. It is still in the early stages of development. "As an early-stage project, kro is not yet ready for production use, but we still encourage you to test it out in your own Kubernetes development environments," the post states. ... Most significantly for the Crossplane community, Farcic questioned kro's purpose given its functional overlap with existing tools. "kro is serving more or less the same function as other tools created a while ago without any compelling improvement," he observed.
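To make the shape of this API concrete, here is a hedged sketch of what a ResourceGraphDefinition might look like. The field names follow kro's early v1alpha1 examples; treat the kind name `WebApp`, the resource names, and the defaults as illustrative assumptions, since the project's API is still evolving:

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app
spec:
  # The simplified API exposed to non-platform engineers:
  # they see only "name" and "replicas", not the full Deployment spec.
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      name: string
      replicas: integer | default=2
  # The underlying Kubernetes objects kro creates and wires together.
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
```

Once a platform team installs a definition like this, an application developer creates a plain `WebApp` object with two fields, and kro expands it into the full resource graph behind the scenes.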


Why a different approach to AIOps is needed for SD-WAN

AIOps tools enhance efficiency by integrating with IT management tooling, enabling proactive issue identification and streamlining operational processes. More than that, they optimize the performance, efficiency, and dependability of an organization's network resources to ensure an optimal user experience. On the infrastructure side, many organizations now rely on SD-WAN – software-defined wide area network – to manage and optimize data traffic across different types of networks. SD-WAN is an effective way to connect the organization and provide users with application access, helping businesses improve network performance, cut costs, and gain flexibility by connecting easily to various network types. ... AIOps tools take the information extracted from SD-WAN systems and autonomously resolve issues without human intervention. In other words, they apply predictive analytics to forecast future events or outcomes in network operations, making the whole system run more smoothly and reliably, while machine learning algorithms use this historical data to make predictions and proactively improve the performance of critical applications.
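A minimal sketch of the predictive loop such a tool might run over SD-WAN telemetry: smooth recent latency samples, forecast the next value, and trigger remediation (for example, steering traffic onto another link) before users notice degradation. The SLA threshold, the sample values, and the function names are illustrative assumptions, not any vendor's API:

```python
def forecast_next(samples, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing."""
    level = samples[0]
    for x in samples[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def should_reroute(latency_ms, sla_ms=150.0):
    """Proactively reroute when forecast latency would breach the SLA."""
    return forecast_next(latency_ms) > sla_ms

# A link whose latency is trending up toward the SLA limit:
degrading = [90, 100, 115, 135, 160, 190]
healthy = [40, 42, 41, 43, 40, 44]

print(should_reroute(degrading))  # trending toward a breach -> True
print(should_reroute(healthy))    # stable link -> False
```

The point is the shape of the workflow, not the model: production AIOps platforms replace the smoothing step with trained ML models, but the forecast-then-act loop is the same.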


AI-Driven Threat Detection and the Need for Precision

AI algorithms, particularly those based on machine learning, excel at sifting through massive datasets and identifying patterns that would be nearly impossible for us mere humans to spot. An AI system might analyze network traffic patterns to identify unusual data flows that could indicate a data exfiltration attempt. Alternatively, it could scan email attachments for malicious code that traditional antivirus software might miss. Ultimately, AI feeds on context and content. The effectiveness of these systems in protecting your security posture is inextricably linked to the quality of the data they are trained on and the precision of their algorithms. ... Finally, AI-driven threat detection will not eliminate the need for human expertise. Skilled security professionals should still oversee AI systems and make informed decisions based on their own contextual expertise and experience. Human oversight validates the AI's findings, and threat detection algorithms cannot fully replace the critical thinking and intuition of human analysts. There may come a time when human professionals exist in AI's shadow. Yet, for now, combining the power of AI with human knowledge and a commitment to continuous learning forms the building blocks of a sophisticated defense program.
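As a hedged sketch of the kind of pattern analysis described above, the snippet below compares a host's current outbound traffic volume against its historical baseline and flags statistically unusual spikes as possible exfiltration. The byte counts, the 3-sigma cutoff, and the function name are illustrative assumptions; real systems weigh far more signals and context:

```python
from statistics import mean, stdev

def is_anomalous(history, current, sigma=3.0):
    """Flag `current` if it sits more than `sigma` standard deviations
    above the host's historical mean outbound volume."""
    mu, sd = mean(history), stdev(history)
    # Guard against a zero deviation for perfectly flat baselines.
    return current > mu + sigma * max(sd, 1e-9)

# Outbound MB per hour for one workstation over the past week:
baseline = [12, 15, 11, 14, 13, 12, 16]

print(is_anomalous(baseline, 14))   # a normal workday -> False
print(is_anomalous(baseline, 450))  # sudden bulk upload -> True
```

This also illustrates why precision matters: a cutoff set too low drowns analysts in false positives, while one set too high lets a slow, low-volume exfiltration slip under the baseline.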


From Ambiguity to Accountability: Analyzing Recommender System Audits under the DSA

In these early years of the DSA, a range of stakeholders – online platforms, civil society, the European Commission (EC), and national Digital Service Coordinators (DSCs) – must experiment, identify good practices, and share lessons learned. Such iteration is important to ensure an adaptive DSA regime that spurs innovation and responds to shifting technologies, risks, and mitigation strategies. The need for iteration and flexibility, however, should not mean the audits fail to deliver on their potential as vehicles for transparency and accountability. The first round of independent audits of recommender systems reveals clear areas for immediate improvement. Because the core definitions and methodologies were developed independently by platforms and auditors, significant inconsistencies exist in both risk assessment and audit processes. ... The DSA requires the main parameters of recommender systems to be spelled out in plain and intelligible language. What does this concretely mean in the recommender system context? Is it free of “acronyms or complex/technical terminology” (Pinterest), “straightforward vocabulary and easy to perceive, understand, or interpret” (Snap), or “written for a general audience with varying technical skill levels, inclusive of all users” (TikTok)? There's a subtle difference in expectations associated with each framing. These terms don’t need to be defined in a vacuum.


Cybersecurity in retail: What does the future hold?

In the coming year, cybersecurity experts predict attackers will increasingly target Generative AI models used by retailers, creating significant potential for operational disruptions and data breaches. These AI systems, now critical to retail operations, are vulnerable to sophisticated attacks that could compromise customer service efficiency and expose critical business vulnerabilities. The core risk lies in the sophisticated ways attackers can exploit AI’s complex decision-making processes, turning what was once a technological advantage into a potential security liability. Retailers must recognise that their AI systems are not just technological tools, but potential entry points for cybercriminal activities. ... The complexity and distribution of digital ecosystems make them prime targets during high-demand periods. For example, as we have seen in the past, cyberattacks that hit supply chains can cause major delays and financial loss. These incidents underscore the vulnerabilities in supply chains during peak times of the year. In 2025, expect a rise in supply chain attacks during the holiday season, targeting ecommerce platforms and logistics providers, which could disrupt product availability and shipping.