
Daily Tech Digest - March 20, 2026


Quote for the day:

"Nothing so conclusively proves a man's ability to lead others as what he does from day to day to lead himself." -- Thomas J. Watson




Rethinking Cyber Preparedness in the Age of AI and Cyberwarfare

The article "Rethinking Cyber Preparedness in the Age of AI and Cyberwarfare" highlights a critical disconnect termed the "readiness paradox," where nearly 80% of IT leaders feel prepared for cyberwarfare despite over half of organizations suffering AI-driven attacks recently. According to Armis’s latest report, traditional defense mechanisms are failing against agentic AI, which nation-state actors now deploy for rapid reconnaissance and lateral movement. As autonomous agents begin weaponizing zero-day exploits faster than human researchers can categorize them, the attack surface has expanded to include overlooked assets like building management systems and IoT devices. The financial stakes are escalating, with average ransomware payouts reaching $11.6 million, often exceeding annual security budgets. To counter these sophisticated threats, the article emphasizes that organizations must achieve superior visibility into their internal environments and map every network asset. Furthermore, IT leaders should embrace AI-driven security policies rather than ineffective bans to combat the risks of "shadow AI" used by employees. Ultimately, true resilience depends on whether a company knows its own infrastructure better than its adversaries, transforming AI from a liability into a vital defensive tool for modern geopolitical threats.
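The recommendation above — map every network asset so you know your infrastructure better than your adversaries do — boils down to continuously diffing what you *observe* on the network against what you *manage*. A minimal sketch (device names are purely illustrative, not from the article):

```python
def find_unmanaged_assets(inventory: set, observed: set) -> set:
    """Flag devices seen on the network but absent from the managed
    asset inventory -- the overlooked building-management and IoT gear
    the article warns about."""
    return observed - inventory

# Hypothetical example: two shadow devices surface on a network scan.
inventory = {"srv-web-01", "srv-db-01", "laptop-42"}
observed = {"srv-web-01", "srv-db-01", "laptop-42", "hvac-ctrl-7", "cam-lobby-2"}
print(find_unmanaged_assets(inventory, observed))
```

In practice the `observed` set would come from passive network monitoring or an asset-discovery platform, but the core visibility check is exactly this set difference, run continuously.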


Are small language models finally having their moment?

The rapid ascent of Small Language Models (SLMs) marks a strategic shift in the artificial intelligence landscape, as enterprises seek to mitigate the immense costs and security risks associated with massive frontier models. Unlike their trillion-parameter counterparts, SLMs operate with significantly fewer parameters—ranging from millions to a few billion—allowing them to run locally on laptops or mobile devices without internet connectivity. This architectural efficiency ensures superior data privacy and regulatory compliance, particularly in sensitive sectors like healthcare, defense, and banking where proprietary data must remain on-premises. While Large Language Models (LLMs) excel at general synthesis and creative tasks, SLMs are increasingly preferred for specialized, rules-based functions such as code completion and document classification. Gartner even projects that by 2027, task-specific SLM usage will triple that of LLMs. Through techniques like knowledge distillation and pruning, these compact models offer a cost-effective, energy-efficient alternative that delivers high performance with minimal latency. Consequently, the industry is moving toward a hybrid ecosystem where SLMs handle secure, specialized operations while LLMs provide broader abstraction, proving that in the evolving world of enterprise AI, bigger is not always better for every specific business need.
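Knowledge distillation, mentioned above as one route to compact models, trains a small student model to match the temperature-softened output distribution of a large teacher. A toy sketch of the objective in plain Python (logit values are made up for illustration):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the
    distribution, exposing the teacher's 'dark knowledge' about
    near-miss classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened outputs.
    Minimizing this pushes the small student model toward the large
    teacher's output distribution for the same input."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, [2.0, 0.5, -1.0]))   # matched student: ~0
print(distillation_loss(teacher, [-1.0, 0.5, 2.0]))   # drifted student: > 0
```

Real distillation adds a cross-entropy term against ground-truth labels and runs over batches in a framework like PyTorch; this sketch only shows the distillation term itself.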


What it takes to level up your org’s AI maturity

To advance an organization's AI maturity, leaders must transition from merely "doing AI" to driving substantial business impact through an outcomes-based, AI-first strategy. According to experts Afshean Talasaz and Zar Toolan, this shift requires CIOs to adopt an "innovator-operator" mindset, balancing the need for rapid evolution with the stability required for consistent execution. Maturity is categorized into three levels, with the most advanced organizations enjoying a first-mover advantage led by CEO-backed agendas. A critical component of this journey is the "from-to so-that" modeling, which aligns data and AI initiatives with specific strategic outcomes like trust, business value, and reduced time to value. Winners in this space prioritize long-term infrastructure investments and rigorous data cleanup while securing short-term wins to demonstrate ROI. Furthermore, scaling AI successfully demands an intense focus on granular details rather than abstract concepts; without getting the technical and operational nuances right, true scale remains elusive. Ultimately, the transformation is a "team sport" requiring absolute alignment across the C-suite and a commitment to reducing internal volatility. By preparing thoroughly and maintaining consistent execution, organizations can move beyond operational tools to treat sovereign enterprise data as a powerful competitive moat.


The Power Ladder Architecture—A System For Turning Risk Work Into Decisions, Delivery And Proof

Maman Ibrahim’s article, "The Power Ladder Architecture," addresses the critical gap between identifying organizational risks and executing meaningful change. Ibrahim argues that risk management often fails not because of a lack of effort, but because it fails to convert analysis into "leadership work." Many teams present polished dashboards that provide a false sense of security while stalling when faced with difficult trade-offs. The Power Ladder is proposed as a solution, shifting the focus from mere reporting to three tangible outcomes: decisions, delivery, and proof. First, "decisions" require framing risks as binary choices for leadership, forcing clarity on trade-offs like speed versus security. Second, "delivery" ensures that once a choice is made, it is translated into structured tasks with clear ownership and deadlines. Finally, "proof" demands verifiable evidence that the risk profile has actually improved, rather than just being documented. By implementing this architecture, organizations can move beyond ceremonial risk management and establish a high-altitude system where audit concerns and cyber exposures are effectively neutralized. This approach transforms risk work into a powerful engine for operational resilience, ensuring that every identified vulnerability leads to a documented decision and a validated result.


The espionage reality: Your infrastructure is already in the collection path

Modern enterprises are increasingly caught in the "collection path" of global espionage, not necessarily as primary targets, but because they utilize the same centralized infrastructure as their adversaries. This shift highlights a structural exposure problem where shared dependencies—such as telecommunications, cloud services, and identity layers—become conduits for siphoning data and monitoring authentication. When national telecommunications providers are compromised, attackers can collect intelligence directly from the pathways an organization relies on, rendering traditional internal security measures insufficient. The article emphasizes that security leaders must move beyond internal asset protection to evaluate risk through the lens of upstream dependencies. Key recommendations include demanding integrity attestation from providers, reducing implicit trust in external networks, and hardening session layers to mitigate token theft and impersonation. Furthermore, the persistence of advanced persistent threats (APTs) within backbone infrastructure is now influencing the cyber insurance market, leading to higher premiums and stricter exclusions. Ultimately, organizations must integrate intelligence-driven assessments into their governance models, acknowledging that upstream compromise is a structural reality. To maintain resilience, CISOs must treat every external partner as an active component of their threat surface and design systems that degrade safely under inevitable compromise.


A direct approach to satellite communication

The article "A Direct Approach to Satellite Communication" on Data Center Dynamics explores the transformative shift in how satellite systems integrate with terrestrial network infrastructures. It highlights the evolution from traditional, isolated satellite setups toward a more "direct" and seamless integration within the broader data center and cloud ecosystem. The piece details how Low Earth Orbit (LEO) constellations and advancements in software-defined networking (SDN) are reducing latency and increasing bandwidth, making satellite links a viable, high-performance extension for enterprise networks rather than just a backup for remote locations. By treating space-based assets as reachable network nodes, providers can offer direct cloud connectivity, bypassing complex ground-station hops that previously hampered speed. This integration allows data centers to achieve greater resiliency and global reach, facilitating real-time data processing for edge computing and IoT applications in underserved regions. Ultimately, the analysis suggests that the convergence of space and ground infrastructure is turning satellite communication into a mainstream pillar of modern digital architecture, effectively "cloudifying" the final frontier to support the next generation of global, high-speed connectivity.


AI will accelerate tech job growth - former Tesla president explains where and why

In this ZDNet article, Jon McNeill, former Tesla president and current CEO of DVx Ventures, challenges the "tech job apocalypse" narrative by highlighting how artificial intelligence will actually accelerate employment in specific sectors. McNeill argues that the growing complexity of AI-driven ecosystems creates an intense demand for human expertise, particularly in infrastructure and networking. As organizations deploy massive server farms and sophisticated GPU clusters, the need for skilled professionals to manage, synchronize, and maintain these resilient networks becomes critical. While AI may handle basic coding and quality control, McNeill emphasizes that high-level architectural design remains a uniquely human domain, requiring "smart computer scientists" to navigate multi-layered model stacks. A core takeaway from his experience is the "automate last" principle, which suggests that businesses must first simplify and optimize their manual processes before introducing automation. By doing so, companies avoid the trap of embedding complexity into rigid code. Ultimately, McNeill urges technology professionals to move up the value chain, focusing on architectural innovation and process optimization, while cautioning against using expensive AI solutions where simpler, human-led methods are more effective and efficient for long-term growth.


Are You the Problem at Work? These 15 Questions Will Reveal the Truth.

In the Entrepreneur article "15 Questions That Reveal If You’re the Problem at Work," author Roy Dekel challenges leaders to look inward rather than blaming external factors for workplace issues like high turnover or low engagement. The piece argues that while many professionals prioritize strategic optimization, the true bottleneck is often a lack of emotional intelligence (EQ). To help leaders identify their blind spots, Dekel presents fifteen diagnostic questions that assess one’s "emotional wake." These include whether a team falls silent when the leader enters the room, how the leader reacts to bad news, and whether they value outcomes over effort. High EQ is framed as the foundation of psychological safety; leaders who possess it tend to listen more, apologize easily, and regulate their emotions under pressure, ultimately making their employees feel "bigger" rather than "smaller." By honestly answering these questions, managers can transition from being a source of tension to becoming a catalyst for trust and innovation. The article concludes that leadership is effectively the environment in which others must work, emphasizing that self-awareness is a learnable skill that can fundamentally transform organizational culture and employee satisfaction.


Aura breach and AI companion app flaws sharpen privacy fears

The recent security report highlighting widespread vulnerabilities in AI companion apps, coupled with a significant data exposure at identity protection firm Aura, has intensified global privacy concerns regarding the management of intimate user data. Aura recently confirmed that a targeted phishing attack on an employee allowed unauthorized access to approximately 900,000 records, including names and email addresses, though sensitive financial data remained secure. Simultaneously, research by Oversecured revealed that seventeen popular AI companion and dating simulator apps—boasting over 150 million installs—contain hundreds of critical and high-severity security flaws. These vulnerabilities, ranging from hardcoded cloud credentials to exploitable chat interfaces, potentially expose deeply personal information such as erotic chat histories, sexual orientation, and even suicidal thoughts. Despite the sensitivity of this data, the report emphasizes a regulatory "blind spot," noting that while authorities have addressed child safety and broad privacy disclosures, they have yet to enforce rigorous application-layer security standards. Together, these incidents underscore the growing risk of a digital era where companies frequently fail to protect the highly personal details they solicit from users. This convergence of corporate breaches and structural app flaws highlights an urgent need for stricter oversight and improved security architectures across the global network ecosystem.


The rise of the intelligent agent: Why human-in-the-loop is the future of AIOps

The article "The Rise of the Intelligent Agent: Why Human-in-the-Loop is the Future of AIOps" examines the transformative role of Agentic AI in IT operations through an interview with Srinivasa Raghavan S of ManageEngine. It argues that intelligent agents should amplify human expertise rather than replace it, specifically by automating repetitive tasks and filtering out telemetry noise to provide actionable insights. A central theme is the "human-in-the-loop" architecture, which integrates automation with strict policy guardrails, orchestration, and auditability to ensure engineers maintain control. These systems utilize machine learning for predictive anomaly detection and causal AI for rapid root-cause analysis, significantly decreasing mean time to resolution. By transitioning from reactive monitoring to self-driving observability, enterprises can better align technical health with business goals like customer experience and uptime SLAs. Although hybrid and multi-cloud environments introduce visibility challenges, unified observability platforms help manage this complexity. Ultimately, the article advocates for a phased adoption of autonomous remediation, building trust through transparent, guarded processes that combine machine speed with human oversight to navigate the intricacies of modern digital infrastructure effectively and safely.
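The human-in-the-loop guardrail pattern described above can be sketched as a simple policy gate: low-risk remediations execute automatically, anything above a risk threshold is held for explicit human approval, and every decision is recorded for auditability. Class and field names below are my own illustration, not ManageEngine's API:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    risk: float  # 0.0 (benign) .. 1.0 (destructive); hypothetical scoring

@dataclass
class GuardedRemediator:
    """Toy human-in-the-loop policy: auto-run low-risk remediations,
    hold anything above the threshold for human approval, and log
    every decision for audit."""
    risk_threshold: float = 0.5
    audit_log: list = field(default_factory=list)
    pending: list = field(default_factory=list)

    def propose(self, action: Action) -> str:
        if action.risk <= self.risk_threshold:
            self.audit_log.append((action.name, "auto-executed"))
            return "auto-executed"
        self.pending.append(action)
        self.audit_log.append((action.name, "awaiting human approval"))
        return "awaiting human approval"

    def approve(self, action: Action) -> str:
        self.pending.remove(action)
        self.audit_log.append((action.name, "human-approved, executed"))
        return "human-approved, executed"

bot = GuardedRemediator()
print(bot.propose(Action("restart stuck worker", risk=0.2)))   # auto-executed
reboot = Action("reboot production database", risk=0.9)
print(bot.propose(reboot))                                      # awaiting human approval
print(bot.approve(reboot))                                      # human-approved, executed
```

Phased adoption, as the article suggests, would amount to gradually raising `risk_threshold` as trust in the agent's track record (the audit log) accumulates.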

Daily Tech Digest - June 24, 2025


Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal


Why Agentic AI Is a Developer's New Ally, Not Adversary

Because agentic AI can complete complex workflows rather than simply generating content, it opens the door to a variety of AI-assisted use cases in software development that extend far beyond writing code — which, to date, has been the main way that software developers have leveraged AI. ... But agentic AI eliminates the need to spell out instructions or carry out manual actions entirely. With just a sentence or two, developers can prompt AI to perform complex, multi-step tasks. It's important to note that, for the most part, agentic AI use cases like those described above remain theoretical. Agentic AI remains a fairly new and quickly evolving field. The technology to do the sorts of things mentioned here theoretically exists, but existing tool sets for enabling specific agentic AI use cases are limited. ... It's also important to note that agentic AI poses new challenges for software developers. One is the risk that AI will make the wrong decisions. Like any LLM-based technology, AI agents can hallucinate, causing them to perform in undesirable ways. For this reason, it's tough to imagine entrusting high-stakes tasks to AI agents without requiring a human to supervise and validate them. Agentic AI also poses security risks. If agentic AI systems are compromised by threat actors, any tools or data that AI agents can access (such as source code) could also be exposed.


Modernizing Identity Security Beyond MFA

The next phase of identity security must focus on phishing-resistant authentication, seamless access, and decentralized identity management. The key principle guiding this transformation is phishing resistance by design. The adoption of FIDO2 and WebAuthn standards enables passwordless authentication using cryptographic key pairs. Because the private key never leaves the user’s device, attackers cannot intercept it. These methods eliminate the weakest link — human error — by ensuring that authentication remains secure even if users unknowingly interact with malicious links or phishing campaigns. ... By leveraging blockchain-based verified credentials — digitally signed, tamper-evident credentials issued by a trusted entity — wallets enable users to securely authenticate to multiple resources without exposing their personal data to third parties. These credentials can include identity proofs, such as government-issued IDs, employment verification, or certifications, which enable strong authentication. Using them for authentication reduces the risk of identity theft while improving privacy. Modern authentication must allow users to register once and reuse their credentials seamlessly across services. This concept reduces redundant onboarding processes and minimizes the need for multiple authentication methods.
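The phishing resistance described above comes from a challenge-response protocol in which the device signs a fresh server challenge *bound to the site's origin*. The sketch below shows only that protocol shape: real FIDO2/WebAuthn uses asymmetric key pairs where the private key never leaves the authenticator, and I substitute a stdlib HMAC as a symmetric stand-in, so this is a teaching toy, not the actual standard:

```python
import hashlib
import hmac
import secrets

# Toy stand-in for a WebAuthn authenticator. In real FIDO2 the device
# holds a private key and the server holds only the public key; here a
# shared HMAC key stands in just to show the challenge/origin binding.
device_key = secrets.token_bytes(32)  # stays on the user's device

def sign(origin: str, challenge: bytes) -> bytes:
    # The authenticator signs the challenge together with the origin,
    # so a response minted for a look-alike phishing domain can never
    # verify against the real site.
    return hmac.new(device_key, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(expected_origin: str, challenge: bytes, signature: bytes) -> bool:
    expected = hmac.new(device_key, expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = secrets.token_bytes(16)  # fresh per login attempt, defeats replay
good = sign("https://bank.example", challenge)
phished = sign("https://bank-example.evil", challenge)
print(server_verify("https://bank.example", challenge, good))     # True
print(server_verify("https://bank.example", challenge, phished))  # False
```

Because the origin is baked into the signed message, a user who is lured to a phishing site produces a response the legitimate server rejects — which is why these schemes survive even when the user is fooled.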


The Pros and Cons of Becoming a Government CIO

Seeking a job as a government CIO offers a chance to make a real impact on the lives of citizens, says Aparna Achanta, security architect and leader at IBM Consulting -- Federal. CIOs typically lead a wide range of projects, such as upgrading systems in education, public safety, healthcare, and other areas that provide critical public services. "They [government CIOs] work on large-scale projects that benefit communities beyond profits, which can be very rewarding and impactful," Achanta observed in an online interview. "The job also gives you an opportunity for leadership growth and the chance to work with a wide range of departments and people." ... "Being a government CIO might mean dealing with slow processes and bureaucracy," Achanta says. "Most of the time, decisions take longer because they have to go through several layers of approval, which can delay projects." Government CIOs face unique challenges, including budget constraints, a constantly evolving mission, and increased scrutiny from government leaders and the public. "Public servants must be adept at change management in order to be able to pivot and implement the priorities of their administration to the best of their ability," Tamburrino says. Government CIOs are often frustrated by a hierarchy that runs at a far slower pace than their enterprise counterparts.


Why work-life balance in cybersecurity must start with executive support

Looking after your mental and physical health is critical. Setting boundaries helps the entire team, not just the cyber leader. One rule we have in my team is that we do not use work chat after business hours unless there are critical events. Everyone needs a break and sometimes hearing a text or chat notification can create undue stress. Another critical aspect of being a cybersecurity professional is to hold to your integrity. People often do not like the fact that we have to monitor, report, and investigate systems and human behavior. When we get pushback for this with unprofessional behavior or defensiveness, it can often cause great personal stress. ... Executive leadership plays one of the most critical roles in supporting the CISO. Without executive level support, we would be crushed by the demands and the frequent conflicts of interest we experience. For example, project managers, CIOs, and other IT leadership roles might prioritize budget, cost, timelines, or other needs above security. A security professional prioritizes people (safety) and security above cost or timelines. The nature of our roles requires executive leadership support to balance the security and privacy risk (and what is acceptable to an executive). I think in several instances the executive board and CEOs understand this, but we are still a growing profession and there needs to be more education in this area.


Building Trust in Synthetic Media Through Responsible AI Governance

Relying solely on labeling tools faces multiple operational challenges. First, labeling tools often lack accuracy. This creates a paradox: inaccurate labels may legitimize harmful media, while unlabeled content may appear trustworthy. Moreover, users may not view basic AI edits, such as color correction, as manipulation, while opinions differ on changes like facial adjustments or filters. It remains unclear whether simple color changes require a label, or if labeling should only occur when media is substantively altered or generated using AI. Similarly, many synthetic media artifacts may not fit the standard definition of pornography, such as images showing white substances on a person’s face; however, they can often be humiliating. ... Second, synthetic media use cases exist on a spectrum, and the presence of mixed AI- and human-generated content adds complexity and uncertainty in moderation strategies. For example, when moderating human-generated media, social media platforms only need to identify and remove harmful material. In the case of synthetic media, it is often necessary to first determine whether the content is AI-generated and then assess its potential harm. This added complexity may lead platforms to adopt overly cautious approaches to avoid liability. These challenges can undermine the effectiveness of labeling.


How future-ready leadership can power business value

Leadership in 2025 requires more than expertise; it demands adaptability, compassion, and tech fluency. “Leadership today isn’t about having all the answers; it’s about creating an environment where teams can sense, interpret, and act with speed, autonomy, and purpose,” said Govind. As the learning journey of Conduent pivots from stabilization to growth, he shared that leaders need to do two key things in the current scenario: be human-centric and be digitally fluent. Similarly, Srilatha highlighted a fundamental shift happening among leaders: “Leaders today must lead with both compassion and courage while taking tough decisions with kindness.” She also underlined the rising importance of the three Rs in modern leadership: Reskilling, resilience, and rethinking. ... Govind pointed to something deceptively simple: acting on feedback. “We didn’t just collect feedback, we analyzed sentiment, made changes, and closed the loop. That made stakeholders feel heard.” This approach led Conduent to experiment with program duration, where they went from 12 to 8 to 6 months. “Learning is a continuum, not a one-off event,” Govind added. ... Leadership development is no longer optional or one-size-fits-all. It’s a business imperative—designed around human needs and powered by digital fluency.


The CISO’s 5-step guide to securing AI operations

As AI applications extend to third parties, CISOs will need tailored audits of third-party data, AI security controls, supply chain security, and so on. Security leaders must also pay attention to emerging and often changing AI regulations. The EU AI Act is the most comprehensive to date, emphasizing safety, transparency, non-discrimination, and environmental friendliness. Others, such as the Colorado Artificial Intelligence Act (CAIA), may change rapidly as consumer reaction, enterprise experience, and legal case law evolve. CISOs should anticipate other state, federal, regional, and industry regulations. ... Established secure software development lifecycles should be amended to cover things such as AI threat modeling, data handling, API security, etc. ... End user training should include acceptable use, data handling, misinformation, and deepfake training. Human risk management (HRM) solutions from vendors such as Mimecast may be necessary to keep up with AI threats and customize training to different individuals and roles. ... Simultaneously, security leaders should schedule roadmap meetings with leading security technology partners. Come to these meetings prepared to discuss specific needs rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should also ask vendors directly about how AI will be used for existing technology tuning and optimization.


State of Open Source Report Reveals Low Confidence in Big Data Management

"Many organizations know what data they are looking for and how they want to process it but lack the in-house expertise to manage the platform itself," said Matthew Weier O'Phinney, Principal Product Manager at Perforce OpenLogic. "This leads to some moving to commercial Big Data solutions, but those that can't afford that option may be forced to rely on less-experienced engineers. In which case, issues with data privacy, inability to scale, and cost overruns could materialize." ... The EOL operating system CentOS Linux showed surprisingly high usage, with 40% of large enterprises still using it in production. While CentOS usage declined in Europe and North America in the past year, it is still the third most used Linux distribution overall (behind Ubuntu and Debian), and the top distribution in Asia. For teams deploying EOL CentOS, 83% cited security and compliance as their biggest concern around their deployments. ... "Open source is the engine driving innovation in Big Data, AI, and beyond—but adoption alone isn't enough," said Gael Blondelle, Chief Membership Officer of the Eclipse Foundation. "To unlock its full potential, organizations need to invest in their people, establish the right processes, and actively contribute to the long-term sustainability and growth of the technologies they depend on."


Cybercrime goes corporate: A trillion-dollar industry undermining global security

The CaaS market is a booming economy in the shadows, driving annual revenues into billions. While precise figures are elusive due to its illicit nature, reports suggest it's a substantial and growing market. CaaS contributes significantly, and the broader cybersecurity services market is projected to reach hundreds of billions of dollars in the coming years. If measured as a country, cybercrime would already be the world's third-largest economy, with projected annual damages reaching USD 10.5 trillion by 2025, according to Cybersecurity Ventures. This growth is fueled by the same principles that drive legitimate businesses: specialisation, efficiency, and accessibility. CaaS platforms function much like dark online marketplaces. They offer pre-made hacking kits, phishing templates, and even access to already compromised computer networks. These services significantly lower the entry barrier for aspiring criminals. ... Enterprises must recognise that attackers often hit multiple systems simultaneously—computers, user identities, and cloud environments. This creates significant "noise" if security tools operate in isolation. Relying on many disparate security products makes it difficult to gain a holistic view and understand that seemingly separate incidents are often part of a single, coordinated attack.


Modern apps broke observability. Here’s how we fix it.

For developers, figuring out where things went wrong is difficult. In a survey looking at the biggest challenges to observability, 58% of developers said that identifying blind spots is a top concern. Stack traces may help, but they rarely provide enough context to diagnose issues quickly; developers chase down screenshots, reproduce problems, and piece together clues manually using the metric and log data from APM tools; a bug that could take 30 minutes to fix ends up consuming days or weeks. Meanwhile, telemetry data accumulates in massive volumes—expensive to store and hard to interpret. Without tools to turn data into insight, teams are left with three problems: high bills, burnout, and time wasted fixing bugs that have little impact on core business functions or revenue, even as increasing developer efficiency is a top strategic goal at many organizations. ... More than anything, we need a cultural change. Observability must be built into products from the start. That means thinking early about how we’ll track adoption, usage, and outcomes—not just deliver features. Too often, teams ship functionality only to find no one is using it. Observability should show whether users ever saw the feature, where they dropped off, or what got in the way. That kind of visibility doesn’t come from backend logs alone.
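The product-level visibility described above — did users ever see a feature, and where did they drop off — amounts to emitting structured events from the client and aggregating them into a funnel. A minimal sketch (class, feature, and step names are illustrative, not from any particular APM tool):

```python
from collections import Counter

class FeatureFunnel:
    """Minimal sketch: record structured product events so teams can
    answer 'did users ever see the feature, and where did they drop
    off?' rather than reconstructing behavior from backend logs."""

    def __init__(self):
        self.events = []

    def track(self, user: str, feature: str, step: str):
        # step is one of "seen", "started", "completed" (hypothetical
        # funnel stages; real instrumentation would also carry a
        # timestamp and session context).
        self.events.append((user, feature, step))

    def funnel(self, feature: str) -> dict:
        counts = Counter(step for _, f, step in self.events if f == feature)
        return {s: counts.get(s, 0) for s in ("seen", "started", "completed")}

t = FeatureFunnel()
t.track("alice", "export", "seen")
t.track("alice", "export", "started")
t.track("bob", "export", "seen")  # bob saw the feature but never started it
print(t.funnel("export"))  # {'seen': 2, 'started': 1, 'completed': 0}
```

The gap between adjacent counts is the drop-off the article argues teams should be watching from day one, instead of discovering after launch that nobody used the feature.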

Daily Tech Digest - June 23, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The 10 biggest issues IT faces today

“The AI explosion and how quickly it has come upon us is the top issue for me,” says Mark Sherwood, executive vice president and CIO of Wolters Kluwer, a global professional services and software firm. “In my experience, AI has changed and progressed faster than anything I’ve ever seen.” To keep up with that rapid evolution, Sherwood says he is focused on making innovation part of everyday work for his engineering team. ... “Modern digital platforms generate staggering volumes of telemetry, logs, and metrics across an increasingly complex and distributed architecture. Without intelligent systems, IT teams drown in alert fatigue or miss critical signals amid the noise,” he explains. “What was once a manageable rules-based monitoring challenge has evolved into a big data and machine learning problem.” He continues, saying, “This shift requires IT organizations to rethink how they ingest, manage, and act upon operational data. It’s not just about observability; it’s about interpretability and actionability at scale. ... CIOs today are also paying closer attention to geopolitical news and determining what it means for them, their IT departments, and their organizations. “These are uncertain times geopolitically, and CIOs are asking how that will affect IT portfolios and budgets and initiatives,” Squeo says.


Clouded judgement: Resilience, risk and the rise of repatriation

While the findings reflect growing concern, they also highlight a strategic shift, with 78% of leaders now considering digital sovereignty when selecting tech partners, and 68% saying they will only adopt AI services where they have full certainty over data ownership. For some, the answer is to take back control. Cloud repatriation is gaining some traction, at least in terms of mindset, but as yet, this is not translating into a mass exodus from the hyperscalers. And yet, calls for digital sovereignty are getting louder. In Europe, the Euro-Stack open letter has reignited the debate, urging policymakers to champion a competitive, sovereign digital infrastructure. But while politics might be a trigger, the key question is not whether businesses are abandoning cloud (most aren’t) but whether the balance of cloud usage is changing, driven as much by cost as performance needs and rising regulatory risks. ... “Despite access to cloud cost-optimisation teams, there was limited room to reduce expenses,” says Jonny Huxtable, CEO of LinkPool. After assessing bare-metal and colocation options, LinkPool decided to move fully to Pulsant’s colocation service. The company claims the move achieved a 90% to 95% cost reduction alongside major performance improvements and enhanced disaster recovery capabilities.


Cookie management under the Digital Personal Data Protection Act, 2023

Effective cookie management under the DPDP Act, as detailed in the BRDCMS, requires real-time updates to user preferences. Users must have access to a dedicated cookie preferences interface that allows them to modify or revoke their consent without undue complexity or delay. This interface should be easily accessible, typically through privacy settings or a dedicated cookie management dashboard. The real-time nature of these updates is crucial for maintaining compliance with the principles of consent as enshrined under the DPDP Act. When a user withdraws consent for specific cookie categories, the system must immediately cease the collection and processing of data through those cookies, ensuring that the user’s privacy preferences are respected without delay. Transparency is one of the fundamental pillars of the DPDP Act and extends to cookie usage disclosure. While the DPDP Act itself remains silent on specific cookie policies, the BRDCMS mandates clear and accessible cookie policies that outline the purposes of cookie usage, data-sharing practices, and the implications of different consent choices. The cookie policy serves as a comprehensive resource enabling users to make informed decisions about their consent preferences.
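The "withdrawal takes effect immediately" requirement can be made concrete with a small sketch. This is an illustrative model only — the class, category names, and method names are assumptions, not anything specified by the BRDCMS:

```python
from datetime import datetime, timezone

class CookieConsentManager:
    """Illustrative per-category consent store with immediate revocation."""

    CATEGORIES = {"strictly_necessary", "functional", "analytics", "advertising"}

    def __init__(self):
        # Strictly necessary cookies do not require consent.
        self.granted = {"strictly_necessary"}
        self.audit_log = []  # timestamped record of every consent change

    def update(self, category: str, consent: bool) -> None:
        if category not in self.CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        if category == "strictly_necessary":
            return  # always on, cannot be toggled
        if consent:
            self.granted.add(category)
        else:
            self.granted.discard(category)  # withdrawal takes effect at once
        self.audit_log.append((datetime.now(timezone.utc), category, consent))

    def may_set(self, category: str) -> bool:
        """Gate called before any cookie in this category is written or read."""
        return category in self.granted

mgr = CookieConsentManager()
mgr.update("analytics", True)
assert mgr.may_set("analytics")
mgr.update("analytics", False)      # user withdraws consent...
assert not mgr.may_set("analytics") # ...and processing must stop immediately
```

The key design point is that every cookie read/write path consults `may_set` at the moment of use, so a revocation needs no propagation delay.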


AI agents win over professionals - but only to do their grunt work, Stanford study finds

According to the report, the majority of workers are ready to embrace agents for the automation of low-stakes and repetitive tasks, "even after reflecting on potential job loss concerns and work enjoyment." Respondents said they hoped to focus on more engaging and important tasks, mirroring what's become something of a marketing mantra among big tech companies pushing AI agents: that these systems will free workers and businesses from drudgery, so they can focus on more meaningful work. The authors also noted "critical mismatches" between the tasks that AI agents are being deployed to handle -- such as software development and business analysis -- and the tasks that workers are actually looking to automate. ... The study could have big implications for the future of human-AI collaboration in the workplace. Using a metric that they call the Human Agency Scale (HAS), the authors found "that workers generally prefer higher levels of human agency than what experts deem technologically necessary." ... The report further showed that the rise of AI automation is causing a shift in the human skills that are most valued in the workplace: information-processing and analysis skills, the authors said, are becoming less valuable as machines become increasingly competent in these domains, while interpersonal skills -- including "assisting and caring for others" -- are more important than ever.


New OLTP: Postgres With Separate Compute and Storage

The traditional methods for integrating databases are complex and not suited to AI, Xin said. The challenge lies in integrating analytics and AI with transactional workloads. Consider what developers would do when adding a feature to a code base, Xin said in his keynote address at the Data + AI Summit. They’d create a new branch of the codebase and make changes to the new branch. They’d use that branch to check bugs, perform testing and so on. Xin said creating a new branch is an instant operation. What’s the equivalent for databases? The only option is to clone your production database, which might take days. How do you set up secure networking? How do you create ETL pipelines and log data from one to another? ... Streaming is now a first-class citizen in the enterprise, Mohan told me. The separation of compute and storage makes a difference. We are approaching an era when applications will scale infinitely, both in terms of the number of instances and their scale-out capabilities. And that leads us to new questions about how we start to think about evaluation, observability and semantics. Accuracy matters. ... ADP may have the world’s best payroll data, Mohan said, but then that data has to be processed through ETL into an analytics solution like Databricks. Then comes the analytics and the data science work. The customer has to perform a significant amount of data engineering work and preparation.
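The "instant branch" idea Xin describes is typically achieved with copy-on-write: a branch starts empty and only stores what diverges from its parent, so creating it costs nothing regardless of data size. This toy sketch is my own illustration of the mechanism, not how Databricks or Postgres actually implement it:

```python
class Branch:
    """Toy copy-on-write branch: reads fall through to the parent,
    writes stay local, so creating a branch is O(1) regardless of data size."""

    def __init__(self, parent=None):
        self.parent = parent
        self.local = {}       # only rows changed on this branch
        self.deleted = set()  # rows removed on this branch

    def get(self, key):
        if key in self.deleted:
            return None
        if key in self.local:
            return self.local[key]
        return self.parent.get(key) if self.parent else None

    def put(self, key, value):
        self.deleted.discard(key)
        self.local[key] = value

    def branch(self):
        return Branch(parent=self)  # instant: no data is copied

prod = Branch()
prod.put("user:1", {"name": "Ada"})
dev = prod.branch()                  # "clone production" in constant time
dev.put("user:1", {"name": "Ada Lovelace"})
assert prod.get("user:1")["name"] == "Ada"          # production unchanged
assert dev.get("user:1")["name"] == "Ada Lovelace"  # branch sees its own write
```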


Can AI Save Us from AI? The High-Stakes Race in Cybersecurity

Reluctant executives and budget hawks can shoulder some of the responsibility for slow AI adoption, but they’re hardly the only barriers. Increasingly, employees are voicing legitimate concerns about surveillance, privacy and the long-term impact of automation on job security. At the same time, enterprises may face structural issues when it comes to integration: fragmented systems, a lack of data inventory and access controls, and other legacy architectures can also hinder the secure integration and scalability of AI-driven security solutions. Meanwhile, bad actors face none of these considerations. They have immediate, unfettered access to open-source AI tools, which can enhance the speed and force of an attack. They operate without AI tool guardrails, governance, oversight or ethical constraints. ... Insider threat detection is also maturing. AI models can detect suspicious behavior, such as unusual access to data, privilege changes or timing inconsistencies, that may indicate a compromised account or insider threat. Early adopters, such as financial institutions, are using behavioral AI to flag synthetic identities by spotting subtle deviations that traditional tools often miss. They can also monitor behavioral intent signals, such as a worker researching resignation policies before initiating mass file downloads, providing early warnings of potential data exfiltration.
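The "timing inconsistencies" signal mentioned above reduces, in its simplest form, to comparing an event against a user's statistical baseline. This is a deliberately simplified illustration (a z-score on hour-of-day), not any vendor's behavioral model:

```python
from statistics import mean, stdev

def is_anomalous(history_hours, event_hour, threshold=3.0):
    """Flag an access event whose hour-of-day deviates strongly
    from the user's historical baseline (simple z-score test)."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return event_hour != mu
    return abs(event_hour - mu) / sigma > threshold

# A user who normally logs in during business hours...
baseline = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]
assert not is_anomalous(baseline, 10)  # typical login hour: not flagged
assert is_anomalous(baseline, 3)       # 3 a.m. access: flagged for review
```

Production systems model many features at once (resources touched, data volume, privilege changes) rather than a single variable, but the baseline-and-deviation principle is the same.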


The complexities of satellite compute

“In cellular communications on the ground, this was solved a few decades ago. But doing it in space, you have to have the computing horsepower to do those handoffs as well as the throughput capability.” This additional compute needs to be in "a radiation tolerant form, and in such a way that they don't consume too much power and generate too much heat to cause massive thermal problems on the satellites." In LEO, satellites face a barrage of radiation. "It's an environment that's very rich in protons," O'Neill says. "And protons can cause upsets in configuration registers, they can even cause latch-ups in certain integrated circuits." The need to be more radiation tolerant has also pushed the industry towards newer hardware as, the smaller the process node, the lower the operating voltage. "Reducing operating voltage makes you less susceptible to destructive effects," O'Neill explains. One issue, a single-event latch-up, causes the satellite to conduct a large current from power to ground through the integrated circuit, potentially frying it. ... Modern integrated circuits are a lot less susceptible to these single-event latch-ups, but are not completely immune. "While the core of the circuit may be operating at a very low voltage, 0.7 or 0.8 volts, you still have I/O circuits in the integrated circuit that may be required to interoperate with other ICs at 3.3 volts or 2.5 volts," O'Neill adds.


How CISOs can justify security investments in financial terms

A common challenge we see is the absence of a formal ERM program, or the fragmentation of risk functions, where enterprise, cybersecurity, and third-party risks are evaluated using different impact criteria. This lack of alignment makes it difficult for CISOs to communicate effectively with the C-suite and board. Standardizing risk programs and using consistent impact criteria enables clearer risk comparisons, shared understanding, and more strategic decision-making. This challenge is further exacerbated by the rise of AI-specific regulations and frameworks, including the NIST AI Risk Management Framework, the EU AI Act, the NYC Bias Audit Law, and the Colorado Artificial Intelligence Act. ... Communicating security investments in clear, business-aligned risk terms—such as High, Medium, or Low—using agreed-upon impact criteria like financial exposure, operational disruption, reputational harm, and customer impact makes it significantly easier to justify spending and align with enterprise priorities. ... In our Virtual CISO engagements, we’ve found that a risk-based, outcome-driven approach is highly effective with executive leadership. We frame cyber risk tolerance in financial and operational terms, quantify the business value of proposed investments, and tie security initiatives directly to strategic objectives. 
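One widely used way to express cyber risk in the financial terms the article calls for is annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence). The figures below are purely illustrative, not drawn from the article:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    sle = asset_value * exposure_factor
    return sle * annual_rate

# Hypothetical ransomware scenario: $10M asset, 40% impact per incident,
# expected once every four years (ARO = 0.25).
ale_before = annualized_loss_expectancy(10_000_000, 0.40, 0.25)    # $1,000,000/yr
# A $200k/yr control that halves the likelihood (ARO = 0.125):
ale_after = annualized_loss_expectancy(10_000_000, 0.40, 0.125)    # $500,000/yr
net_benefit = (ale_before - ale_after) - 200_000                   # $300,000/yr
assert ale_before == 1_000_000
assert net_benefit == 300_000
```

Framed this way, a security investment becomes a comparison the board already knows how to make: risk reduction in dollars per year versus control cost per year.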


From fear to fluency: Why empathy is the missing ingredient in AI rollouts

In the past, teams had time to adapt to new technologies. Operating systems or enterprise resource planning (ERP) tools evolved over years, giving users more room to learn these platforms and acquire the skills to use them. Unlike previous tech shifts, this one with AI doesn’t come with a long runway. Change arrives overnight, and expectations follow just as fast. Many employees feel like they’re being asked to keep pace with systems they haven’t had time to learn, let alone trust. A recent example would be ChatGPT reaching 100 million monthly active users just two months after launch. ... This underlines the emotional and behavioral complexity of adoption. Some people are naturally curious and quick to experiment with new technology while others are skeptical, risk-averse or anxious about job security. ... Adopting AI is not just a technical initiative, it’s a cultural reset, one that challenges leaders to show up with more empathy and not just expertise. Success depends on how well leaders can inspire trust and empathy across their organizations. The 4 E’s of adoption offer more than a framework. They reflect a leadership mindset rooted in inclusion, clarity and care. By embedding empathy into structure and using metrics to illuminate progress rather than pressure outcomes, teams become more adaptable and resilient.


Why networks need AIOps and predictive analytics

Predictive Analytics – a key capability of AIOps – forecasts future network performance and problems, enabling early intervention and proactive maintenance. Further, early prediction of bottlenecks or additional requirements helps to optimise the management of network resources. For example, when organisations have advance warning about traffic surges, they can allocate capacity to prevent congestion and outages, and enhance overall network performance. A range of mundane tasks, from incident response to work order generation to network configuration to proactive IT health checks and maintenance scheduling, can be automated with AIOps to reduce the load on IT staff and free them up to concentrate on more strategic activities. ... When traditional monitoring tools were unable to identify bottlenecks in a healthcare provider’s network that was seeing a slowdown in its electronic health records (EHR) system during busy hours, a switch to AIOps resolved the problem. By enabling observability across domains, the system highlighted that performance dipped when users logged in during shift changes. It also predicted slowdowns half an hour in advance and automatically provisioned additional resources to handle the surge in activity. The result was a 70 percent reduction in the most important EHR slowdowns, improvement in system responsiveness, and freeing up of IT human resources.
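The predict-then-provision loop in the EHR example can be sketched with a naive trend forecast; the numbers, threshold, and function names here are illustrative assumptions, not the healthcare provider's actual system:

```python
def forecast_next(window):
    """Naive trend forecast: last value plus the average recent delta."""
    deltas = [b - a for a, b in zip(window, window[1:])]
    return window[-1] + sum(deltas) / len(deltas)

def plan_capacity(load_history, capacity, headroom=0.8):
    """Provision extra resources *before* forecast load crosses headroom."""
    predicted = forecast_next(load_history)
    if predicted > capacity * headroom:
        return "scale_up", predicted
    return "steady", predicted

# Login surge building ahead of a shift change (requests/sec samples):
history = [100, 130, 165, 205, 250]
action, predicted = plan_capacity(history, capacity=300)
assert action == "scale_up"   # predicted ~287 req/s exceeds 80% of capacity
```

Real AIOps platforms use seasonal models trained on weeks of telemetry rather than a linear extrapolation, but the principle — act on the forecast, not the outage — is the same.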

Daily Tech Digest - May 24, 2025


Quote for the day:

“In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it.” -- Jane Smiley



DanaBot botnet disrupted, QakBot leader indicted

Operation Endgame relies on help from a number of private sector cybersecurity companies (Sekoia, Zscaler, CrowdStrike, Proofpoint, Fox-IT, ESET, and others), non-profits such as Shadowserver and white-hat groups like Cryptolaemus. “The takedown of DanaBot represents a significant blow not just to an eCrime operation but to a cyber capability that has appeared to align Russian government interests. The case (…) highlights why we must view certain Russian eCrime groups through a political lens — as extensions of state power rather than mere criminal enterprises,” CrowdStrike said of the DanaBot disruption. ... “We’ve previously seen disruptions have significant impacts on the threat landscape. For example, after last year’s Operation Endgame disruption, the initial access malware associated with the disruption as well as actors who used the malware largely disappeared from the email threat landscape,” Selena Larson, Staff Threat Researcher at Proofpoint, told Help Net Security. “Cybercriminal disruptions and law enforcement actions not only impair malware functionality and use but also impose cost to threat actors by forcing them to change their tactics, cause mistrust in the criminal ecosystem, and potentially make criminals think about finding a different career.”


AI in Cybersecurity: Protecting Against Evolving Digital Threats

Beyond detecting threats, AI excels at automating repetitive security tasks. Tasks like patching vulnerabilities, filtering malicious traffic, and conducting compliance checks can be time-consuming. AI’s speed and precision in handling these tasks free up cybersecurity professionals to focus on complex problem-solving. ... The integration of AI into cybersecurity raises ethical questions that must be addressed. Privacy concerns are at the forefront, as AI systems often rely on extensive data collection. This creates potential risks for mishandling or misuse of sensitive information. Additionally, AI’s capabilities for surveillance can lead to overreach. Governments and corporations may deploy AI tools for monitoring activities under the guise of security, potentially infringing on individual rights. There is also the risk of malicious actors repurposing legitimate AI tools for nefarious purposes. Clear guidelines and robust governance are crucial to ensuring responsible AI deployment in cybersecurity. ... The growing role of AI in cybersecurity necessitates strong regulatory frameworks. Governments and organizations are working to establish policies that address AI’s ethical and operational challenges in this field. Transparency in AI decision-making processes and standardized best practices are among the key priorities.


Open MPIC project defends against BGP attacks on certificate validation

MPIC is a method to enhance the security of certificate issuance by validating domain ownership and CA checks from multiple network vantage points. It helps prevent BGP hijacking by ensuring that validation checks return consistent results from different geographical locations. The goal is to make it more difficult for threat actors to compromise certificate issuance by redirecting internet routes. ... Open MPIC operates through a parallel validation architecture that maximizes efficiency while maintaining security. When a domain validation check is initiated, the framework simultaneously queries all configured perspectives and collects their results. “If you have 10 perspectives, then it basically asks all 10 perspectives at the same time, and then it will collect the results and determine the quorum and give you a thumbs up or thumbs down,” Sharkov said. This approach introduces some unavoidable latency, but the implementation minimizes performance impact through parallelization. Sharkov noted that the latency is still just a fraction of a second. ... The open source nature of the project addresses a significant challenge for the industry. While large certificate authorities often have the resources to build their own solutions, many smaller CAs would struggle with the technical and infrastructure requirements of multi-perspective validation.
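The parallel fan-out and quorum logic Sharkov describes can be sketched as follows. The `check` callable is a stand-in for a real domain-validation probe (DNS lookup or HTTP challenge from a vantage point), and the quorum rule is illustrative — this is not Open MPIC's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def validate_from_perspectives(perspectives, check, quorum=0.8):
    """Query all vantage points in parallel; pass only if a quorum agrees.

    Parallelization keeps added latency near the slowest single probe
    rather than the sum of all probes."""
    with ThreadPoolExecutor(max_workers=len(perspectives)) as pool:
        results = list(pool.map(check, perspectives))
    return sum(results) / len(perspectives) >= quorum

# Stand-in probe: pretend one vantage point's route has been BGP-hijacked
# and sees attacker-controlled DNS, so its validation disagrees.
def fake_check(perspective):
    return perspective != "hijacked-route"

ok = validate_from_perspectives(
    ["us-east", "eu-west", "ap-south", "sa-east", "hijacked-route"], fake_check)
assert ok  # 4 of 5 perspectives agree, which meets the 0.8 quorum
```

The security property comes from geography: an attacker who hijacks routes near one vantage point cannot easily poison the view from all of them at once.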


How to Close the Gap Between Potential and Reality in Tech Implementation

First, there has to be alignment between the business and tech sides. So, I’ve seen in many institutions that there’s not complete alignment between both. And where they could be starting, they sometimes separate and they go in opposite directions. Because at the end of the day, let’s face it, we’re all looking at how it will help ourselves. Secondly, it’s just the planning, ensuring that you check all the boxes and have a strong implementation plan. One recent customer who just joined Backbase: One of the things I loved about what they brought to the kickoff call was what success looked like to them for implementation. So, they had the work stream, whether the core integration, the call center, their data strategy, or their security requirements. Then, they had the leader who was the overall owner and then they had the other owners of each work stream. Then, they defined success criteria with the KPIs associated with those success criteria. ... Many folks forget that they are, most of the time, still running on a legacy platform. So, for me, success is when they decommission that legacy platform and a hundred percent of their members or customers are on Backbase. That’s one of the very important internal KPIs.


How AIOps sharpens cybersecurity posture in the age of cyber threats

The good news is, AIOps platforms are built to scale with complexity, adapting to new environments, users, and risks as they develop. And organizations can feel reassured that their digital vulnerabilities are safeguarded for the long term. For example, modern methods of attack, such as hyperjacking, can be identified and mitigated with AIOps. This form of attack in cloud security is where a threat actor gains control of the hypervisor – the software that manages virtual machines on a physical server. It allows them to then take over the virtual machines running on that hypervisor. What makes hyperjacking especially dangerous is that it operates beneath the guest operating systems, effectively evading traditional monitoring tools that rely on visibility within the virtual machines. As a result, systems lacking deep observability are the most vulnerable. This makes the advanced observability capabilities of AIOps essential for detecting and responding to such stealthy threats. Naturally, this evolving scope of digital malice also requires compliance rules to be frequently reviewed. When correctly configured, AIOps can support organizations by interpreting the latest guidelines and swiftly identifying the data deviations that would otherwise incur penalties.


Johnson & Johnson Taps AI to Advance Surgery, Drug Discovery

J&J's Medical Engagement AI redefines care delivery, identifying 75,000 U.S. patients with unmet needs across seven disease areas, including oncology. Its analytics engine processes electronic health records and clinical guidelines to highlight patients missing optimal treatments. A New York oncologist, using J&J's insights, adjusted treatment for 20 patients in 2024, improving the chances of survival. The platform engages over 5,000 providers, empowering medical science liaisons with real-time data. It helps the AI innovation team turn overwhelming data into an advantage. Transparent data practices and a focus on patient outcomes align with J&J's ethical standards, making this a model that bridges tech and care. ... J&J's AI strategy rests on five ethical pillars, including fairness, privacy, security, responsibility and transparency. It aims to deliver AI solutions that benefit all stakeholders equitably. The stakeholders and users understand the methods through which datasets are collected and how external influences, such as biases, may affect them. Bias is mitigated through annual data audits, privacy is upheld with encrypted storage and consent protocols, and on top of it is AI-driven cybersecurity monitoring. A training program, launched in 2024, equipped 10,000 employees to handle sensitive data. 


Surveillance tech outgrows face ID

Many oppose facial recognition technology because it jeopardizes privacy, civil liberties, and personal security. It enables constant surveillance and raises the specter of a dystopian future in which people feel afraid to exercise free speech. Another issue is that one’s face can’t be changed like a password can, so if face-recognition data is stolen or sold on the Dark Web, there’s little anyone can do about the resulting identity theft and other harms. ... You can be identified by your gait (how you walk). And surveillance cameras now use AI-powered video analytics to track behavior, not just faces. They can follow you based on your clothing, the bag you carry, and your movement patterns, stitching together your path across a city or a stadium without ever needing a clear shot of your face. The truth is that face recognition is just the most visible part of a much larger system of surveillance. When public concern about face recognition causes bans or restrictions, governments, companies, and other organizations simply circumvent that concern by deploying other technologies from a large and growing menu of options. Whether we’re IT professionals, law enforcement technologists, security specialists, or privacy advocates, it’s important to incorporate the new identification technologies into our thinking, and face the new reality that face recognition is just one technology among many.


How Ready Is NTN To Go To Scale?

Non-Terrestrial Networks (NTNs) represent a pivotal advancement in global communications, designed to extend connectivity far beyond the limits of ground-based infrastructure. By leveraging spaceborne and airborne assets—such as Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary (GEO) satellites, as well as High-Altitude Platform Stations (HAPS) and UAVs—NTNs enable seamless coverage in regions previously considered unreachable. Whether traversing remote deserts, deep oceans, or mountainous terrain, NTNs provide reliable, scalable connectivity where traditional terrestrial networks fall short or are economically unviable. This paradigm shift is not merely about extending signal reach; it’s about enabling entirely new categories of applications and industries to thrive in real time. ... A core feature of NTNs is their use of varied orbital altitudes, each offering distinct performance characteristics. Low Earth Orbit (LEO) satellites (altitudes of 500–2,000 km) are known for their low latency (20–50 ms) and are ideal for real-time services. Medium Earth Orbit (MEO) systems (2,000–35,000 km) strike a balance between coverage and latency and are often used in navigation and communications. Geostationary Orbit (GEO) satellites, positioned at ~35,786 km, provide wide-area coverage from a fixed position relative to Earth’s rotation—particularly useful for broadcast and constant-area monitoring. 


Enterprises are wasting the cloud’s potential

One major key to achieving success with cloud computing is training and educating employees. Although the adoption of cloud technology signifies a significant change, numerous companies overlook the importance of equipping their staff with the technical expertise and strategic acumen to capitalize on its potential benefits. IT teams that lack expertise in cloud services may use cloud resources inefficiently or ineffectively. Business leaders who are unfamiliar with cloud tools often struggle to leverage data-driven insights that could drive innovation. Employees relying on cloud-based applications might not fully utilize all their functionality due to insufficient training. These skill gaps lead to dissatisfaction with cloud services, and the company doesn’t benefit from its investments in cloud infrastructure. ... The cloud is a tool for transforming operations rather than just another piece of IT equipment. Companies can refine their approach to the cloud by establishing effective governance structures and providing employees with training on the optimal utilization of cloud technology. Once they engage architects and synchronize cloud efforts with business objectives, most companies will see tangible results: cost savings, system efficiency, and increased innovation.


The battle to AI-enable the web: NLWeb and what enterprises need to know

NLWeb enables websites to easily add AI-powered conversational interfaces, effectively turning any website into an AI app where users can query content using natural language. NLWeb isn’t necessarily about competing with other protocols; rather, it builds on top of them. The new protocol uses existing structured data formats like RSS, and each NLWeb instance functions as an MCP server. “The idea behind NLWeb is it is a way for anyone who has a website or an API already to very easily make their website or their API an agentic application,” Microsoft CTO Kevin Scott said during his Build 2025 keynote. “You really can think about it a little bit like HTML for the agentic web.” ... “NLWeb leverages the best practices and standards developed over the past decade on the open web and makes them available to LLMs,” Odewahn told VentureBeat. “Companies have long spent time optimizing this kind of metadata for SEO and other marketing purposes, but now they can take advantage of this wealth of data to make their own internal AI smarter and more capable with NLWeb.” ... “NLWeb provides a great way to open this information to your internal LLMs so that you don’t have to go hunting and pecking to find it,” Odewahn said. “As a publisher, you can add your own metadata using schema.org standard and use NLWeb internally as an MCP server to make it available for internal use.”
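The schema.org metadata Odewahn refers to is usually published as JSON-LD embedded in a page. A minimal example of the kind of markup NLWeb can build on, constructed here as a Python dict (the headline, date, author, and keywords are illustrative placeholders):

```python
import json

# Minimal schema.org Article markup of the kind publishers already expose
# for SEO and that NLWeb can reuse. All field values are illustrative.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The battle to AI-enable the web",
    "datePublished": "2025-05-24",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "keywords": ["NLWeb", "MCP", "agentic web"],
}

# In a real page this would be embedded as:
#   <script type="application/ld+json"> ... </script>
jsonld = json.dumps(article, indent=2)
assert json.loads(jsonld)["@type"] == "Article"
```

Because this metadata already exists on millions of pages, the pitch is that making a site conversational is mostly a matter of exposing what is already there rather than building something new.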

Daily Tech Digest - March 02, 2025


Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Weak cyber defenses are exposing critical infrastructure — how enterprises can proactively thwart cunning attackers to protect us all

Weak cybersecurity isn’t merely a corporate issue — it’s a national security risk. The 2021 Colonial Pipeline attack disrupted energy supplies and exposed vulnerabilities in critical industries. Rising geopolitical tensions, especially with China, amplify these risks. Recent breaches attributed to state-sponsored actors have exploited outdated telecommunications equipment and other legacy systems, revealing how complacency in updating technology can put national security in danger. For instance, last year’s hack of U.S. and international telecommunications companies exposed phone lines used by top officials and compromised data from systems for surveillance requests, threatening national security. Weak cybersecurity at these companies risks long-term costs, allowing state-sponsored actors to access sensitive information, influence political decisions and disrupt intelligence efforts. ... No company can face today’s cyber threats on its own. Collaboration between private businesses and government agencies is more than helpful — it’s imperative. Sharing threat intelligence in real-time allows organizations to respond faster and stay ahead of emerging risks. Public-private partnerships can also level the playing field by offering smaller companies access to resources like funding and advanced security tools they might not otherwise afford.


Evaluating the CISO

Delegation skills are an essential component that should be evaluated separately in this area. Effective delegation is essential to prevent becoming a bottleneck, as micromanagement is unsuitable for the CISO role. Delegating complex tasks not only lightens your load but also helps foster the team’s overall competence. Without strong delegation skills, CISOs cannot rate themselves highly in their relationship with the internal security team. ... A CISO is hired to lead, manage, and support specific projects or programs such as migrating to a cloud or hybrid infrastructure, implementing zero-trust principles, launching security awareness initiatives, or assessing risks and creating a roadmap for post-quantum cryptography implementation. The success of these initiatives ultimately falls under the CISO’s responsibility. To execute these programs effectively, the CISO relies heavily on its team and internal organizational peers. As such, building strong relationships with both is essential for successfully delivering projects. ... A CISO must have responsibility for the information security budget, which includes funding for the team, tools, and services. Without direct control over the budget, it becomes challenging to rate the relationship with management highly, as budget ownership is a critical aspect of the CISO’s role.


Unraveling Large Language Model Hallucinations

You might have seen model hallucinations. They are the instances where LLMs generate incorrect, misleading, or entirely fabricated information that appears plausible. These hallucinations happen because LLMs do not “know” facts in the way humans do; instead, they predict words based on patterns in their training data. ... Supervised Fine-Tuning makes the model capable. However, even a well-trained model can generate misleading, biased, or unhelpful responses. Therefore, Reinforcement Learning with Human Feedback is required to align it with human expectations. We start with the assistant model, trained by SFT. For a given prompt we generate multiple model outputs. Human labelers rank or score multiple model outputs based on quality, safety, and alignment with human preferences. We use these data to train a whole separate neural network that we call a reward model. The reward model imitates human scores. It is a simulator of human preferences. It is a completely separate neural network, probably with a transformer architecture, but it is not a language model in the sense that it generates diverse language. It’s just a scoring model.
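The reward model described above is commonly trained with a pairwise (Bradley-Terry) objective: the loss is small when the model scores the human-preferred completion higher than the rejected one. A minimal sketch of that objective, using plain floats in place of real model scores:

```python
import math

def pairwise_loss(score_preferred, score_rejected):
    """Bradley-Terry objective used to train reward models:
    -log(sigmoid(r_chosen - r_rejected)). Small when the reward model
    ranks the human-preferred response above the rejected one."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the reward model agrees with the labeler, the loss is near zero...
good = pairwise_loss(2.0, -1.0)   # margin +3 -> loss ~0.049
# ...and large when it scores the rejected answer higher.
bad = pairwise_loss(-1.0, 2.0)    # margin -3 -> loss ~3.05
assert good < bad
```

Minimizing this loss over many labeled comparison pairs is what turns the scoring network into a "simulator of human preferences" that the RL step can then optimize against.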


How to Communicate the Business Value of Master Data Management

In an ideal scenario, MDM is integral to a broader D&A strategy, highlighting how D&A supports the organization's strategic goals. The strategy aligns with these goals, prioritizes the business outcomes it will support, and details what is needed to achieve them. Therefore, leaders must first understand and prioritize the explicit business outcomes that MDM will support before creating an MDM strategy. In other words, "improving decision-making" is not good enough. "Increase customer service levels by 5% by end of December 2025" is the level of detail required. D&A leaders may recognize that master data is causing a problem or limiting an opportunity, which is where they would rely on an MDM. If this is the case, those D&A leaders should consider questions that help identify the problem, KPIs, and key stakeholders in these cases. These questions help identify potential business outcomes that MDM could support. Figure 1 provides a worksheet to build this initial picture and facilitate stakeholder discussions. The worksheet maps high-level goals onto a run-grow-transform framework, which could also be represented by three columns for the primary business value drivers: risk, revenue, and cost.


4 ways to get your business ready for the agentic AI revolution

Agents could be used eventually, but only once a partnership approach identifies the right opportunities. "Agents are becoming a big part of how generative AI and machine learning are used in business today. The way agents will be used in travel will be fascinating to watch. I think this technology will certainly be a part of the mix," he said. "The process for Hyatt will be to find the right technologies -- and we'll do that in close partnership with our business leaders and the technology teams that run the applications. We'll then provide the AI services to drive those transitions for the business." ... Keith Woolley, chief digital and information officer at the University of Bristol, is another digital leader who sees the potential benefits of agents. However, he said these advantages will become manifest over the longer term. "We are looking at agentic AI, but we're not implementing it yet," he said. "We sit as a management team and ask questions like, 'Should we do our admissions process using agentic AI? What would be the advantage?'" Woolley told ZDNET he could envision a situation in which AI and automation help assess and inform candidates worldwide about the status of their applications.


Cloud Giants Collaborate on New Kubernetes Resource Management Tool

The core innovation of kro is the introduction of the ResourceGraphDefinition custom resource. kro encapsulates a Kubernetes deployment and its dependencies into a single API, enabling custom end-user interfaces that expose only the parameters relevant to a non-platform engineer. This masking hides the Kubernetes and cloud provider API details that are not useful in a deployment context. ... kro works seamlessly with the existing cloud provider extensions for managing cloud resources from Kubernetes: AWS Controllers for Kubernetes (ACK), Google's Config Connector (KCC), and Azure Service Operator (ASO). kro enables standardised, reusable service templates that promote consistency across different projects and environments, with the benefit of being entirely Kubernetes-native. It is still in the early stages of development. "As an early-stage project, kro is not yet ready for production use, but we still encourage you to test it out in your own Kubernetes development environments," the post states. ... Most significantly for the Crossplane community, Farcic questioned kro's purpose given its functional overlap with existing tools. "kro is serving more or less the same function as other tools created a while ago without any compelling improvement," he observed.
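To make the ResourceGraphDefinition idea concrete, here is a minimal sketch of what such a resource might look like. The names and fields (web-app, WebApp, image, replicas) are illustrative assumptions based on kro's documented simple-schema syntax, not details from the article:

```yaml
# Hypothetical ResourceGraphDefinition: exposes a tiny "WebApp" API to
# end users while kro manages the underlying Deployment for them.
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app
spec:
  # The simplified API surface a non-platform engineer sees: only these
  # parameters are visible; everything below is encapsulated.
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      name: string
      image: string | default="nginx:latest"
      replicas: integer | default=1
  # The underlying Kubernetes resources kro creates and manages.
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: ${schema.spec.image}
```

An end user would then create a short WebApp object with just a name, image, and replica count, and kro would expand it into the full Deployment behind the scenes.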


Why a different approach to AIOps is needed for SD-WAN

AIOps tools enhance efficiency by integrating with IT management tools, enabling proactive issue identification and streamlining IT management processes. More than that, they optimize an organization's network by improving the performance, efficiency, and dependability of its network resources to ensure an optimal user experience. On the infrastructure side, many organizations now rely on SD-WAN (software-defined wide area network) to manage and optimize data traffic across different types of networks. SD-WAN is an effective way to connect the organization and provide users with application access. It helps businesses improve network performance, cut costs, and gain flexibility by easily connecting to various network types. ... AIOps tools use the information extracted from SD-WAN systems to resolve issues autonomously, without human intervention. In other words, AIOps tools apply predictive analytics to forecast future events or outcomes related to network operations. This makes the whole system run more smoothly and reliably, while machine learning algorithms can use this historical data to make predictions and proactively improve the performance of critical applications.
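As a rough illustration of the kind of analysis described above, the following sketch flags latency readings that deviate sharply from a rolling baseline. The function name, window size, and sample data are all hypothetical; a production AIOps pipeline would feed real SD-WAN telemetry into far more sophisticated models and trigger automated remediation rather than just reporting.

```python
from collections import deque
from statistics import mean, stdev

def detect_latency_anomalies(samples, window=5, threshold=3.0):
    """Flag latency samples that deviate sharply from the recent baseline.

    samples: iterable of (timestamp, latency_ms) tuples.
    Returns the list of (timestamp, latency_ms) flagged as anomalous.
    """
    history = deque(maxlen=window)
    anomalies = []
    for ts, latency in samples:
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            # Flag samples far outside the rolling baseline; a real AIOps
            # pipeline would trigger remediation (e.g. path failover) here.
            if sigma > 0 and abs(latency - mu) > threshold * sigma:
                anomalies.append((ts, latency))
        history.append(latency)
    return anomalies

# Hypothetical per-minute latency readings from one SD-WAN link: steady
# around 20-22 ms, then a sudden spike to 200 ms.
readings = [(i, 20.0 + (i % 3)) for i in range(10)] + [(10, 200.0)]
print(detect_latency_anomalies(readings))  # → [(10, 200.0)]
```

The same rolling-baseline idea extends to loss, jitter, or throughput per link, which is the historical data the article says machine learning models consume.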


AI-Driven Threat Detection and the Need for Precision

AI algorithms, particularly those based on machine learning, excel at sifting through massive datasets and identifying patterns that would be nearly impossible for us mere humans to spot. An AI system might analyze network traffic patterns to identify unusual data flows that could indicate a data exfiltration attempt. Alternatively, it could scan email attachments for malicious code that traditional antivirus software might miss. Ultimately, AI feeds on context and content. The effectiveness of these systems in protecting your security posture is inextricably linked to the quality of the data they are trained on and the precision of their algorithms. ... Finally, AI-driven threat detection does not eliminate the need for human expertise. Skilled security professionals should still oversee AI systems and make informed decisions based on their own contextual expertise and experience. Human oversight validates the AI's findings, and threat detection algorithms cannot fully replace the critical thinking and intuition of human analysts. There may come a time when human professionals exist in AI's shadow. Yet, for now, combining the power of AI with human knowledge and a commitment to continuous learning can form the building blocks of a sophisticated defense program.
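To illustrate the data-flow example above in the simplest possible terms, this sketch compares each host's observed outbound volume against a historical baseline. The hosts, volumes, and the factor-of-ten threshold are invented for illustration; real exfiltration detection weighs many more signals (destinations, timing, protocols) and, as the article stresses, still needs a human analyst to validate what it flags.

```python
def flag_exfiltration(baseline, observed, factor=10.0):
    """Flag hosts whose outbound traffic far exceeds their historical norm.

    baseline: dict mapping host -> typical outbound bytes per hour.
    observed: dict mapping host -> outbound bytes seen this hour.
    Returns hosts whose observed volume exceeds factor * baseline,
    plus any host with no baseline at all.
    """
    flagged = []
    for host, sent in observed.items():
        typical = baseline.get(host, 0)
        # Unknown hosts, or large multiples of the norm, warrant review.
        if typical == 0 or sent > factor * typical:
            flagged.append(host)
    return flagged

# Hypothetical hourly outbound-byte counts for two internal hosts.
baseline = {"10.0.0.5": 2_000_000, "10.0.0.9": 500_000}
observed = {"10.0.0.5": 2_100_000, "10.0.0.9": 80_000_000}
print(flag_exfiltration(baseline, observed))  # → ['10.0.0.9']
```

A flagged host is a lead, not a verdict: the analyst's contextual judgment decides whether an unusually large transfer is a backup job or a breach.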


From Ambiguity to Accountability: Analyzing Recommender System Audits under the DSA

In these early years of the DSA, a range of stakeholders – online platforms, civil society, the European Commission (EC), and national Digital Service Coordinators (DSCs) – must experiment, identify good practices, and share lessons learned. Such iteration is important to ensure an adaptive DSA regime that spurs innovation and responds to shifting technologies, risks, and mitigation strategies. The need for iteration and flexibility, however, should not mean the audits fail to deliver on their potential as vehicles for transparency and accountability. The first round of independent audits of recommender systems reveals clear areas for immediate improvement. Because the core definitions and methodologies were developed independently by platforms and auditors, significant inconsistencies exist in both risk assessment and audit processes. ... The DSA requires the main parameters of recommender systems to be spelled out in plain and intelligible language. What does this concretely mean in the recommender system context? Is it free of “acronyms or complex/technical terminology” (Pinterest), “straightforward vocabulary and easy to perceive, understand, or interpret” (Snap), or “written for a general audience with varying technical skill levels, inclusive of all users” (TikTok)? There's a subtle difference in expectations associated with each framing. These terms don’t need to be defined in a vacuum.


Cybersecurity in retail: What does the future hold?

In the coming year, cybersecurity experts predict attackers will increasingly target the Generative AI models used by retailers, creating significant potential for operational disruptions and data breaches. These AI systems, now critical to retail operations, are vulnerable to sophisticated attacks that could compromise customer service efficiency and expose critical business vulnerabilities. The core risk lies in the sophisticated ways attackers can exploit AI's complex decision-making processes, turning what was once a technological advantage into a potential security liability. Retailers must recognise that their AI systems are not just technological tools but potential entry points for cybercriminal activity. ... The complexity and distribution of digital ecosystems make them prime targets during high-demand periods. For example, as we have seen in the past, cyberattacks that hit supply chains can cause major delays and financial loss. These incidents underscore the vulnerabilities in supply chains during peak times of the year. In 2025, expect a rise in supply chain attacks during the holiday season, targeting ecommerce platforms and logistics providers, which could disrupt product availability and shipping.