Showing posts with label availability. Show all posts
Showing posts with label availability. Show all posts

Daily Tech Digest - April 23, 2026


Quote for the day:

“Every time you have to speak, you are auditioning for leadership.” -- James Humes

🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 19 mins • Perfect for listening on the go.


How To Navigate The New Economics Of Professionalized Cybercrime

The modern cybercrime landscape has evolved into a professionalized industry where attackers prioritize precision and severity over volume. According to recent data, while the frequency of material claims has decreased, the average cost per ransomware incident has surged, signaling a shift toward more efficient targeting. This new economic reality is defined by three primary trends: the rise of data-theft extortion, the prevalence of identity attacks, and the long-tail financial consequences that follow a breach. Because businesses have improved their backup and recovery systems, criminals have pivoted from simple encryption to threatening the exposure of sensitive data, often leveraging AI to analyze stolen information for maximum leverage. Furthermore, the professionalization of these threats extends to supply chain vulnerabilities, where a single vendor compromise can cause cascading losses across thousands of downstream clients. Consequently, cyber incidents are no longer isolated technical failures but material enterprise risks with financial repercussions lasting years. To navigate this environment, organizational leaders must shift their focus from mere operational recovery to robust data exfiltration prevention. CISOs, CFOs, and CROs must collaborate to integrate cyber risk into broader enterprise frameworks, ensuring that financial planning and security investments account for the multi-year legal, regulatory, and reputational exposures that now characterize the threat landscape.


How Agentic AI is transforming the future of Indian healthcare

Agentic AI represents a transformative shift in the Indian healthcare landscape, transitioning from passive data analysis to autonomous, goal-oriented systems that proactively manage patient care. Unlike traditional AI, which primarily focuses on reporting, agentic systems independently execute tasks such as triaging, scheduling, and continuous monitoring to address India’s strained doctor-to-patient ratio. By integrating these intelligent agents, medical facilities can streamline outpatient visits—from digital symptom recording to automated post-consultation follow-ups—significantly reducing the administrative burden on overworked clinicians. The technology is particularly vital for chronic disease management, where it provides timely nudges for medication adherence and identifies early warning signs before they escalate into emergencies. Furthermore, Agentic AI acts as a crucial support layer for frontline health workers in rural regions, bridging the clinical knowledge gap through real-time protocol guidance and decision support. While these advancements offer a scalable solution for public health, the article emphasizes that human empathy remains irreplaceable. Successful adoption requires robust frameworks for data privacy and ethical transparency, ensuring that physicians always retain final decision-making authority. Ultimately, by evolving from a mere tool into essential digital infrastructure, Agentic AI is poised to democratize access and foster a more responsive, patient-centric healthcare ecosystem across the diverse Indian population.


What a Post-Commercial Quantum World Could Look Like

The article "What a Post-Commercial Quantum World Could Look Like," published by The Quantum Insider, explores a future where quantum computing has moved beyond its initial commercial hype into a phase of deep integration and stabilization. In this post-commercial era, the focus shifts from the race for "quantum supremacy" toward the practical, ubiquitous application of quantum technologies across global infrastructure. The piece suggests that once the technology matures, it will cease to be a standalone industry of speculative startups and instead become a foundational utility, much like the internet or electricity today. Key impacts include a complete transformation of cybersecurity through quantum-resistant encryption and the optimization of complex systems in logistics, materials science, and drug discovery that were previously unsolvable. This transition will likely lead to a "quantum divide," where geopolitical and economic power is concentrated among those who have successfully integrated these capabilities into their national security and industrial frameworks. Ultimately, the article paints a picture of a world where quantum mechanics no longer represents a frontier of experimental physics but serves as the silent, invisible engine driving high-performance global economies and ensuring long-term technological resilience.


Continuous AI biometric identification: Why manual patient verification is not enough!

The article explores the critical transition from manual patient verification to continuous AI-powered biometric identification in modern healthcare. Traditional methods, such as verbal confirmations and physical wristbands, are increasingly deemed insufficient due to their susceptibility to human error and data entry inconsistencies, which often lead to fragmented medical records and life-threatening mistakes. To address these vulnerabilities, the industry is shifting toward a model of constant identity assurance using advanced technologies like facial biometrics, behavioral signals, and passive authentication. This continuous approach ensures real-time validation across all clinical touchpoints, significantly reducing the risks associated with duplicate electronic health records — currently estimated at 8-12% of total files. Furthermore, the integration of agentic AI and multimodal systems — combining fingerprints, voice, and device data — creates a secure identity layer that streamlines clinical workflows and protects patients from misidentification. With the healthcare biometrics market projected to reach $42 billion by 2030, the article argues that automating identity verification is no longer optional. Ultimately, by replacing episodic manual checks with autonomous, intelligent monitoring, healthcare organizations can enhance data integrity, safeguard financial interests against identity fraud, and, most importantly, ensure the highest standards of safety for the individuals in their care.


The 4 disciplines of delivery — and why conflating them silently breaks your teams

In his article for CIO, Prasanna Kumar Ramachandran argues that enterprise success depends on maintaining four distinct delivery disciplines: product management, technical architecture, program management, and release management. Each domain addresses a fundamental question that the others are ill-equipped to answer. Product management defines the "what" and "why," establishing the strategic vision and priorities. Technical architecture translates this into the "how," determining structural feasibility and sequence. Program management orchestrates the delivery timeline by managing cross-team dependencies, while release management ensures safe, compliant deployment to production. Organizations frequently stumble by treating these roles as interchangeable or asking a single team to bridge all four. This conflation "silently breaks" teams because it forces experts into roles outside their core competencies. For instance, an architect focused on product decisions might prioritize technical elegance over market needs, while program managers might sequence work based on staff availability rather than strategic value. When these boundaries blur, the result is often wasted effort, missed dependencies, and a fundamental misalignment between technical output and business goals. By clearly delineating these responsibilities, leaders can prevent operational friction and ensure that every capability delivered actually reaches the customer safely and generates measurable impact.


Teaching AI models to say “I’m not sure”

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel training technique called Reinforcement Learning with Calibration Rewards (RLCR) to address the issue of AI overconfidence. Modern large language models often deliver every response with the same level of certainty, regardless of whether they are correct or merely guessing. This dangerous trait stems from standard reinforcement learning methods that reward accuracy but fail to penalize misplaced confidence. RLCR fixes this flaw by teaching models to generate calibrated confidence scores alongside their answers. During training, the system is penalized for being confidently wrong or unnecessarily hesitant when correct. Experimental results demonstrate that RLCR can reduce calibration errors by up to 90 percent without sacrificing accuracy, even on entirely new tasks the models have never encountered. This advancement is particularly significant for high-stakes applications in medicine, law, and finance, where human users must rely on the AI’s self-assessment to determine when to seek a second opinion. By providing a reliable signal of uncertainty, RLCR transforms AI from an unshakable but potentially deceptive voice into a more trustworthy tool that explicitly communicates its own limitations, ultimately enhancing safety and reliability in complex decision-making environments.


Are you paying an AI ‘swarm tax’? Why single agents often beat complex systems

The VentureBeat article discusses a "swarm tax" paid by enterprises that over-engineer AI systems with complex multi-agent architectures. Recent Stanford University research reveals that single-agent systems often match or even outperform multi-agent swarms when both are allocated an equivalent "thinking token budget." The perceived superiority of swarms frequently stems from higher total computation during testing rather than inherent structural advantages. This "tax" manifests as increased latency, higher costs, and greater technical complexity. A primary reason for this performance gap is the "Data Processing Inequality," where critical information is often lost or fragmented during the handoffs and summarizations required in multi-agent orchestration. In contrast, a single agent maintains a continuous context window, allowing for much more efficient information retention and reasoning. The study suggests that developers should prioritize optimizing single-agent models—using techniques like SAS-L to extend reasoning—before adopting multi-agent frameworks. Swarms remain useful only in specific scenarios, such as when a single agent’s context becomes corrupted by noisy data or when a task is naturally modular and requires parallel processing. Ultimately, the article advocates for a "single-agent first" approach, warning that unnecessary architectural bloat can lead to diminishing returns and inefficient resource utilization in enterprise AI deployments.


Cloud tech outages: how the EU plans to bolster its digital infrastructure

The recent global outages involving Amazon Web Services in late 2025 and CrowdStrike in 2024 have underscored the extreme fragility of modern digital infrastructure, which remains heavily reliant on a small group of U.S.-based hyperscalers. These disruptions revealed that the perceived redundancy of cloud computing is often an illusion, as many organizations concentrate their primary and backup systems within the same provider's ecosystem. Consequently, the European Union is shifting its strategy from mere technical efficiency to a geopolitical pursuit of "digital sovereignty." To mitigate the risks of "digital colonialism" and the reach of the U.S. CLOUD Act, European leaders are championing the 2025 European Digital Sovereignty Declaration. This framework prioritizes the development of a federated cloud architecture, linking national nodes into a cohesive, secure network to reduce dependence on foreign monopolies. Furthermore, the EU is investing heavily in homegrown semiconductors, foundational AI models, and public digital infrastructure. By establishing a dedicated task force to monitor progress through 2026, the bloc aims to ensure that European data remains subject strictly to local jurisdiction. This comprehensive approach seeks to bolster resilience against future technical failures while securing the strategic autonomy necessary for Europe’s long-term digital and economic security.


When a Cloud Region Fails: Rethinking High Availability in a Geopolitically Unstable World

In the InfoQ article "When a Cloud Region Fails," Rohan Vardhan introduces the concept of sovereign fault domains (SFDs) to address cloud resilience within an increasingly unstable geopolitical landscape. While traditional high-availability strategies focus on technical abstractions like multi-availability zone (multi-AZ) deployments to mitigate hardware failures, Vardhan argues these are insufficient against sovereign-level disruptions. SFDs represent failure boundaries defined by legal, political, or physical jurisdictions. Recent events, such as sudden cloud provider withdrawals or infrastructure instability in conflict zones, demonstrate how geopolitical shifts can trigger correlated failures across entire regions, rendering standard multi-AZ setups ineffective. To combat these risks, architects must shift their baseline for high availability from multi-AZ to multi-region architectures. This transition requires a fundamental rethink of distributed systems, moving beyond technical redundancy to include legal and political considerations in data replication and traffic management. The article advocates for the adoption of explicit region evacuation playbooks, the definition of geopolitical recovery targets, and the expansion of chaos engineering to simulate sovereign-level losses. Ultimately, achieving true resilience in the modern world necessitates acknowledging that cloud regions are physical and political assets, not just virtualized resources, requiring intentional design to survive jurisdictional partitions.


Inside Caller-as-a-Service Fraud: The Scam Economy Has a Hiring Process

The BleepingComputer article explores the emergence of "Caller-as-a-Service," a professionalized vishing ecosystem where cybercrime syndicates mirror the organizational structure of legitimate businesses. These industrialized fraud operations utilize a clear division of labor, employing specialized roles such as infrastructure operators, data analysts, and professional callers. Recruitment for these positions is surprisingly formal; underground job postings resemble professional LinkedIn ads, specifically seeking native English speakers with high emotional intelligence and persuasive social engineering skills. To establish credibility, recruiters often display verifiable "proof-of-profit" via large cryptocurrency balances to entice new talent. Once hired, callers are frequently subjected to real-time supervision through screen sharing to ensure strict adherence to malicious scripts and maximize victim conversion rates. Compensation models are equally sophisticated, ranging from fixed weekly salaries of $1,500 to success-based commissions of $1,000 per successful vishing hit. This service-driven model significantly lowers the barrier to entry for criminals, as it allows them to outsource the technical and interpersonal complexities of a cyberattack. Ultimately, the article emphasizes that the professionalization of the scam economy makes these threats more resilient and efficient, necessitating that defenders implement more robust identity verification and multi-factor authentication to protect individuals from these increasingly coordinated, data-driven vishing campaigns.

Daily Tech Digest - November 17, 2025


Quote for the day:

"Keep steadily before you the fact that all true success depends at last upon yourself." -- Theodore T. Hunger



You already use a software-only approach to passkey authentication - why that matters

After decades of compromises, exfiltrations, and financial losses resulting from inadequate password hygiene, you'd think that we would have learned by now. However, even after comprehensive cybersecurity training, research shows that 98% of users are still easily tricked into divulging their passwords to threat actors. Realizing that hope -- the hope that users will one day fix their password management habits -- is a futile strategy to mitigate the negative consequences of shared secrets, the tech industry got together to invent a new type of login credential. The passkey doesn't involve a shared secret, nor does it require the discipline or the imagination of the end user. Unfortunately, passkeys are not as simple to put into practice as passwords, which is why a fair amount of education is still required. ... Passkeys still involve a secret. But unlike passwords, users just have no way of sharing it -- not with legitimate relying parties and especially not with threat actors. ... In most situations where users are working with passkeys but not using one of the platform authenticators, they'll most likely be working with a virtual authenticator. These are essentially BYO authenticators, none of which rely on the device's underlying security hardware for any passkey-related public key cryptography or encryption tasks, unlike platform authenticators.


Getting started with agentic AI

A working agentic AI strategy relies on AI agents connected by a metadata layer, whereby people understand where and when to delegate certain decisions to the AI or pass work to external contractors. It’s a focus on defining the role of the AI and where people involved in the workflow need to contribute. ... Data lineage tracking should happen at the code level through metadata propagation systems that tag every data transformation, model inference and decision point with unique identifiers. Willson says this creates an immutable audit trail that regulatory frameworks increasingly demand. According to Willson, advanced implementations may use blockchain-like append-only logs to ensure governance data cannot be retroactively modified. ... One of the areas IT leaders need to consider is that their organisation will more than likely rely on a number of AI models to support agentic AI workflows.  ... Organisations need to have the right data strategy in place, and they should already be well ahead on their path to full digitisation, where automation through RPA is being used to connect many disparate workflows. Agentic AI is the next stage of this automation, where an AI is tasked with making decisions in a way that would have previously been too clunky using RPA. However, automation of workflows and business processes are just pieces of an overall jigsaw. 


Human-centric IAM is failing: Agentic AI requires a new identity control plane

Agentic AI does not just use software; it behaves like a user. It authenticates to systems, assumes roles and calls APIs. If you treat these agents as mere features of an application, you invite invisible privilege creep and untraceable actions. A single over-permissioned agent can exfiltrate data or trigger erroneous business processes at machine speed, with no one the wiser until it is too late. The static nature of legacy IAM is the core vulnerability. You cannot pre-define a fixed role for an agent whose tasks and required data access might change daily. The only way to keep access decisions accurate is to move policy enforcement from a one-time grant to a continuous, runtime evaluation. ... Securing this new workforce requires a shift in mindset. Each AI agent must be treated as a first-class citizen within your identity ecosystem. First, every agent needs a unique, verifiable identity. This is not just a technical ID; it must be linked to a human owner, a specific business use case and a software bill of materials (SBOM). The era of shared service accounts is over; they are the equivalent of giving a master key to a faceless crowd. Second, replace set-and-forget roles with session-based, risk-aware permissions. Access should be granted just in time, scoped to the immediate task and the minimum necessary dataset, then automatically revoked when the job is complete. Think of it as giving an agent a key to a single room for one meeting, not the master key to the entire building.


Don’t ignore the security risks of agentic AI

We need policy engines that understand intent, monitor behavioral drift and can detect when an agent begins to act out of character. We need developers to implement fine-grained scopes for what agents can do, limiting not just which tools they use, but how, when and under what conditions. Auditability is also critical. Many of today’s AI agents operate in ephemeral runtime environments with little to no traceability. If an agent makes a flawed decision, there’s often no clear log of its thought process, actions or triggers. That lack of forensic clarity is a nightmare for security teams. In at least some cases, models resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors Finally, we need robust testing frameworks that simulate adversarial inputs in agentic workflows. Penetration-testing a chatbot is one thing; evaluating an autonomous agent that can trigger real-world actions is a completely different challenge. It requires scenario-based simulations, sandboxed deployments and real-time anomaly detection. ... Until security is baked into the development lifecycle of agentic AI, rather than being patched on afterward, we risk repeating the same mistakes we made during the early days of cloud computing: excessive trust in automation before building resilient guardrails.


How Technological Continuity and High Availability Strengthen IT Resilience in Critical Sectors

Within the context of business continuity, high availability ensures technology supports the organization’s ability to operate without disruption. It minimizes downtime and maintains the confidentiality, integrity, and availability of information. ... To achieve true high availability, organizations implement architectures that combine redundancy, automation, and fault tolerance. Database replication whether synchronous or asynchronous allows data to be duplicated across primary and secondary nodes, ensuring continuous access in the event of a failure. Synchronous replication guarantees data consistency but introduces latency, while asynchronous models reduce latency at the expense of a small data gap. Both approaches, when properly configured, strengthen the integrity and continuity of critical databases. ... One of the most effective strategies to reduce technological dependence is the implementation of hybrid continuity models that integrate both on-premises and cloud environments. Organizations that rely exclusively on a single cloud service provider expose themselves to the risk of total outage if that provider experiences downtime or disruption. By maintaining mirrored environments between cloud infrastructure and local servers, it is possible to achieve operational flexibility and independence across channels.


The tech that turns supply chains from brittle to unbreakable

When organizations begin crafting a supply chain strategy, one of the most common misconceptions is viewing it as purely a logistics exercise rather than a holistic framework that spans procurement, planning and risk management. Another frequent misstep is underestimating the role of technology. Digital tools are essential for visibility, predictive analytics and automation, not optional. Equally critical is recognizing that strategy is not static, it must evolve continuously to address shifting market conditions and emerging threats. ... Resilience comes from treating cyber and physical risks as one integrated challenge. That means embedding security into every layer of the supply chain, from vendor onboarding to logistics execution, while leveraging advanced visibility tools and zero trust principles. ... Executive buy‑in for resilience investments begins with reframing the conversation from cost to value. We position resilience as a strategic enabler rather than an expense by linking it to business continuity, customer trust and competitive advantage. Instead of focusing solely on immediate ROI, emphasize measurable risk reduction, regulatory compliance and the cost of inaction during disruptions. Use real‑world scenarios and data to show how resilience safeguards revenue streams and accelerates recovery when crises hit. Engage executives early, align initiatives with corporate objectives and present resilience as a driver of long‑term growth and brand reputation.


ISO and ISMS: 9 reasons security certifications go wrong

Without management’s commitment, it’s often difficult to get all employees on board and ensure that ISO standards, or even IT baseline protection standards, are integrated into daily business operations. As a result, companies should provide top-down clarity about the importance of such initiatives — even if implementation can be costly and inconvenient. “Cleaning up” isn’t always pleasant, but the result is all the more worthwhile. ... Without genuine integration into daily operations, the certification becomes useless, and the benefits it offers remain unrealized. In the worst-case scenario, organizations even end up losing money, while also missing out on the implementation’s potential value. When integrating a management system, it’s important not to get bogged down in details. The practical application of the system in real-world work situations is crucial for its success. ... Employees need to understand why the implementation is important, how it will be integrated into their daily workflows, and how it will make their work easier. If this isn’t the case, it will be difficult to implement the system and maintain any resulting certification. ... Without a detailed plan, companies focus on areas that are irrelevant or do not meet the requirements of the ISO/IT baseline protection standards. Furthermore, if the implementation of a management system takes too long, regular business development can overtake the process itself, resulting in duplicated work to keep up with changes.


State of the API 2025: API Strategy Is Becoming AI Strategy

What distinguishes fully API-first teams? They treat APIs as long-lived products with roadmaps, SLAs, versioning, and feedback loops. They align product and engineering early, embed governance into workflows, and standardize patterns so that consumers, human or agent, can rely on consistent contracts. In our experience, that "productization" of APIs is what unlocks long-lived, reusable APIs and parallel delivery. When your agents can trust your schemas, error semantics, and rate-limit behaviors, they can compose capabilities far faster than code-level abstractions ever could. ... As AI agents become primary API consumers, security assumptions must evolve. 51% of developers cite unauthorized or excessive agent calls as a top concern; 49% worry about AI systems accessing sensitive data they shouldn't; and 46% highlight the risk of credential leakage and over-scoped keys. Traditional controls, designed for predictable human traffic, struggle against machine-speed persistence, long-running automation, and credential amplification. ... Even as API-first adoption grows, collaboration remains a bottleneck. 93% of teams report challenges such as inconsistent documentation, duplicated work, and difficulty discovering existing APIs. With 69% of respondents spending 10+ hours per week on API-related tasks, and with a global workforce, asynchronous collaboration is the norm. 


Embedded Intelligence: JK Tyre's Smart Tyre Use Case

Unlike traditional valve-mounted tire pressure monitoring devices, or TPMS, these sensors are permanently integrated for consistent data accuracy. Each chip is designed to last five to seven years, depending on usage and conditions. "These sensors are permanently embedded during the assembly process," said V.K. Misra, technical director at JK Tyre. "They continuously send live data on air pressure and temperature to the dashboard and mobile device. The moment there's a variation, the driver is alerted before a small problem becomes a serious risk." ... The embedded version takes this further by integrating the chip within the tire's internal structure, creating a closed feedback loop between the tire, the driver and the cloud. "We have created an entire connected ecosystem," Misra said. "The tire is just the beginning. The data generated feeds predictive models for maintenance and safety. Through Treel, our platform can now talk to vehicles, drivers and service networks simultaneously." The Treel platform processes sensor data through APIs and cloud analytics, providing actionable insights for drivers and fleet operators. Over time, this data contributes to predictive maintenance models, product design improvements and operational analytics for connected vehicles. ... "AI allows decisions that earlier took days to happen within minutes," Misra said. "It also provides valuable data on wear patterns and helps us improve quality control across plants."


Regulation gives structure and voice to security leaders: Darshan Chavan

Chavan has witnessed a remarkable shift over the past decade in how businesses view cybersecurity. ... The increased visibility of cybersecurity, he says, has given CISOs a strategic voice. “Frequent regulatory updates, data breaches in the news, and rising public awareness have made organisations realize that cybersecurity is fundamental to business continuity,” he explains. “Every organisation now understands that to operate in a fast-evolving digital landscape, you need a cybersecurity leader with authority — and frameworks, regulations, and policies that are implemented and accepted by the business.” He views cybersecurity guidelines — whether from SEBI, RBI, or other regulatory bodies — as empowering rather than restrictive. “Regulation gives structure and voice to security leaders,” he says. “It ensures that cybersecurity is treated not as a cost centre but as a core enabler of business trust.” ... While he acknowledges that the DPDP Act will help formalise this journey, he refuses to wait for regulation to act. “I’m not waiting for the law to push me,” he says. “Tomorrow, investors will start asking how we manage their data, how we protect their bank account numbers, and how we ensure confidentiality. I want to be ready before those questions arise.” Beyond data privacy, Chavan highlights network defense and layered security as ongoing imperatives. “

Daily Tech Digest - October 23, 2025


Quote for the day:

“The more you loose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale



Leadership lessons from NetForm founder Karen Stephenson

Co-creation is a hot buzzword encouraging individuals to integrate and create with each other, but the simplest way to integrate and create is in the mind of one person — if they’re willing to push forward and do it. Even further, what can an integrated team of diverse minds accomplish when they co-create? ... In the age of AI, humans will need to focus on what humans do well. At the moment, at least, that’s making novel connections, thinking by analogy and creating the new. Our single-field approach to learning, qualifications and career ladders makes it hard for us to compete with machines that are often smarter than we are in any given discipline. For that creative spark and to excel at what messy, forgetful, slow, imperfect humans do best, we need to work, think and live differently. In fact, the founders of five of the largest companies in the world are (or were) polymaths — mentally diverse people skilled in multiple disciplines — Bill Gates, Steve Jobs, Warren Buffett, Larry Page and Jeff Bezos. They learn because they’re curious and want to solve problems, not for a career ladder. It’s easier than ever, today, to learn with AI and online materials and to collaborate with tech and humans around the world. All you need to do is open inward to your talents and desires, explore, collect and fuse.


Why cloud and AI projects take longer and how to fix the holdups

In the case of the cloud, the problem is that senior management thinks that the cloud is always cheaper, that you can always cut costs by moving to the cloud. This is despite the recent stories on “repatriation,” or moving cloud applications back into the data center. In the case of cloud projects, most enterprise IT organizations now understand how to assess a cloud project for cost/benefit, so most of the cases where impossible cost savings are promised are caught in the planning phase. For AI, both senior management and line department management have high expectations with respect to the technology, and in the latter case may also have some experience with AI in the form of as-a-service generative AI models available online. About a quarter of these proposals quickly run afoul of governance policies because of problems with data security, and half of this group dies at this point. For the remaining proposals, there is a whole set of problems that emerge. Most enterprises admit that they really don’t understand what AI can do, which obviously makes it hard to frame a realistic AI project. The biggest gap identified is between an AI business goal and a specific path leading to it. One CIO calls the projects offered by user organizations as “invitations to AI fishing trips” because the goal is usually set in business terms, and these would actually require a project simply to identify how the stated goal could be achieved.


Who pays when a multi-billion-dollar data center goes down?

While the Lockton team is looking at everything from immersion cooling to drought, there are a handful of risks where it feels the industry isn't adequately preparing. “The big thing that isn't getting on people's radars in a growing way is customer equipment," Hayhow says “Looking at this through the lens of the data center owner or developer, it's often very difficult. “It's a bit of an unspoken conversation that the equipment in the white space belongs to the customer. Often you don't have custody over it, you don't have visibility over it, and it’s highly proprietary. But the value of it is growing.” Per square meter of white space, the Lockton partner suggests that the value of the equipment five years from now will be exponentially larger than the value of the equipment five years ago, as more data centers invest in expensive GPUs and other equipment for AI use cases. “Leases have become clearer in terms of placing responsibility for damage to customer equipment more squarely on the shoulders of the owner, developer,” Hayhow says. “We're having that conversation in the US, where the halls are larger, the value of the equipment is greater, and some of the hyperscale customers are being much more prescriptive in terms of wanting to address the topic of damage to our equipment … if you lose 20 megawatts worth of racks of Nvidia chips, the lead time to get those replaced, unless you're building elsewhere, is quite significant.”


AI Agents Need Security Training – Just Like Your Employees

“It may not be as candid as what humans would do during those sessions, but AI agents used by your workforce do need to be trained. They need to understand what your company policies are, including what is acceptable behavior, what data they're allowed to access, what actions they're allowed to take,” Maneval explained. ... “Most AI tools are just trained to do the same thing over and over and so it means decisions are based on assumptions from limited information,” she explained to Infosecurity. “Additionally, most AI tools solve real problems but also create real risks and each solve different problems and creates different risks.” While some cybersecurity experts argue that auditing AI tools is no different to auditing any other software or application, Maneval disagrees. ... Maneval’s said her “rule of thumb” is that whether you’re dealing with traditional machine learning algorithms, generative AI applications of AI agents, “treat them like any other employees.” This not only means that AI-powered agents should be trained on security policies but should also be forced to respect security controls that the staff have to respect, such as role-based access controls (RBAC). “You should look at how you treat your humans and apply those same controls to the AI. You probably do a background check before anyone is hired. Do the same thing with your AI agent. ..."


Why must CISOs slay a cyber dragon to earn business respect?

Why should a security leader need to experience a major cyber incident to earn business colleagues’ respect? Jeff Pollard, VP and principal analyst at Forrester, says this enterprise perception problem is “just part of human nature. If we don’t see the bad thing happening, we don’t appreciate all of the things that were done to prevent that bad thing from happening.” Of course, if an attack turns into an incident and defense goes poorly, “it can easily turn from a hero moment to a scapegoat moment,” Pollard says. Oberlaender, who now works as a cybersecurity consultant, is among those who believe hard-earned experience should be rewarded, but that’s not what he’s seeing in the market today. ... CISOs “feel that they need to fight off an attack to show value, but there are many other successes they can do and show,” says Erik Avakian, technical counselor at Info-Tech Research Group. “Building KPIs is a powerful way to show their value.” ... Chris Jackson, a senior cybersecurity specialist with tech education vendor Pluralsight, reinforces the frustration that many enterprise CISOs feel about the lack of appropriate respect from their colleagues and bosses. “CISOs are a lot like pro sports coaches. It doesn’t matter how well they performed during the season or how many games they won. If they don’t win the championship, it’s seen as a failure, and the coach is often the first to go,” Jackson says. 


The next cyber crisis may start in someone else’s supply chain

Organizations have improved oversight of their direct partners, but few can see beyond the first layer. This limited view leaves blind spots that attackers can exploit, particularly through third-party software or service providers. “We’re in a new generation of risk, one where cyber, geopolitical, technology, political risk, and other factors are converging and reshaping the landscape. The impact on markets and operations is unfolding faster than many organizations can keep up,” said Jim Wetekamp, CEO of Riskonnect. ... Third-party and nth-party risks continue to expose companies to disruption. Most organizations have business continuity plans for supplier disruptions, but their monitoring often stops at direct partners. Only a small fraction can monitor risks across multiple tiers of their supply chain, and some cannot track their critical technology providers at all. Organizations still underestimate how dependent they are on third parties and continue to rely on paper-based continuity plans that offer a false sense of security. ... More companies now have a chief risk officer, but funding for technology and tools has barely moved. Most risk leaders say their budgets have stayed the same even as they are asked to cover more ground. Many are turning to automation and specialized software to do more with what they already have.


Boardroom to War Room: Translating AI-Driven Cyber Risk into Action

Great CISOs today combine strategic leadership, financial knowledge, technological skills, and empathy to turn cybersecurity from a burden on operations into a strong enabler. This change happens faster with artificial intelligence. AI has a lot of potential, but it also makes things more uncertain. It can do things like forecast threats and automate orchestration. CISOs need to see AI problems as more than just technological problems; they need to see them as business risks that need clear communication, openness, and quick response. ... Not storytelling, but data and graphics win over executives. Suggested metrics include: Predictive accuracy - The percentage of risks that AI flagged before a breach compared to the percentage of threats that AI flagged after it happened; Speed of reaction - The average time it took for AI-enabled confinement to work compared to manual reaction; False positive rate - Tech teams employed AI to improve alerts and cut down on alert fatigue from X to Y; Third-party model risk - The number of outside model calls that were looked at and accepted; Visual callout suggestion - A mock-up of a dashboard that illustrates AI risk KPIs, a trendline of predictive value, and a drop in incidences. ... Change from being an IT responder who reacts to problems to a strategic AI-enabled risk leader. Take ownership of your AI risk story, keep an eye on third-party models, provide your board clear information, and make sure your war room functions quickly.


Govt. faces questions about why US AWS outage disrupted UK tax office and banking firms

“The narrative of bigger is better and biggest is best has been shown for the lie it always has been,” Owen Sayers, an independent security architect and data protection specialist with a long history of working in the public sector, told Computer Weekly. “The proponents of hyperscale cloud will always say they have the best engineers, the most staff and the greatest pool of resources, but bigger is not always better – and certainly not when countries rely on those commodity global services for their own national security, safety and operations. “Nationally important services must be recognised as best delivered under national control, and as a minimum, the government should be knocking on AWS’s door today and asking if they can in fact deliver a service that guarantees UK uptime,” he said. “Because the evidence from this week’s outage suggests that they cannot.” ... “In light of today’s major outage at Amazon Web Services … why has HM Treasury not designated Amazon Web Services or any other major technology firm as a CTP for the purposes of the Critical Third Parties Regime,” asked Hillier, in the letter. “[And] how soon can we expect firms to be brought into this regime?” Hillier also asked HM Treasury for clarification about whether or not it is concerned about the fact that “seemingly key parts of our IT infrastructure are hosted abroad” given the outage originated from a US-based AWS datacentre region but impacted the activities of Lloyds Bank and also HMRC.


Quantum work, federated learning and privacy: Emerging frontiers in blockchain research

It is possible to have a future in which the field of quantum computation could serve as the foundation for blockchain consensus. The future is alluring; quantum algorithms can provide solutions to the issues that classical computers find difficult and the method may be more effective and resistant to brute-force attacks. The danger, however, is significant: when quantum computers are sufficiently robust, existing encryption standards can be compromised. ... Federated learning is another upcoming element of blockchain studies, a machine learning model training technique that avoids data centralisation. Federated learning enables various devices or nodes to feed into a standard model instead of storing sensitive data in a central server inaccessible to third parties. ... The issue of privacy is of specific importance today due to the increased regulatory pressure on exchanges and cryptocurrency companies. A compromise between user privacy and regulatory openness could prove to be the key to success. Studies of privacy-saving instruments provide a competitive advantage to blockchain developers and for exchanges interested in increasing their influence on the global economy. ... The decade of blockchain research to come will not be characterised by fast transactions or cheaper costs. It will redraw the borders of trust, calculation, and privacy in digitally based economies. 


Ransomware groups surge as automation cuts attack time to 18 mins

The ransomware group LockBit has recently introduced "LockBit 5.0", reportedly incorporating artificial intelligence for attack randomisation and enhanced targeting options, with a focus on regaining its previous position atop the ransomware ecosystem. Medusa, by contrast, was noted to have fallen behind due in part to lacking widespread automated and customisable features, despite previous activity levels. ReliaQuest's analysis predicts the rise of new groups through the lens of its three-factor model, specifically naming "The Gentlemen" and "DragonForce" as likely to become major threats due to their adoption of advanced technical capabilities. The Gentlemen, for instance, has listed over 30 victims on its data-leak site within its first month of activity, underpinned by automation, prioritised encryption, and endpoint discovery for rapid lateral movement. Conversely, groups such as "Chaos" and "Nova" are likely to remain minor players, lacking the integral features associated with higher victim counts and affiliate recruitment. ... RaaS groups now use automation to reduce breakout times to as little as 18 minutes, making manual intervention too slow. Implement automated containment and response plays to keep pace with attackers. These workflows should automatically isolate hosts, block malicious files, and disable compromised accounts quickly after a critical detection, containing the threat before ransomware can be deployed.

Daily Tech Digest - July 02, 2025


Quote for the day:

"Success is not the absence of failure; it's the persistence through failure." -- Aisha Tyle


How cybersecurity leaders can defend against the spur of AI-driven NHI

Many companies don’t have lifecycle management for all their machine identities and security teams may be reluctant to shut down old accounts because doing so might break critical business processes. ... Access-management systems that provide one-time-use credentials to be used exactly when they are needed are cumbersome to set up. And some systems come with default logins like “admin” that are never changed. ... AI agents are the next step in the evolution of generative AI. Unlike chatbots, which only work with company data when provided by a user or an augmented prompt, agents are typically more autonomous, and can go out and find needed information on their own. This means that they need access to enterprise systems, at a level that would allow them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly “it opens up the door to a lot of bad things to happen.” Because of their ability to plan, reason, act, and learn AI agents can exhibit unpredictable and emergent behaviors. An AI agent that’s been instructed to accomplish a particular goal might find a way to do it in an unanticipated way, and with unanticipated consequences. This risk is magnified even further, with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes. 


The silent backbone of 5G & beyond: How network APIs are powering the future of connectivity

Network APIs are fueling a transformation by making telecom networks programmable and monetisable platforms that accelerate innovation, improve customer experiences, and open new revenue streams.  ... Contextual intelligence is what makes these new-generation APIs so attractive. Your needs change significantly depending on whether you’re playing a cloud game, streaming a match, or participating in a remote meeting. Programmable networks can now detect these needs and adjust dynamically. Take the example of a user streaming a football match. With network APIs, a telecom operator can offer temporary bandwidth boosts just for the game’s duration. Once it ends, the network automatically reverts to the user’s standard plan—no friction, no intervention. ... Programmable networks are expected to have the greatest impact in Industry 4.0, which goes beyond consumer applications. ... 5G combined IOT and with network APIs enables industrial systems to become truly connected and intelligent. Remote monitoring of manufacturing equipment allows for real-time maintenance schedule adjustments based on machine behavior. Over a programmable, secure network, an API-triggered alert can coordinate a remote diagnostic session and even start remedial actions if a fault is found.


Quantum Computers Just Reached the Holy Grail – No Assumptions, No Limits

A breakthrough led by Daniel Lidar, a professor of engineering at USC and an expert in quantum error correction, has pushed quantum computing past a key milestone. Working with researchers from USC and Johns Hopkins, Lidar’s team demonstrated a powerful exponential speedup using two of IBM’s 127-qubit Eagle quantum processors — all operated remotely through the cloud. Their results were published in the prestigious journal Physical Review X. “There have previously been demonstrations of more modest types of speedups like a polynomial speedup, says Lidar, who is also the cofounder of Quantum Elements, Inc. “But an exponential speedup is the most dramatic type of speed up that we expect to see from quantum computers.” ... What makes a speedup “unconditional,” Lidar explains, is that it doesn’t rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of “Simon’s problem,” an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.


4 things that make an AI strategy work in the short and long term

Most AI gains came from embedding tools like Microsoft Copilot, GitHub Copilot, and OpenAI APIs into existing workflows. Aviad Almagor, VP of technology innovation at tech company Trimble, also notes that more than 90% of Trimble engineers use Github Copilot. The ROI, he says, is evident in shorter development cycles, and reduced friction in HR and customer service. Moreover, Trimble has introduced AI into their transportation management system, where AI agents optimize freight procurement by dynamically matching shippers and carriers. ... While analysts often lament the difficulty of showing short-term ROI for AI projects, these four organizations disagree — at least in part. Their secret: flexible thinking and diverse metrics. They view ROI not only as dollars saved or earned, but also as time saved, satisfaction increased, and strategic flexibility gained. London says that Upwave listens for customer signals like positive feedback, contract renewals, and increased engagement with AI-generated content. Given the low cost of implementing prebuilt AI models, even modest wins yield high returns. For example, if a customer cites an AI-generated feature as a reason to renew or expand their contract, that’s taken as a strong ROI indicator. Trimble uses lifecycle metrics in engineering and operations. For instance, one customer used Trimble AI tools to reduce the time it took to perform a tunnel safety analysis from 30 minutes to just three.


How IT Leaders Can Rise to a CIO or Other C-level Position

For any IT professional who aspires to become a CIO, the key is to start thinking like a business leader, not just a technologist, says Antony Marceles, a technology consultant and founder of software staffing firm Pumex. "This means taking every opportunity to understand the why behind the technology, how it impacts revenue, operations, and customer experience," he explained in an email. The most successful tech leaders aren't necessarily great technical experts, but they possess the ability to translate tech speak into business strategy, Marceles says, adding that "Volunteering for cross-functional projects and asking to sit in on executive discussions can give you that perspective." ... CIOs rarely have solo success stories; they're built up by the teams around them, Marceles says. "Colleagues can support a future CIO by giving honest feedback, nominating them for opportunities, and looping them into strategic conversations." Networking also plays a pivotal role in career advancement, not just for exposure, but for learning how other organizations approach IT leadership, he adds. Don't underestimate the power of having an executive sponsor, someone who can speak to your capabilities when you’re not there to speak for yourself, Eidem says. "The combination of delivering value and having someone champion that value -- that's what creates real upward momentum."


SLMs vs. LLMs: Efficiency and adaptability take centre stage

SLMs are becoming central to Agentic AI systems due to their inherent efficiency and adaptability. Agentic AI systems typically involve multiple autonomous agents that collaborate on complex, multi-step tasks and interact with environments. Fine-tuning methods like Reinforcement Learning (RL) effectively imbue SLMs with task-specific knowledge and external tool-use capabilities, which are crucial for agentic operations. This enables SLMs to be efficiently deployed for real-time interactions and adaptive workflow automation, overcoming the prohibitive costs and latency often associated with larger models in agentic contexts. ... Operating entirely on-premises ensures that decisions are made instantly at the data source, eliminating network delays and safeguarding sensitive information. This enables timely interpretation of equipment alerts, detection of inventory issues, and real-time workflow adjustments, supporting faster and more secure enterprise operations. SLMs also enable real-time reasoning and decision-making through advanced fine-tuning, especially Reinforcement Learning. RL allows SLMs to learn from verifiable rewards, teaching them to reason through complex problems, choose optimal paths, and effectively use external tools. 


Quantum’s quandary: racing toward reality or stuck in hyperbole?

One important reason is for researchers to demonstrate their advances and show that they are adding value. Quantum computing research requires significant expenditure, and the return on investment will be substantial if a quantum computer can solve problems previously deemed unsolvable. However, this return is not assured, nor is the timeframe for when a useful quantum computer might be achievable. To continue to receive funding and backing for what ultimately is a gamble, researchers need to show progress — to their bosses, investors, and stakeholders. ... As soon as such announcements are made, scientists and researchers scrutinize them for weaknesses and hyperbole. The benchmarks used for these tests are subject to immense debate, with many critics arguing that the computations are not practical problems or that success in one problem does not imply broader applicability. In Microsoft’s case, a lack of peer-reviewed data means there is uncertainty about whether the Majorana particle even exists beyond theory. The scientific method encourages debate and repetition, with the aim of reaching a consensus on what is true. However, in quantum computing, marketing hype and the need to demonstrate advancement take priority over the verification of claims, making it difficult to place these announcements in the context of the bigger picture.


Ethical AI for Product Owners and Product Managers

As the product and customer information steward, the PO/PM must lead the process of protecting sensitive data. The Product Backlog often contains confidential customer feedback, competitive analysis, and strategic plans that cannot be exposed. This guardrail requires establishing clear protocols for what data can be shared with AI tools. A practical first step is to lead the team in a data classification exercise, categorizing information as Public, Internal, or Restricted. Any data classified for internal use, such as direct customer quotes, must be anonymized before being used in an AI prompt. ... AI is proficient at generating text but possesses no real-world experience, empathy, or strategic insight. This guardrail involves proactively defining the unique, high-value work that AI can assist but never replace. Product leaders should clearly delineate between AI-optimal tasks, creating first drafts of technical user stories, summarizing feedback themes, or checking for consistency across Product Backlog items and PO/PM-essential areas. These human-centric responsibilities include building genuine empathy through stakeholder interviews, making difficult strategic prioritization trade-offs, negotiating scope, resolving conflicting stakeholder needs, and communicating the product vision. By modeling this partnership and using AI as an assistant to prepare for strategic work, the PO/PM reinforces that their core value lies in strategy, relationships, and empathy.


Sharded vs. Distributed: The Math Behind Resilience and High Availability

In probability theory, independent events are events whose outcomes do not affect each other. For example, when throwing four dice, the number displayed on each dice is independent of the other three dice. Similarly, the availability of each server in a six-node application-sharded cluster is independent of the others. This means that each server has an individual probability of being available or unavailable, and the failure of one server is not affected by the failure or otherwise of other servers in the cluster. In reality, there may be shared resources or shared infrastructure that links the availability of one server to another. In mathematical terms, this means that the events are dependent. However, we consider the probability of these types of failures to be low, and therefore, we do not take them into account in this analysis.  ... Traditional architectures are limited by single-node failure risk. Application-level sharding compounds this problem because if any node goes down, its shard and therefore the total system becomes unavailable. In contrast, distributed databases with quorum-based consensus (like YugabyteDB) provide fault tolerance and scalability, enabling higher resilience and improved availability.


How FinTechs are turning GRC into a strategic enabler

The misconception that risk management and innovation exist in tension is one that modern FinTechs must move beyond. At its core, cybersecurity – when thoughtfully integrated – serves not as a brake but as an enabler of innovation. The key is to design governance structures that are both intelligent and adaptive (and resilient in itself). The foundation lies in aligning cybersecurity risk management with the broader business objective: enablement. This means integrating security thinking early in the innovation cycle, using standardized interfaces, expectations, and frameworks that don’t obstruct, but rather channel innovation safely. For instance, when risk statements are defined consistently across teams, decisions can be made faster and with greater confidence. Critically, it starts with the threat model. A well-defined, enterprise-level threat model is the compass that guides risk assessments and controls where they matter most. Yet many companies still operate without a clear articulation of their own threat landscape, leaving their enterprise risk strategies untethered from reality. Without this grounding, risk management becomes either overly cautious or blindly permissive, or a bit of both. We place a strong emphasis on bridging the traditional silos between GRC, IT Security, Red Teaming, and Operational teams.

Daily Tech Digest - May 26, 2025


Quote for the day:

“Don't blow off another's candle for it won't make yours shine brighter.” -- Jaachynma N.E. Agu



Beyond single-model AI: How architectural design drives reliable multi-agent orchestration

It’s no longer just about building a single, super-smart model. The real power, and the exciting frontier, lies in getting multiple specialized AI agents to work together. Think of them as a team of expert colleagues, each with their own skills — one analyzes data, another interacts with customers, a third manages logistics, and so on. Getting this team to collaborate seamlessly, as envisioned by various industry discussions and enabled by modern platforms, is where the magic happens. But let’s be real: Coordinating a bunch of independent, sometimes quirky, AI agents is hard. It’s not just building cool individual agents; it’s the messy middle bit — the orchestration — that can make or break the system. When you have agents that are relying on each other, acting asynchronously and potentially failing independently, you’re not just building software; you’re conducting a complex orchestra. This is where solid architectural blueprints come in. We need patterns designed for reliability and scale right from the start. ... For agents to collaborate effectively, they often need a shared view of the world, or at least the parts relevant to their task. This could be the current status of a customer order, a shared knowledge base of product information or the collective progress towards a goal. Keeping this “collective brain” consistent and accessible across distributed agents is tough. 


Unstructured Data Management Tips

"Unlike traditional databases, which define the schema -- the data's structure -- before it's stored, schema-on-read defers this process until the data is actually read or queried," says Kamal Hathi, senior vice president and general manager of machine-generated data monitoring and analysis software firm at Splunk, a Cisco company. This approach is particularly effective for unstructured and semi-structured data, where the schema is not predefined or rigid, Hathi says. "Traditional databases require a predefined schema, which makes working with unstructured data challenging and less flexible." ... Manage unstructured data by integrating it with structured data in a cloud environment using metadata tagging and AI-driven classifications, suggests Cam Ogden, a senior vice president at data integrity firm Precisely. "Traditionally, structured data -- like customer databases or financial records -- reside in well-organized systems such as relational databases or data warehouses," he says. However, to fully leverage all of their data, organizations need to break down the silos that separate structured data from other forms of data, including unstructured data such as text, images, or log files. This is where the cloud comes into play. Integrating structured and unstructured data in the cloud allows for more comprehensive analytics, enabling organizations to extract deeper insights from previously siloed information, Ogden says. 


Why IT Certifications Are Now the Hottest Currency in Tech

The reasons are manifold. Inflation has eroded buying power, traditional merit-based raises have declined, bonuses are scarcer and 2024 saw a sharp uptick in layoffs - particularly targeting middle management and older professionals. Unlike the "Great Resignation" of 2021, professionals today are staying put - not from loyalty but from caution, and upskilling is the key to ensure their longevity. Faced with a precarious job market and declining benefits, many IT employees are opting for stability and doubling down on internal mobility. According to the Pearson VUE's 2025 Value of IT Certification Candidate Report, more than 80% of the respondents who hold at least one certification said it enhanced their ability to innovate and 70% said they experienced greater autonomy at workplace. Even in regions where pay bumps are smaller, the career mobility afforded by certifications is prevalent. In India, for instance, CloudThat's IT Certifications and Salary Survey found that Microsoft-certified professionals earn an average entry salary of $10,900, with 60% of certified workers reporting pay hikes. "The increased value in certifications underscore their critical role in equipping professionals with the skills needed to excel and advance in their roles. As the industry continues to grow, certifications are becoming essential to stand out and meet the demand for specialized skills," said Bhavesh Goswami, founder and CEO of CloudThat.


Speed and scalability redefine the future of modern banking

To expedite digitalisation, global policymakers are introducing regulations such as India’s Digital Banking Units (DBUs), the EU’s PSD2/PSD3 directives, and the GCC’s open finance guidelines. The growth in non-bank financial intermediaries (NBFIs), which has been both more intricate and wider in scope, in the most recent years, obliges the employing of more effective compliance frameworks and the introduction of better risk management strategies. ... Integrating banking directly into non-financial platforms such as e-commerce is on the rise. Based on a report by Grand View Research, the global Banking-as-a-Service (BaaS) market is expected to reach USD 66 billion by 2030. Retailers increasingly partner with banks for instant, personalised offers and payments via identity beacons, enhancing customer experiences through Gen AI-supported interactions. For example, real-time data analytics and machine learning models are now essential for personalised financial services. Reimagined branch visits are becoming an emerging trend, with branches shifting to high-footfall locations like malls. The store-like experience includes personalised offers and decision aids, including immediate approval for flexible loans, made possible by customer identification based on consent.


5 questions to test tech resilience and build a 90-day action plan

The convergence of AI with existing systems has brought technical debt into sharp focus. While AI, and agentic AI in particular, presents transformative opportunities, it also exposes the limitations of legacy systems and architectural decisions made in the past. It’s essential to balance the excitement of AI adoption with the pragmatic need to address underlying technical debt, as we explored in our recent research. ... While AI enthusiasm runs high, successful implementation requires careful focus on use cases that deliver tangible business value. CIOs must lead their organizations in identifying and executing AI initiatives that drive meaningful outcomes. That means defining AI programs with an holistic, end-to-end vision of how they’ll deliver value for your business. And it means taking a platform approach, as opposed to numerous isolated PoCs. ... The traditional boundaries of IT are dissolving. With technology now fundamentally driving business strategy, CIOs must lead the evolution from an IT operating model to a new business technology operating model. Recent data shows organizations that have embraced this transformation achieved 15% higher top-line performance compared to their peers, with potential for this gap to double by next year.


LlamaFirewall: Open-source framework to detect and mitigate AI centric security risks

One particularly concerning area is the use of LLMs in coding applications. “Coding agents that rely on LLM-generated code may inadvertently introduce security vulnerabilities into production systems,” Chennabasappa warned. “Misaligned multi-step reasoning can also cause agents to perform operations that stray far beyond the user’s original intent.” These types of risks are already surfacing in coding copilots and autonomous research agents, she added, and are only likely to grow as agentic systems become more common. Yet while LLMs are being embedded deeper into mission-critical workflows, the surrounding security infrastructure hasn’t kept pace. “Security infrastructure for LLM-based systems is still in its infancy,” Chennabasappa said. “So far, the industry’s focus has been mostly limited to content moderation guardrails meant to prevent chatbots from generating misinformation or abusive content.” That approach, she argued, is far too narrow. It overlooks deeper, more systemic threats like prompt injection, insecure code generation, and abuse of code interpreter capabilities. Even proprietary safety systems that hardcode rules into model inference APIs fall short, according to Chennabasappa, because they lack the transparency, auditability, and flexibility needed to secure increasingly complex AI applications.


Navigating Double and Triple Extortion Tactics

In double extortion attacks, a second layer is added: attackers, having gained access to the system, exfiltrate sensitive and valuable data. This not only deepens the victim’s vulnerability but also increases pressure, as attackers now hold both encrypted files and stolen information, which they can use as leverage for further demands. The threat of double extortion becomes more severe as it combines operational disruption (due to encrypted data and downtime) with the risk of public exposure. Organizations unable to access their data face halted services, financial loss, and reputational damage. ... Triple extortion expands upon traditional and double extortion ransomware tactics by introducing a third layer of pressure. The attack begins with data encryption and exfiltration, similar to the double extortion model—locking the victim out of their data while simultaneously stealing sensitive information. This stolen data gives attackers multiple avenues to exploit the victim, who is left with no control over its fate. The third stage involves third-party extortion. After collecting data from the primary victim, attackers identify and target affiliated parties, such as partners, clients, and stakeholders, whose information was also compromised. 


The 7 unwritten rules of leading through crisis

Your first move shouldn’t be panic-fixing everything in silence, Young says. “You need to let people know what’s going on, including your team, your leadership, and sometimes even your customers.” Keeping everyone in the loop calms nerves and builds trust. Silence makes everything worse, Young warns. ... Confusion is contagious. “Providing clarity about what’s known, what matters, and what you’re aiming for, stabilizes people and systems,” says Leila Rao, a workplace and executive coaching consultant. “It sets the tone for proactivity instead of reactivity.” Simply treating symptoms will make the problem worse, Rao warns. “Misinformation spreads, trust erodes, and well-intentioned responses become counterproductive.” Crisis is complexity on steroids, Rao observes. “When we center people, welcome multiple perspectives, and make space for emergence, we move from crisis management to collective learning.” ... You can’t hide from a crisis, and attempting to do so only compounds the damage, Hasmukh warns. “Clear visibility into what happened allows you to respond effectively and maintain stakeholder trust during challenging times.” Organizations that delay acknowledging issues inevitably face greater scrutiny and damage than those that address situations head-on.


BYOD like it’s 2025

The data is clear that there can be significant gains in productivity attached to BYOD. Samsung estimates that workers using their own devices can gain about an hour of productive worktime per day and Cybersecurity Insiders says that 68% of businesses see some degree of productivity increases. Although the gains are significant, personal devices can also distract workers more than company-owned devices, with personal notifications, social media accounts, news, and games being the major time-sink culprits. This has the potential to be a real issue, as these apps can become addictive and their use compulsive. ... One challenge for BYOD has always been user support and education. With two generations of digital natives now comprsing more than half the workforce, support and education needs have changed. Both millennials and Gen Z have grown up with the internet and mobile devices, which makes them more comfortable making technology decisions and troubleshooting problems than baby boomers and Gen X. This doesn’t mean that they don’t need tech support, but they do tend to need less hand-holding and don’t instinctively reach for the phone to access that support. Thus, there’s an ongoing shift to self-support resources and other, less time-intensive, models with text chat being the most common — be it with a person or a bot.


You have seen the warnings: your next IT outage could be worse

In-band management uses the same data path as production traffic to manage the customer environment, while logically isolating management traffic from production data. Although this approach can be more cost-effective, it introduces certain risks. If a problem occurs with the production network, it can also disrupt management access to the infrastructure, a situation referred to as “fate sharing.” In these cases, the only viable solution may be to send an engineer onsite to diagnose and resolve the issue. This can result in significant costs and delays, potentially impacting the customer’s business operations. Out-of-band management, on the other hand, uses a separate network to provide independent access for managing the infrastructure, completely isolating management traffic from the production network. This separation is crucial during major disruptions like provider outages or security breaches, as it guarantees continuous access to network devices and servers, even if the primary production network is down or compromised. ... A secure connection links this cloud infrastructure to the customer’s on-premises IT setup, usually through a dedicated private network connection, SD-WAN, or an IPSEC VPN. This connection typically terminates at an on-premises router or firewall, safeguarding access to the out-of-band management network.