Daily Tech Digest - November 28, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain



Security researchers caution app developers about risks in using Google Antigravity

“In Antigravity,” Mindgard argues, “‘trust’ is effectively the entry point to the product rather than a conferral of privileges.” The problem, it pointed out, is that a compromised workspace becomes a long-term backdoor into every new session. “Even after a complete uninstall and re-install of Antigravity,” says Mindgard, “the backdoor remains in effect. Because Antigravity’s core intended design requires trusted workspace access, the vulnerability translates into cross-workspace risk, meaning one tainted workspace can impact all subsequent usage of Antigravity regardless of trust settings.” For anyone responsible for AI cybersecurity, says Mindgard, this highlights the need to treat AI development environments as sensitive infrastructure, and to closely control what content, files, and configurations are allowed into them. ... Swanda recommends that app development teams building AI agents with tool-calling assume all external content is adversarial; use strong input and output guardrails, including around tool calling; strip any special syntax before processing; implement tool execution safeguards; require explicit user approval for high-risk operations, especially those triggered after handling untrusted content or involving dangerous tool combinations; and never rely on prompts for security. System prompts, for example, can be extracted and used by an attacker to shape their attack strategy.
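The guardrails Swanda describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not Antigravity's or any real framework's API: the tool names, the sanitizer patterns, and the approval callback are all assumptions.

```python
import re

# Hypothetical tool-dispatch layer illustrating the advice above: treat all
# external content as adversarial, strip special syntax before processing,
# and gate high-risk tools behind explicit user approval.

HIGH_RISK_TOOLS = {"run_shell", "write_file", "send_email"}  # assumed names

def sanitize_untrusted(text: str) -> str:
    """Strip prompt-injection-style special syntax from external content."""
    text = re.sub(r"<\|.*?\|>", "", text)                      # control tokens
    text = re.sub(r"(?im)^\s*(system|assistant):", "", text)   # role spoofing
    return text

def dispatch_tool(tool_name: str, args: dict, *, content_was_untrusted: bool,
                  approve) -> str:
    """Require explicit approval for high-risk calls or untrusted context."""
    if tool_name in HIGH_RISK_TOOLS or content_was_untrusted:
        if not approve(tool_name, args):
            return "blocked: user approval denied"
    return f"executed {tool_name}"
```

The key design point is that the approval gate runs in the dispatch layer, outside the model, so it cannot be talked out of its decision by injected instructions.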


How AI Is Rewriting The Rules Of Work, Leadership, And Human Potential

When a CEO tells his team, "AI is coming for your jobs, even mine," you pay attention. It is rare to hear that level of blunt honesty from any leader, let alone the head of one of the world's largest freelance platforms. Yet this is exactly how Fiverr co-founder and CEO Micha Kaufman has chosen to guide his company through the most significant technological shift of our lifetimes. His blunt assessment: AI is coming for everyone's jobs, and the only response is to get faster, more curious, and fundamentally better at being human. ... We're applying AI to existing workflows and platforms, seeing improvements, but not yet experiencing the fundamental restructuring that's coming. "It is mostly replacing the things we used to do as human beings, acting as robots," Kaufman observes. The repetitive tasks, the research gathering, the document summarizing, these elements where humans brought judgment but little humanity are being automated first. ... It's not enough to use the obvious AI tools in obvious ways. The real value emerges from those who push boundaries, combine systems creatively, or bring exceptional judgment to AI-assisted workflows. Kaufman points to viral videos created with advanced AI tools, noting that their quality stems not from the AI itself but from the operator's genius, experience, creativity, and taste developed over years.


How ‘digital twins’ could help prevent cyber-attacks on the food industry

A digital twin is a virtual replica of any product, process, or service, capturing its state, characteristics, and connections with other systems throughout its life cycle. The digital twin will include the computer system used by the company. It can help because conventional defences are increasingly out of step with cyber-attacks. Monitoring tools tend to detect anomalies after damage occurs. Complex computer systems can often obscure the origins of breaches. A digital twin creates a bridge between the physical and digital worlds. It allows organisations to simulate real-time events, predict what might happen next, and safely test potential responses. It can also help analyse what happened after a cyber-attack to help companies prepare for future incidents. ... A digital twin might be able to avert disaster under this scenario. By combining operational data such as temperature, humidity, or the speed of air flow with internal computing system data or intrusion attempts, digital twins offer a unified view of both system performance and cybersecurity. They enable organisations to simulate cyber-attacks or equipment failures in a safe, controlled digital environment, revealing vulnerabilities before attackers can exploit them. A digital twin can also detect abnormal temperature patterns, monitor the system for malicious activity, and perform analysis after a cyber-attack to identify the causes.
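As a toy illustration of the unified view described above, one might fuse operational telemetry with security signals and flag readings that are both statistical outliers and coincide with intrusion attempts. The threshold, data shapes, and function name below are assumptions for illustration, not taken from the article.

```python
from statistics import mean, stdev

# Sketch of signal fusion in a digital twin: flag sensor readings that
# deviate sharply from the baseline AND coincide with intrusion activity
# observed in the same window -- a pattern worth investigating.

def detect_anomalies(temps, intrusion_attempts, z_threshold=3.0):
    """Return indices where a temperature outlier and intrusion activity
    occur together; either signal alone is treated as noise here."""
    mu, sigma = mean(temps), stdev(temps)
    flagged = []
    for i, (t, attempts) in enumerate(zip(temps, intrusion_attempts)):
        z = abs(t - mu) / sigma if sigma else 0.0
        if z > z_threshold and attempts > 0:
            flagged.append(i)
    return flagged
```

A real deployment would of course use the twin's physics model rather than a plain z-score, but the fusion idea is the same: cross-reference operational and security telemetry instead of inspecting each stream in isolation.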


Why password management defines PCI DSS success

When you dig into real incidents involving payment data, a surprising number come down to poor password hygiene. PCI DSS v4.0 raised the bar for authentication, and the responsibility sits with security leaders to turn those requirements into workable daily habits for users and admins. ... Requirement 8 asks organizations to verify the identity of every user with strong authentication, make sure passwords and passphrases meet defined strength rules, prevent credential reuse, limit attempts, and store credentials securely. Passwords need to be at least 12 characters long, or at least 8 characters when a system cannot support longer strings. These rules line up with guidance from NIST SP 800-63B, which recommends longer passphrases, resistance against common word lists, and hashing methods that protect stored secrets. ... PCI DSS requires that access be traceable to an individual and that shared accounts be minimized and controlled. When passwords live across multiple channels, it becomes nearly impossible to show auditors reliable evidence of access history. Even if the team is trying hard, the workflow itself creates gaps that no policy document can fix. ... Some CISOs view password managers as convenience tools. PCI DSS v4.0 shows that they are closer to compliance tools because they make it possible to enforce identity controls across an organization.
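The Requirement 8 rules summarized above lend themselves to a small sketch. The blocklist contents, iteration count, and function names below are illustrative assumptions, not the PCI DSS or NIST text itself.

```python
import hashlib
import os

# Illustrative password policy check: minimum length of 12 characters
# (8 on systems that cannot support longer strings), a common-word
# blocklist in the spirit of NIST SP 800-63B, and salted slow hashing
# so only a derived key -- never the password -- is stored.

COMMON_PASSWORDS = {"password", "123456789012", "qwertyuiop12"}  # assumed list

def meets_policy(password: str, system_max_supported: int = 128) -> bool:
    min_len = 12 if system_max_supported >= 12 else 8
    return len(password) >= min_len and password.lower() not in COMMON_PASSWORDS

def hash_for_storage(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key); store both, never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```

In practice a dedicated password-hashing scheme such as bcrypt or Argon2 would be preferable; PBKDF2 is used here only because it ships in the standard library.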



AI fluency in the enterprise: Still a ‘horseless carriage’

Companies are tossing AI agents onto existing processes, but a transformative change — where AI is the boss — is still far away. That was the view of IT leaders at this year’s Microsoft Ignite conference who’ve been putting AI agents to work, mostly with legacy processes. The IT leaders discussed their efforts during a conference panel at the event earlier this month. “We’re probably living in some version of the horseless carriage — we haven’t got to the car yet,” said John Whittaker, director of AI platform and products at accounting and consulting firm EY. ... Pfizer is very process-centric, he said, stressing that the goal is not to reinvent processes right out of the gate. The company is analyzing how AI works for them, gaining confidence in the technology before reorganizing processes within the AI lens. “Where we’re definitely heading … is thinking about, ‘I’ve solved this process, I’ve been following exactly the way it exists today. Now let’s blow it up and reimagine it…’ — and that’s exciting,” he said. ... Lumen is now looking at where it wants the business to be in 36 months and linking it to AI agents and AI-native plans. “We’re … working back from that and ensuring that we have the right set of tools, the right set of training, and the right set of agents in order to enable that,” he said. Every new Lumen employee in Alexander’s connected ecosystem group gets a Copilot license. The technology has helped speed up the process of understanding acronyms and historical trends within the company.


Creating Impactful Software Teams That Continuously Improve

When you are a person who prefers your job to be strictly defined, with clear boundaries, then you feel supported instead of stifled by a boss who checks in on you regularly. In the same culture, you will feel relaxed, happy, and content, which will in turn allow you to bring your best to your job and deliver to your strengths, Žabkar Nordberg said. You do not want to have employees who will be extensions of yourself, Žabkar Nordberg said. Instead, you want people who will bring their own thoughts, their own solutions, and in many ways be different and better than yourself. ... Provide guidance, step away, and let people have autonomy within those constraints. You might say something like "I would like you to focus on improving our customer retention. Be aware that legal regulations require all steps in our current onboarding journey to be present, but we have flexibility in how we execute them as the user experience is not prescribed". This gives people guidance and focuses them, but still gives them the autonomy to bring their own experiences and find their own solutions. ... We want people to show initiative and proactively bring their own thoughts, improvements, and worries. Clear communication and an understanding of how people work will help them do that, Žabkar Nordberg said. Psychological safety underpins trust, autonomy, and communication; it is required for them to work effectively, he concluded.


Empathetic policy engineering: The secret to better security behavior and awareness

Insecure behavior is often blamed on users, when the problem frequently lies in the measure itself. In IT security research, the focus is often on individual user behavior — for example, on whether secure behavior depends on personality traits. The question of how well security measures actually fit the reality of work — that is, how likely they are to be accepted in everyday practice — is neglected. For every threat, there are usually several available security measures. But differences in effort, acceptance, compatibility, or complexity are often not taken into account in practice. Instead, security or IT departments often make decisions based solely on technical aspects. ... Security measures and guidelines are often communicated in a way that doesn’t resonate with users’ work reality because they don’t aim to engage employees and motivate them: for example, through instructions, standard online training, or overly playful formats like comics that employees don’t take seriously. ... The limited success of many security measures is not solely due to the users — often it’s unrealistic requirements, a lack of involvement, and inadequate communication. For security leaders, this means: Instead of relying on education and sanctions, a strategic paradigm shift is needed. They should become a kind of empathetic policy architect whose security strategy not only works technically but also resonates on a human level.


Agentic AI is not ‘more AI’—it’s a new way of running the enterprise

Agentic AI marks a shift from simply predicting outcomes or offering recommendations to systems that can plan tasks, take actions and learn from the results within defined guardrails. In practical terms, this means moving beyond isolated, single-task copilots towards coordinated “swarms” of agents that continually monitor signals, trigger workflows across systems, negotiate constraints and complete loops with measurable outcomes. ... A major barrier is trust and control. Leaders remain cautious about allowing software to take autonomous actions. Graduated autonomy provides a path forward: beginning with assistive tools, moving to supervised autonomy with reversible actions and eventually deploying narrow, fully autonomous loops when KPIs and rollback mechanisms have been validated. Lack of clarity on value is another obstacle. Impressive demonstrations do not constitute a strategy. Organisations should use a jobs-to-be-done perspective and tie each agent to a specific financial or risk objective, such as days-sales-outstanding, mean time to resolution, inventory turns or claims leakage. Analysts have warned that many agentic initiatives will be cancelled if value remains vague, so clear scorecards and time-boxed proofs of value are essential. Data readiness is a further challenge. Weak lineage, uncertain ownership and inconsistent quality stop AI scaling efforts in their tracks.
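The graduated-autonomy path described above can be expressed as a simple policy gate. The level names and checks below are hypothetical, sketching the progression from assistive tools to supervised, reversible actions and finally to narrow, validated autonomous loops.

```python
from enum import Enum

# Hedged illustration of graduated autonomy: every agent action is checked
# against its current autonomy level, and anything beyond that level falls
# back to human review.

class Autonomy(Enum):
    ASSISTIVE = 1    # agent suggests, a human acts
    SUPERVISED = 2   # agent acts, but only reversibly; a human can veto
    AUTONOMOUS = 3   # narrow loops with validated KPIs run unattended

def may_execute(level: Autonomy, action_reversible: bool,
                kpi_validated: bool) -> str:
    """Decide whether the agent executes, suggests, or escalates."""
    if level is Autonomy.ASSISTIVE:
        return "suggest-only"
    if level is Autonomy.SUPERVISED:
        return "execute" if action_reversible else "escalate-to-human"
    return "execute" if kpi_validated else "escalate-to-human"
```

The point of encoding the ladder explicitly is that promotion to a higher level becomes a deliberate, auditable configuration change rather than an implicit drift toward autonomy.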


6 strategies for CIOs to effectively manage shadow AI

“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.” ... “The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring. ... “Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. ... “Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories (what’s permitted, what needs approval, and what’s prohibited) and communicating that framework through leadership briefings, onboarding, and internal portals. ... Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.


It’s Time to Rethink Access Control for Modern Development Environments

When faced with the time-consuming complexity of managing granular permissions across dozens of development tools, most VPs of Engineering and CTOs opt for the path of least resistance, granting broad administrative privileges to entire engineering teams. It’s understandable from a productivity standpoint; nobody wants to be a bottleneck when a critical release is imminent, or explain to the CEO why they missed a market window because a developer couldn’t access a repository. However, when everyone has admin privileges, attackers who gain access to just one set of credentials can do tremendous damage. They gain not just access to sensitive code and data, but the ability to manipulate build processes, insert malicious code, or establish persistent backdoors. This problem becomes even more dangerous when combined with the prevalence of shadow IT, non-human identities, and contractor relationships operating outside your security perimeter. ... The answer to stronger security that doesn’t hinder developer productivity lies in implementing just-in-time permissioning within the SDLC, a concept successfully adopted from cloud infrastructure management that can transform how we handle development access controls. The approach is straightforward: instead of granting permanent administrative access to everyone, take 90 days to observe what developers actually need to do their jobs, then right-size their permissions accordingly. 
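Just-in-time permissioning, as described above, boils down to grants that are scoped and expire automatically, so nobody holds standing admin rights between tasks. Below is a minimal sketch with hypothetical names, using an in-memory store to stand in for a real access-management system.

```python
import time

# Sketch of just-in-time (JIT) permissioning: access is granted per task
# with a time-to-live, and checks fail closed once the grant expires.

class JITGrants:
    def __init__(self):
        self._grants = {}  # (user, permission) -> expiry timestamp

    def grant(self, user: str, permission: str, ttl_seconds: int) -> None:
        """Grant a scoped permission that expires after ttl_seconds."""
        self._grants[(user, permission)] = time.time() + ttl_seconds

    def is_allowed(self, user: str, permission: str) -> bool:
        """Fail closed: no grant, or an expired grant, means no access."""
        expiry = self._grants.get((user, permission))
        if expiry is None:
            return False
        if time.time() >= expiry:
            del self._grants[(user, permission)]  # expire lazily
            return False
        return True
```

The 90-day observation period the article recommends would feed this model: logged usage determines which permissions each developer actually requests, and everything else defaults to denied.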

Daily Tech Digest - November 27, 2025


Quote for the day:

“Let no feeling of discouragement prey upon you, and in the end you are sure to succeed.” -- Abraham Lincoln


The identity mess your customers feel before you do

Over half of organizations rely on developers who are not specialists in authentication. These teams juggle identity work alongside core product duties, which leads to slow progress, inconsistent implementation, and recurring defects. Decision makers admit that they underestimate the time developers spend on authentication. In many organizations, identity work drops down the backlog until a breach, an outage, or lost revenue forces renewed attention. Context switching is common. Developers move between authentication, compliance requirements, and product enhancements, which increases the likelihood of mistakes and slows delivery. ... Authentication issues undermine revenue as well as security. Organizations report that user dropoff during login, delays in engineering delivery, and abandoned transactions stem from outdated authentication flows. These issues rarely show up as a single budget line, but they accumulate into lost revenue and higher operating costs. ... Agentic AI is set to make customer identity more complicated. Automated activity will increase on every front, from routine actions taken on behalf of legitimate users to large scale attacks that target login and account creation flows. Security teams will face more traffic to evaluate and less certainty about what reflects user intent. Attackers will use AI to run high volumes of account takeover attempts and to create synthetic identities that blend in with normal behavior.


Bank of America's Blueprint for AI-Driven Banking

Over the past decade, Bank of America has invested more than $100 billion in technology. "Technology is a strategic enabler that now allows AI and automation to expand across every part of the organization, stretching from consumer services to capital markets," Bank of America CEO Brian Moynihan said. This focus on scale also shapes how the bank approaches gen AI. ... The bank's decade-long AI effort now supports 58 million interactions each month across customer support, transactions and informational requests. Erica has also become an internal platform. Erica for Employees has "reduced calls into the IT service desk by 50%," Bank of America said. This internal role matters because it shows how a consumer-grade AI system can evolve into an enterprise asset - one that assists with IT queries, operational troubleshooting and employee guidance across large distributed teams. ... The bank's CashPro Data Intelligence suite includes AI-driven search, forecasting and insights, and recently won the "Best Innovation in AI" award. These capabilities bring predictive analytics directly into the operational core of corporate treasury teams. By analyzing behavioral cash flows, transaction histories, seasonality and market data, the platform can generate forward-looking liquidity projections and actionable insights. For enterprises, this means fewer manual reconciliation cycles, improved liquidity planning and faster financial decision-making. 


Cybersecurity Is Now a Core Business Discipline

Cybersecurity is now a core business discipline, not an IT specialty. When a household name like Marks & Spencer can take a $400 million hit to trading profits after a major cyber incident, we’ve moved beyond “technology risk” into enterprise resilience. I often say the bad actors only need to get lucky once; defenders must be effective 24/7. That asymmetry won’t vanish. The job of leadership is to run with it; to accept the pace of the threat and build organizations that can withstand, respond, and keep moving. ... If bad actors only need to be lucky once, then your business must be designed to fail safely. That means strong identity controls, multi-factor authentication everywhere it makes sense, segmentation that limits lateral movement, and backups that are both tested and recoverable. None of this is glamorous. All of it is decisive. I’ve yet to meet a breached organization that regretted investing in the basics. Engineer for better human decisions. Traditional awareness training has diminishing returns if it’s divorced from real work. Replace generic modules with just-in-time prompts in the tools people actually use. Add controlled friction to high-risk workflows: payment changes, supplier onboarding, privileged access approvals. Normalize “pause and verify” by making it easy and expected. Culture is created by what gets rewarded and what gets made simple.


Building Your Work Digital Twin Starts With The Video You Already Have

This concept is far from new. We've already seen AI-generated assistants, virtual trainers and automated knowledge bases. But what separates a true digital twin from a chatbot or a script is the ability to capture how we communicate and not just what we say. That's where video—where tone, style, facial expression and more are clearly displayed—becomes invaluable. ... The idea of creating another you that actually delivers requires a concerted effort from both individuals and organizations. But it starts with centralizing and organizing the video content that already exists across departments, including training sessions, customer interactions, leadership updates and team calls. Assembling the video is just the start, as curating what matters is key. Prioritize videos that demonstrate clarity, professionalism and authenticity. ... As AI becomes more prevalent, authenticity, not automation, is emerging as a competitive differentiator. Customers, partners and employees still crave the sense of a real, trustworthy voice, and human digital twins give organizations a way to scale that presence. These are not fabricated influencers or AI puppets but extensions of real people, grounded in consent and context. Of course, this shift also demands ethical guardrails: clear usage boundaries, transparency about when digital twins are speaking and secure storage of identity data. When done responsibly, it can be a powerful evolution of human-machine collaboration that keeps people at the center.


AI adoption blueprint: Driving lasting enterprise value in India

The challenge employees face with AI adoption in Indian enterprises is not rooted in capability gaps or a lack of enthusiasm; it stems from insufficient contextual understanding. Organisational experience reveals that forcing users to move between disparate systems, craft their own prompts, or proactively seek AI assistance without much experience often results in digital friction, underutilisation, or complete abandonment. These challenges intensify across diverse workforces spanning multiple languages and regions. ... Building workforce confidence around AI remains a key hurdle given the uneven distribution of AI fluency across teams—even within digitally advanced Indian IT ecosystems. Overcoming this requires embedding just-in-time learning resources tailored to user roles and scenarios directly inside the applications employees use daily. Offering interactive onboarding, scenario-based microlearning, and guidance in multiple languages not only meets users where they are but respects the linguistic and cultural diversity that characterises India’s workplaces. This approach helps alleviate hesitation, foster trust, and accelerate AI fluency across complex organisations. ... Treating adoption as a continuous process that evolves alongside workflows, user requirements, and business priorities ensures AI continues to deliver value beyond launch phases, achieving sustainable scale.


A CIO’s 5-point checklist to drive positive AI ROI

“Start by assigning business ownership,” advises Srivastava. “Every AI use case needs an accountable leader with a target tied to objectives and key results.” He recommends standing up a cross-functional PMO to define lighthouse use cases, set success targets, enforce guardrails, and regularly communicate progress. Still, even with leadership in place, many employees will need hands-on guidance to apply AI in their daily work. ... CIOs should also view talent as a cornerstone of any AI strategy, adds CMIT’s Lopez. “By investing in people through training, communication, and new specialist roles, CIOs can be assured that employees will embrace AI tools and drive success.” He adds that internal hackathons and training sessions often yield noticeable boosts in skills and confidence. Upskilling, for instance, should meet employees where they are, so Asana’s Srivastava recommends tiered paths: all staff need basic prompt literacy and safety training, while power users require deeper workflow design and agent-building knowledge. ... The resounding point is to set metrics early on, and not fall into the anti-patterns of not tracking signals or value gained. “Measurement is often bolted on late, so leaders can’t prove value or decide what to scale,” says Srivastava. “The remedy is to begin with a specific mission metric, baseline it, and embed AI directly in the flow of work so people can focus on higher-value judgment.”


The coming storm for satellites

Although such events are uncommon, the list of dangers posed by space weather is daunting. In addition to atmospheric drag piercing LEO space, Earth’s radiation belt can be changed by the injection of high-energy electrons, plunging geostationary satellites at high elevations into deep-space conditions, unshielding them from the Earth’s magnetosphere. Even inside the relative protection of the planet’s orbits, radiation can damage electronics, charged particles from the sun can electrify the body of a spacecraft, potentially powering a discharge between two differently charged sections, and solar cells can be degraded faster during solar storms. A single space weather event can cause the same wear and tear as an entire year of normal operation. ... Nonetheless, the concern is “that a big solar event could disable a large number of satellites and cause a major increase in the collision risk, particularly in the very busy LEO orbit domain,” Machin says. “We need to ensure that such an event does not risk our ability to continue using space in the future. We need to always plan for space sustainability.” Machin alludes to the danger of Kessler Syndrome, a scenario in which debris density in low-Earth orbit becomes so great that the destruction of satellites and newly launched vehicles becomes probable, thereby multiplying debris density, resulting in unusable orbits, and trapping the human race on Earth for thousands of years.


How intelligent systems are evolving: Rob Green, CDO at Insight Enterprises

We operate on a zero-trust model and corresponding policies. An additional advantage of being a major Microsoft partner is that we received early access to ChatGPT, which we deployed internally as “InsightGPT.” We launched it early to develop AI capabilities within our services, solutions, and IT teams. We recognised the need for clear guidelines around AI usage and deployment. Our AI usage policies, first introduced two years ago, ensure employees understand how to implement and experiment with AI responsibly. These policies are continuously updated; our most recent revision was released three weeks ago. Regulatory and compliance requirements vary by region, and our policies are adapted accordingly. ... First, we ensure awareness and education across the organization. Not everyone needs to be an AI developer, but we want employees to be fluent with AI tools and understand how to use them productively. We recently launched the AI Flight Academy, which includes five proficiency levels. A large portion of employees is expected to reach advanced levels. Our mission has evolved: we aim to be a leading AI-first solutions integrator. To support this, my team is building platforms that enable agentic capabilities across shared functions such as finance, HR, IT, warehouse operations, and marketing.


Agentic HR: from static roles to growth roles with AI copilots

When people cannot see progress, they stop stretching. In many firms the only formal feedback loop is the annual review. That is too slow for real learning and it misses the small wins that power engagement. The alternative is to treat every role as a platform for growth. You design work so that capability increases by doing the work itself. This is where agentic HR comes in. ... Copilots should live where work already happens. That means chat, documents, code, tickets, and task boards. The system watches patterns, respects privacy settings, and offers context-aware prompts. ... People-facing AI must earn trust. That starts with shared governance. HR and technology leaders should set rules for data minimisation, explainability, and bias monitoring. They should also be clear on when AI recommends and when a human decides. Two reference points help. The EU AI Act introduces a risk-based approach with specific duties for higher-risk use cases and transparency expectations for generative systems. This shapes how enterprises should document and oversee AI that touches employees. The NIST AI Risk Management Framework provides practical guidance on mapping risks, measuring impacts, and governing models over time. It is vendor neutral and it emphasises continuous monitoring rather than one-time checks. Enterprises can also look to the new ISO and IEC standard for AI management systems.


The Three Keys to AI in Banking: Compliance, Explainability and Control

When a new technology like AI enters an industry, the goals are simple: Save money, save time, and ideally, increase revenue. According to a 2023 report from McKinsey, AI has the potential to reduce operating costs in banking by 20-30% by automating manual processes, cutting down on errors and saving time. ... Finance is one of the most heavily regulated industries, and rightfully so. When you’re managing transactions and people’s hard-earned money, there is little room for error. As banks adopt AI, they need full disclosure for what is happening every step of the way. ... To close that gap, financial institutions need to prioritize not only technical accuracy but also interpretability. Investing in training, cross-functional collaboration and governance frameworks that support explainable AI will be key to long-term success. The banks that succeed will be the ones that use AI systems their regulators can audit, their teams can trust, and their customers can understand. ... Trust is the currency of this industry, which is why adoption looks different here than it does in consumer tech. Rather than rushing into full-scale adoption, many banks are starting with pilot programs that have tightly scoped risk exposure. ... Done right, AI can help institutions expand credit more inclusively, flag risks earlier and give underwriters clearer insights without sacrificing compliance.

Daily Tech Digest - November 26, 2025


Quote for the day:

“There is only one thing that makes a dream impossible to achieve: the fear of failure.” -- Paulo Coelho



7 signs your cybersecurity framework needs rebuilding

The biggest mistake, Pearlson says, is failing to recognize that the current plan is out of date or simply not working. Breaches happen, but that doesn’t always mean your cyber framework needs rebuilding. It does, however, indicate that the framework needs to be rethought and redesigned. ... “If your framework hasn’t kept pace with evolving threats or business needs, it’s time for a rebuild.” Cyber threats are always evolving, so staying proactive with regular reviews and fostering a culture of cybersecurity awareness will help catch issues before they become crises, Bucher says. ... “The cybersecurity landscape has evolved rapidly, especially with the rise of generative AI — your framework should reflect these shifts.” McLeod recommends conducting a biannual framework review combined with a cursory review during the gap years. “This helps to ensure that the framework stays aligned with evolving threats, business changes, and regulatory requirements.” Ideally, security leaders should always have their security framework in mind while maintaining a rough, running list of areas that could be improved, streamlined, or clarified, McLeod suggests. ... If an organization is stuck in a cycle of continually chasing alerts and incidents, as well as reporting events after the fact instead of performing predictive threat assessments, data analysis, and forward planning, it’s time for a change, Baiati advises.


Your Million-Dollar IIoT Strategy is Being Sabotaged by Hundred-Dollar Radios

The ambition is clear: to create hyper-efficient, data-driven operations in a market expected to exceed $1.6 billion by 2030. Yet, a fundamental paradox lies at the heart of this transformation. While we architect complex digital twins and deploy sophisticated AI models, the foundational tools entrusted to our most valuable asset—the frontline workforce—are often decades old, disconnected, and failing at an alarming rate. ... Data shows that one in four organizations loses more than an entire day of productivity every month simply dealing with broken technology. The primary culprits are as predictable as they are preventable: nearly half of workers cite battery problems (48.4%) and physical damage (46.8%) as the most common causes of failure. ... While conversations about this crisis often focus on pay and career paths, Relay’s research reveals a more immediate, tangible cause: the daily frustration of using broken tools. 1 in 4 frontline workers already feel their equipment is second-class compared to what their corporate counterparts use, and a staggering 43% of workers say they’d be less likely to quit if guaranteed access to modern, automatically upgraded devices. ... Beyond reliability, it’s important to address the data black hole created by legacy, disconnected tools. Every day, frontline teams generate thousands of hours of spoken communication—a rich stream of unstructured data filled with maintenance alerts, safety concerns, and process bottlenecks.


Ask the Experts: Validate, don't just migrate

"Refactoring code is certainly a big undertaking. And if you start before you have good hygiene and governance, then you're just setting yourself up for failure. Similarly, if you haven't tagged properly, you have no way to attribute it to the project, and that becomes a cost problem." ... "If you do conclude [that migration is necessary], then you really must make sure the application is architected right. A lot of times, these workloads weren't designed for the cloud world, so you must adapt them and deliberately architect them for a cloud workload. "[To prepare a mission-critical application], it's key to look at the appropriateness, operating system [and] licenses. Sometimes, there are licenses tied to CPUs or other things that might introduce issues for you as well, so regression, latency and performance testing will be mandatory. ... "[IT leaders must also understand] the risks and costs associated with taking things into the cloud, and the pros and cons of that versus leaving it alone. Because old stuff, whether it was [procured] yesterday or five years ago, is inherently going to be vulnerable from a cybersecurity standpoint. Risk No. 2 is interoperability and compatibility, because old stuff doesn't talk to new stuff. And the third one is supportability, because it's hard to find old people to support old systems. ... "Sometimes, people have the false sense that if it's in cloud, then I'm all set. Everything is available, and everything is highly redundant. And it is, if you design [the application] with those things in mind."


Heineken CISO champions a new risk mindset to unlock innovation

I started as an auditor and later led a cyber defense team. It’s easy to fall into the black-and-white trap of being the function that always says “no” or speaks in cryptic tech jargon. It’s a scary world out there with so many attacks happening in every industry. The classical reaction of most security professionals is to tighten defences and impose even more rules. ... CISOs need to shift the mindset from pure compliance to asking: How does our cyber strategy support the business and its values? What calculated risks do we want the business to take? Where do we need their attention and help to embed security into the DNA of our people and our company? ... Be visible and approachable. Share the lessons that shaped you as a leader, what worked, what didn’t, and the principles that guide your decisions. I’m passionate about building diverse teams where everyone gets the same opportunities, no matter age, gender, or background. Diversity makes us stronger, and when there’s trust and openness, it sparks mentoring, coaching, and knowledge sharing. Make coaching and mentoring non-negotiable, and carve out time for it. It’s easy to push aside when you’re busy putting out security fires, but neglecting people’s growth and well-being is a big miss. Be authentic and vulnerable, walk the talk. Share the real stories, including failures and what made you stronger. Too often, people focus only on titles, certifications, and tech skills.


Data-Driven Enterprise: How Companies Turn Data into Strategic Advantage

A data-driven enterprise is not defined by the number of dashboards or analytics tools it owns. It’s defined by its ability to turn raw information into intelligent action. True data-driven organizations embed data thinking into every level of decision-making, from boardroom strategy to day-to-day operations. ... A modern data architecture is not a single platform, but an interconnected ecosystem designed to balance agility, governance, and scalability. ... As organizations mature in their data journey, they are moving away from rigid, centralized models that rely on a single source of truth. While centralization once ensured control, it often created bottlenecks, slowing down innovation and limiting agility. ... We are entering an era of data agents: self-learning systems capable of autonomously detecting anomalies, assessing risks, and forecasting trends in real time. These intelligent agents will soon become the invisible workforce of the enterprise, operating across domains: predicting supply chain disruptions, optimizing IT performance, personalizing customer journeys, and ensuring compliance through continuous monitoring. Their actions will reshape not only operations but also how organizations think about governance, accountability, and human oversight. For architects, this shift represents both a challenge and an extraordinary opportunity. The role is evolving from that of a data custodian focused on structure and governance to an ecosystem designer who engineers environments where data and AI can coexist, learn, and continuously create value.


10 benefits of an optimized third-party IT services portfolio

By entrusting day-to-day IT operations to trusted providers, organizations can reallocate internal resources toward higher-value initiatives such as digital transformation, automation, and product innovation. This accelerates adoption of emerging technologies, and allows internal teams to deepen business expertise, strengthen cross-functional collaboration, and focus on driving growth where it matters most. ... A well-structured third-party IT services portfolio can provide flexibility to scale up or down based on business needs. This is particularly valuable for CEOs who need to adapt to changing market conditions and seize growth opportunities. Securing talent in the market today is challenging and time-consuming, so tapping into the talent pools of strategic IT services partners lets organizations draw on that bench strength to fill immediate needs. ... IT service providers continuously invest in advanced tech and talent development, enabling clients to benefit from cutting-edge innovations without bearing the full cost of adoption. As AI, automation, and cybersecurity evolve, providers offer the subject matter expertise and tools organizations need to stay ahead of disruption. ... With operational stability ensured through a balance of internal talent and trusted third parties, CIOs can dedicate more focus to long-term strategic initiatives that fuel growth and innovation.


Modernizing SOCs with Agentic AI and Human-in-the-Loop: A Guide for CISOs

Traditional SOCs were not built for today’s speed and scale. Alert fatigue, manual investigations, disconnected tools, and talent shortages all contribute to the operational drag. Many security leaders are stuck in a reactive loop with no clear path to improvement. ... Legacy SOCs rely heavily on outdated technologies and rule-based detection, generating high volumes of alerts, many of which are false positives, leading to analyst burnout. Analysts are compelled to manually inspect and triage a deluge of meaningless signals, making the entire effort unsustainable. ... Before transformation can happen, one needs to understand where one stands. This can be accomplished with key benchmarking metrics for SOC performance, such as MTTD (mean time to detect), MTTR (mean time to respond), case closure rates, and tool effectiveness. ... Agentic AI represents the next evolution of AI-powered cybersecurity, which is modular, explainable, and autonomous. Through a coordinated system of AI agents, the Agentic SOC continuously responds and adapts to the evolving security environment in real time. It is designed to accelerate threat detection, investigation, and response by 10x, bringing speed, precision, and clarity to every function of SecOps. Agentic AI is the technology shift that changes the game. Unlike traditional automation, Agentic AI is decision-oriented, self-improving, and always operating with human-in-the-loop for oversight.
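Those benchmarks are straightforward to compute once incident timestamps are captured consistently. A minimal Python sketch (the record fields and sample values are illustrative, not from the article):

```python
from datetime import datetime
from statistics import mean

# Illustrative incident records: when each incident occurred, was
# detected, and was resolved. Field names are assumptions for the sketch.
incidents = [
    {"occurred": datetime(2025, 11, 1, 9, 0),
     "detected": datetime(2025, 11, 1, 10, 30),
     "resolved": datetime(2025, 11, 1, 14, 0),
     "closed": True},
    {"occurred": datetime(2025, 11, 2, 22, 0),
     "detected": datetime(2025, 11, 3, 1, 0),
     "resolved": datetime(2025, 11, 3, 9, 0),
     "closed": False},
]

def mttd_hours(records):
    """Mean time to detect: occurrence -> detection, in hours."""
    return mean((r["detected"] - r["occurred"]).total_seconds() / 3600
                for r in records)

def mttr_hours(records):
    """Mean time to respond: detection -> resolution, in hours."""
    return mean((r["resolved"] - r["detected"]).total_seconds() / 3600
                for r in records)

def case_closure_rate(records):
    """Fraction of incidents closed out."""
    return sum(r["closed"] for r in records) / len(records)

print(f"MTTD: {mttd_hours(incidents):.2f} h")          # 2.25 h for this sample
print(f"MTTR: {mttr_hours(incidents):.2f} h")          # 5.75 h
print(f"Closure rate: {case_closure_rate(incidents):.0%}")  # 50%
```

Tracking these numbers before and after any SOC change gives the "where one stands" baseline the article calls for.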


3 SOC Challenges You Need to Solve Before 2026

2026 will mark a pivotal shift in cybersecurity. Threat actors are moving from experimenting with AI to making it their primary weapon, using it to scale attacks, automate reconnaissance, and craft hyper-realistic social engineering campaigns. ... Attackers have mastered evasion. ClickFix campaigns trick employees into pasting malicious PowerShell commands themselves. LOLBins are abused to hide malicious behavior. Multi-stage phishing hides behind QR codes, CAPTCHAs, rewritten URLs, and fake installers. Traditional sandboxes stall because they can't click "Next," solve challenges, or follow human-dependent flows. The result? Low detection rates for the exact threats exploding in 2025 and beyond. ... Thousands of daily alerts, mostly false positives. An average SOC handles 11,000 alerts daily, with only 19% worth investigating, according to the 2024 SANS SOC Survey. Tier 1 analysts drown in noise, escalating everything because they lack context. Every alert becomes a research project. Every investigation starts from zero. Burnout hits hard. Turnover doubles, morale tanks, and real threats hide in the backlog. By 2026, AI-orchestrated attacks will flood systems even faster, turning alert fatigue into a full-blown crisis. ... From a financial leadership perspective, security spending often feels like a black hole: money is spent, but risk reduction is hard to quantify. SOCs are challenged to justify investments, especially when security teams seem to be a cost center without clear profit or business-driving impact.
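The cited survey figures translate into a stark daily workload; a quick back-of-the-envelope calculation:

```python
# Figures as cited from the 2024 SANS SOC Survey in the article above.
daily_alerts = 11_000
actionable_share = 0.19   # share of alerts worth investigating

actionable = daily_alerts * actionable_share
noise = daily_alerts - actionable

print(f"Actionable alerts/day: {actionable:.0f}")   # 2090
print(f"Noise alerts/day:      {noise:.0f}")        # 8910
```

Roughly 8,900 alerts a day are pure triage overhead, which is the noise Tier 1 analysts are drowning in.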


Digital surveillance tools are reshaping workplace privacy, GAO warns

Privacy concerns intensify when surveillance data feeds into automated systems that evaluate performance, set productivity metrics, or flag workers for potential discipline. GAO found that employers often rely on flawed benchmarks and incomplete measurements. Tools rarely capture the full range of work performed, such as research, mentoring, reading, or off-screen tasks, and frequently misinterpret normal behavior as inefficiency. When employers trust these tools “at face value,” the report notes, workers can be unfairly labeled unproductive or noncompliant despite doing their jobs well. ... Meanwhile, past federal efforts to issue guidance on reducing surveillance-related harms, covering transparency practices, human oversight, and safeguards against discriminatory impacts, have been rescinded or paused since January by the Trump administration as agencies reassess their policy priorities. GAO also notes that existing federal privacy protections are narrow. The Electronic Communications Privacy Act restricts covert interception of communications, but it does not cover most forms of digital monitoring, such as keystroke logging, location tracking, biometric data collection, or algorithmic productivity scoring. ... The report concludes that while digital surveillance can improve safety, efficiency, and health monitoring, its benefits depend wholly on how employers use it.


How to avoid becoming an “AI-first” company with zero real AI usage

A competitor declared they’re going AI-first. Another publishes a case study about replacing support with LLMs. And a third shares a graph showing productivity gains. Within days, boardrooms everywhere start echoing the same message: “We should be doing this. Everyone else already is, and we can’t fall behind.” So the work begins. Then come the task forces, the town halls, the strategy docs and the targets. Teams are asked to contribute initiatives. But if you’ve been through this before, you know there’s often a difference between what companies announce and what they actually do. Because press releases don’t mention the pilots that stall, or the teams that quietly revert to the old way, or even the tools that get used once and abandoned. ... By then, your company’s AI-first mandate will have set into motion departmental initiatives, vendor contracts and maybe even some new hires with “AI” in their titles. The dashboards will be green, and the board deck will have a whole slide on AI. But in the quiet spaces where your actual work happens, what will have meaningfully changed? Maybe you'll be like the teams that never stopped their quiet experiments. ... That’s the invisible architecture of genuine progress: patient, and completely uninterested in performance. It doesn't make for great LinkedIn posts, and it resists grand narratives. But it transforms companies in ways that truly last. Every organization is standing at the same crossroads right now: Look like you’re innovating, or create a culture that fosters real innovation.

Daily Tech Digest - November 25, 2025


Quote for the day:

“Being kind to those who hate you isn’t weakness, it’s a different level of strength.” -- Dr. Jimson S


Invisible battles: How cybersecurity work erodes mental health in silence and what we can do about it

You’re not just solving puzzles. You’re responsible for keeping a digital fortress from collapsing under relentless siege. That kind of pressure reshapes your brain, and not in a good way. ... One missed patch. One misconfigured access role. One phishing click. That’s all it takes to trigger a million-dollar disaster or worse: erode trust. You carry that weight. When something goes wrong, the guilt cuts deep. ... The business sees you as the blocker. The board sees you after the breach. And if you’re the lone cyber lead in an SME? You’re on an island, with no lifeboat. No peer to talk to, no outlet to decompress. Just mounting expectations and a growing feeling that nobody really gets what you do. ... The hero narrative still reigns; if you’re not burning out, you’re not trying hard enough. Speak up about being overwhelmed? You risk looking weak. Or worse, replaceable. So you hide it. You overcompensate. And eventually, you break, quietly. ... They expect you to know it all, yesterday. Certifications become survival badges. And with the wrong culture, they become the only form of recognition you get. Systemic chaos builds personal crisis. The toll isn’t abstract. It’s physical, emotional and measurable. ... Cybersecurity professionals are fighting two battles. One is against adversaries. The other is against a system that expects perfection, rewards self-sacrifice and punishes vulnerability.


How to Build Engineering Teams That Drive Outcomes, not Outputs

Aligning teams around clear outcomes reframes what success looks like. They go from saying “this is what we shipped” to “this is what changed” as their role evolves from delivering features to meaningful solutions. ... One way is by changing how teams refer to themselves. This might sound simplistic, but a simple shift in team name acts as a constant reminder that their impact is tethered to customer and business outcomes. ... Leaders should treat outcome-based teams as dynamic investments. Rigid predictions are the enemy of innovation. Instead, teams should regularly reevaluate goals, empower adaptation, and allow KPIs to evolve organically from real-world learnings. The desired outcomes don’t necessarily change, but how they are achieved can be fluid. This is how team priorities are defined, new business challenges are solved and evolving customer expectations are met. ... Breaking down engineering silos means reappraising what ownership looks like. If your team’s focus has evolved from “bug fixing” to “continually excellent user experience,” then success is no longer the domain of engineers alone. It’s a collective effort across product, design, and tech — working together as one team. ... Outcome-based teams are more than a structural change — they represent a mindset shift. By challenging teams to focus on delivering impact, to stay aligned with evolving needs, and to collaborate more effectively, organizations can build durable, customer-centric teams that can grow, adapt, and never sit still.


Guardrails and governance: A CIO’s blueprint for responsible generative and agentic AI

Many in the industry are confusing the function of guardrails and thinking they’re a flimsy substitute for true oversight. This is a critical misconception that must be addressed. Guardrails and governance are not interchangeable; they are two essential parts of a single system of control. ... AI governance is the blueprint and the organization. It’s the framework of policies, roles, committees and processes that define what is acceptable, who is accountable and how you will monitor and audit all AI systems across the enterprise. Governance is the strategy and the chain of command. AI guardrails are the physical controls and the rules in the code. These are the technical mechanisms embedded directly into the AI system’s architecture, APIs and interfaces to enforce the governance policies in real-time. Guardrails are the enforcement layer. ... While we must distinguish between governance and guardrails, the reality of agentic AI has revealed a critical flaw: current soft guardrails are failing catastrophically. These controls are often probabilistic, pattern-based or rely on LLM self-evaluation, which is easily bypassed by an agent’s core capabilities: autonomy and composability. ... Generative AI creates; agentic AI acts. When an autonomous AI agent is making decisions, executing transactions or interacting with customers, the stakes escalate dramatically. Regulators, auditors and even internal stakeholders will demand to know why an agent took a particular action.
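The split the author draws (governance as policy and accountability, guardrails as in-code enforcement) can be pictured as a deterministic check that runs before an agent's action executes, in contrast to the "soft," probabilistic controls the piece criticizes. A minimal sketch; the tool names, allowlist, and spending cap below are all hypothetical:

```python
# A "hard" guardrail: a hard-coded allowlist and transaction cap enforced
# in code, not an LLM self-evaluation that an agent might talk its way past.
ALLOWED_TOOLS = {"search_kb", "draft_email", "create_ticket"}
MAX_TRANSACTION_USD = 500   # above this, governance policy requires a human

class GuardrailViolation(Exception):
    pass

def guarded_tool_call(tool: str, args: dict) -> str:
    """Enforce governance policy before the agent's action is dispatched."""
    if tool not in ALLOWED_TOOLS:
        raise GuardrailViolation(f"tool '{tool}' is not permitted")
    if tool == "create_ticket" and args.get("amount_usd", 0) > MAX_TRANSACTION_USD:
        raise GuardrailViolation("transaction exceeds cap; escalate to a human")
    return f"executed {tool}"   # stand-in for the real tool dispatch

print(guarded_tool_call("search_kb", {}))   # executed search_kb
try:
    guarded_tool_call("wire_funds", {"amount_usd": 9_000})
except GuardrailViolation as e:
    print(f"blocked: {e}")                  # blocked: tool 'wire_funds' is not permitted
```

The governance framework decides what belongs in `ALLOWED_TOOLS` and who can change the cap; the code above is merely the enforcement layer.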


Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. ... Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you. This might include: Analyzing your face through a video selfie or photo; Examining your voice; Looking at your online behavior—what you watch, what you like, what you post; Checking your existing profile data. Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, an algorithm analyzes your face, and spits out an estimated age range. Sounds convenient, right? ... Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.


Aircraft cabin IoT leaves vendor and passenger data exposed

The cabin network works by having devices send updates to a central system, and other devices are allowed to receive only certain updates. In this system an authorized subscriber is any approved participant on the cabin network, usually a device or a software component that is allowed to receive a certain type of data. The privacy issue begins after the data arrives. Information is protected while it travels, but once it reaches a device that is allowed to read it, that device can view the entire message, including details it does not need for its task. The system controls who receives a message, but it does not control how much those devices can learn from it. The study finds that this creates the biggest risk inside the cabin. Trusted devices have valid credentials and follow all the rules, and they can examine messages closely enough to infer raw sensor readings that were never meant to be exposed. This internal risk matters because it influences how different suppliers share data and trust each other. Someone in the cabin might also try to capture wireless traffic, but the protections on the wireless link prevent them from reading the data as it travels.  ... The researchers found that these raw motion readings can carry extra clues such as small shifts linked to breathing, slight tremors or hints about a person’s body shape. Details like these show why movement data needs protection before it is shared across the cabin network.
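The access pattern the researchers describe, topic-level authorization combined with full-payload visibility, can be sketched in a few lines. The topic, subscriber names, and message fields below are invented for illustration:

```python
# Hypothetical cabin pub/sub: subscriptions are checked per topic, but an
# authorized subscriber receives the entire message payload.
subscriptions = {
    "seat_status": {"crew_panel", "maintenance_logger"},
}

def deliver(topic: str, payload: dict, subscriber: str) -> dict:
    if subscriber not in subscriptions.get(topic, set()):
        raise PermissionError(f"{subscriber} is not authorized for {topic}")
    return payload   # nothing limits which fields the subscriber can read

msg = {
    "seat": "14C",
    "occupied": True,
    # Raw sensor detail the logger does not need for its task:
    "accel_raw": [0.012, 0.009, 0.014],
}

# The maintenance logger only needs occupancy, but it can read everything,
# including raw motion data that may reveal breathing or tremor patterns.
received = deliver("seat_status", msg, "maintenance_logger")
print(sorted(received.keys()))   # ['accel_raw', 'occupied', 'seat']
```

A field-level filter in `deliver` (returning only the keys each subscriber needs) is one shape the protection the researchers call for could take.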


Build Resilient cloudops That Shrug Off 99.95% Outages

If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration. We encode risk controls in Terraform so they’re enforced before a resource even exists. Tagging, encryption, backup retention, network egress—these are all policy. We don’t rely on code reviews to catch missing encryption on a bucket; the pipeline fails the plan. That’s how cloudops scales across teams without nag threads. ... Observability isn’t a pile of graphs; it’s a way to answer questions. We want traceability from request to database and back, structured logs that actually structure, and metrics that reflect user experience. ... Most teams benefit from a small set of “stop asking, here it is” dashboards: request volume and latency by endpoint, error rate by version, resource saturation by service, and database health with connection pools and slow query counts. We also wire deploy markers into traces and logs, so “What changed?” doesn’t require Slack archaeology. ... We don’t win medals for shipping fast; we win trust for shipping safely. Progressive delivery lets us test the actual change, in production, on a small slice before we blast everyone. We like canaries and feature flags together: canary catches systemic issues; flags let us disable risky code paths within a version. Every deployment should come with a baked-in rollback that doesn’t require a council meeting. ... Reliability with no cost controls is just a nicer way to miss your margin. We give cost the same respect as latency: we define a monthly budget per product and a change budget per release.
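One common way to make a control "fail the plan" is a policy check over the JSON produced by `terraform show -json` in CI. The sketch below is a simplified illustration, not the authors' pipeline: the resource type and the `server_side_encryption` attribute path are assumptions, and real provider schemas differ:

```python
import json

# Minimal policy-as-code check over Terraform plan JSON: flag any S3 bucket
# being created or changed without an encryption setting.
def unencrypted_buckets(plan: dict) -> list:
    violations = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_s3_bucket":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        if not after.get("server_side_encryption"):
            violations.append(rc.get("address", "<unknown>"))
    return violations

# Hand-written stand-in for `terraform show -json plan.out` output.
plan = json.loads("""
{"resource_changes": [
  {"address": "aws_s3_bucket.logs", "type": "aws_s3_bucket",
   "change": {"after": {"server_side_encryption": null}}},
  {"address": "aws_s3_bucket.audit", "type": "aws_s3_bucket",
   "change": {"after": {"server_side_encryption": "AES256"}}}
]}
""")

bad = unencrypted_buckets(plan)
if bad:
    print(f"plan failed: unencrypted buckets: {bad}")   # flags aws_s3_bucket.logs
```

Exiting nonzero on any violation is what turns the wiki rule into an enforced guardrail: the pipeline rejects the plan before the resource exists.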


Anatomy of an AI agent knowledge base

“An internal knowledge base is essential for coordinating multiple AI agents,” says James Urquhart, field CTO and technology evangelist at Kamiwaza AI, maker of a distributed AI orchestration platform. “When agents specialize in different roles, they must share context, memory, and observations to act effectively as a collective.” Designed well, a knowledge base ensures agents have access to up-to-date and comprehensive organizational knowledge. Ultimately, this improves the consistency, accuracy, responsiveness, and governance of agentic responses and actions. ... Most knowledge bases include procedures and policies for agents to follow, such as style guides, coding conventions, and compliance rules. They might also document escalation paths, defining how to respond to user inquiries. ... Lastly, persistent memory helps agents retain context across sessions. Access to past prompts, customer interactions, or support tickets helps continuity and improves decision-making, because it enables agents to recognize patterns. But importantly, most experts agree you should make explicit connections between data, instead of just storing raw data chunks. ... At the core of an agentic knowledge base are two main components: an object store and a vector database for embeddings. Whereas a vector database is essential for semantic search, an object store checks multiple boxes for AI workloads: massive scalability without performance bottlenecks, rich metadata for each object, and immutability for auditability and compliance.
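The two-part design described above, semantic retrieval over embeddings plus an object store holding full content and metadata, can be sketched without any external services. The three-dimensional "embeddings" here are hand-made stand-ins for real embedding-model output, and all names are illustrative:

```python
import math

# Toy knowledge base: an "object store" with full documents and metadata,
# and a "vector index" mapping document ids to embeddings.
object_store = {
    "doc1": {"text": "Style guide: prefer snake_case in Python.",
             "meta": {"source": "eng-handbook", "version": 3}},
    "doc2": {"text": "Escalation path: page on-call after two failed retries.",
             "meta": {"source": "runbook", "version": 7}},
}
vector_index = {"doc1": [0.9, 0.1, 0.0], "doc2": [0.1, 0.9, 0.2]}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def semantic_search(query_vec, k=1):
    """Rank by similarity in the vector index, then fetch full objects."""
    ranked = sorted(vector_index,
                    key=lambda d: cosine(query_vec, vector_index[d]),
                    reverse=True)
    # Retrieval returns ids; the object store supplies the full text plus
    # the metadata that supports auditability and compliance.
    return [object_store[d] for d in ranked[:k]]

hit = semantic_search([0.2, 0.95, 0.1])[0]
print(hit["meta"]["source"])   # runbook
```

The split matters for the reasons the article gives: the vector side answers "what is relevant," while the object side carries the scalable storage, rich metadata, and immutability.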


Trust, Governance, and AI Decision Making

Issues like bias, privacy, and explainability aren’t just technical problems requiring technical solutions. They have to be understood by everyone in the business. That said, the ideal governance structure depends on each company’s business model. ... The word ethics can feel very far from a developer’s everyday world. It can feel like a philosophical thing, whereas they need to write code and build solutions. Also, many of these issues weren’t part of their academic training, so we have to help them understand. ... Kahneman’s idea is that humans use two different cognitive modes when we make decisions. For everyday decisions and small, familiar problems—like riding a bicycle—we use what he called System One, or Thinking Fast, which is automatic and almost unconscious. In System Two, or Thinking Slow, we have this other way of making decisions that requires a lot of time and attention, either because we are confronted with a problem that’s not familiar to us or because we don’t want to make a mistake. ... We compare Thinking Fast to the data-driven machine learning approach—just give me a lot of data, and I will give you the solution without showing you how I got there or even being able to explain it. Thinking Slow, on the other hand, corresponds to a more traditional, rule-based approach to solving problems. ... It’s similar to what we see with agentic AI systems—the focus is not on any one solver, agent, or tool but rather in the governance of the whole system. 


The Global Race for Digital Trust: Where Does India Stand?

In the modern hyperconnected world, trust has replaced convenience as the true currency of digital engagement. Every transaction, whether on a banking app or an e-governance portal, is based on an unspoken belief: systems are secure and intentions are transparent. Nevertheless, this belief remains under constant pressure. ... India’s digital trust framework was further reinforced with the inauguration of the National Centre for Digital Trust (NCDT) in July 2025. Established by the Ministry of Electronics and Information Technology (MeitY), this Centre serves as the national hub for digital assurance. It unites key elements, including public key infrastructure, authentication and post-quantum cryptography, under a unified mission. This, in turn, signals the country’s commitment to treating trust as a public good. ... For firms and government agencies alike, compliance signals maturity. It reassures citizens that the systems they rely on, from hospital monitoring networks to smart city command centres, are governed by clear, ethical and verifiable standards. It also assures global partners that India’s digital infrastructure can operate efficiently across jurisdictions. In the long run, this “compliance premium” could well define which countries earn the confidence to lead the global digital economy. ... The world will measure digital strength not by how fast technology advances, but by how deeply trust is embedded within it.


The privacy paradox is turning into a data centre weak point

While consumers’ failure to adopt basic cyber hygiene might seem like a personal problem, it has wide-reaching implications for infrastructure providers. As cloud services, hosted applications and mobile endpoints interact with backend systems, poor user behaviour becomes an attack vector. Insecure credentials, password reuse and unsecured mobile devices all provide potential entry points, especially in hybrid or multi-tenant environments. ... Putting data centres on an equal footing with water, energy and emergency services systems means the data centre sector can now expect greater Government support in anticipating and recording critical incidents. This designation reflects their strategic importance but also brings greater regulatory scrutiny. It also comes against the backdrop of the UK Government’s Cyber Security Breaches Survey in 2024, which reported that 50% of businesses experienced some form of cyber breach in the past 12 months, with phishing accounting for 84% of incidents. This underscores how easily compromised direct or indirect endpoints can threaten core infrastructure. ... The privacy paradox may begin at the consumer level, but its consequences are absorbed by the entire digital ecosystem. Recognising this is the first step. Acting on it through better design, stronger defaults, and user-focused education allows data centre operators to safeguard not just their infrastructure, but the trust that underpins it.

Daily Tech Digest - November 24, 2025


Quote for the day:

"Give whatever you are doing and whoever you are with the gift of your attention." -- Jim Rohn



The incredible shrinking shelf life of IT skills

IT workers have seen the half-life of IT skills compressed even more dramatically, with researchers saying some skills today go from hot to not in less than two years — sometimes mere months. It’s putting a lot of pressure on IT teams. As Anand says, “Technology is developing faster than tech workers can upskill.” Ever-quickening churn in the IT skills market is upending more than individuals’ career plans, too. It is impacting the entire IT function and the organization as a whole. That in turn is forcing CIOs, HR leaders, and other executives to devise strategies to create an environment where workers are capable of reinvention at a rapid clip. ... CIOs and IT advisers also say the shortening shelf life of skills is not experienced universally, as some organizations still have a lot of legacy tech in place. Data from the 2025 Tech Salary Report from Dice, a job-searching platform for tech professionals, hints at these dual realities. ... “Certain skills will come up very quickly and then go away very quickly, so now that person has to be seen as someone who can build up skills quickly,” he adds. Info-Tech Research Group’s Leier-Murray says CIOs must free up time for their staffers to upskill and provide more coaching to their team members to ensure they keep pace with the work demands of a modern IT shop. She and others advise CIOs to hire workers with or cultivate in existing staffers a growth mindset. ... “The way that everybody is working is continuously being redefined,” Jones says.


Are Organizations Overinvesting in an AI Bubble? - Part 1

Demand for generative AI reasoning is driving investment, said Arun Chandrasekaran, distinguished vice president analyst at Gartner. "These partnerships signal the model providers' insatiable need for compute to satisfy the enormous growth and usage, mainly in the consumer AI space." When asked to confirm an AI bubble, Chandrasekaran said, "It is hard to predict if there is a bubble and when it will burst. But we'll likely see a correction and shake-out among players that can't deliver value to users and build profitable growth strategies." Continued investment of large sums at high valuations for AI companies "is unsustainable," Umesh Padval, investor, entrepreneur and former managing director of Thomvest Ventures, told Information Security Media Group. ... "Enterprises are excited about gen AI's speed of delivery. However, the punitively high cost of maintaining, fixing or replacing AI-generated artifacts such as code, content and design can erode gen AI's promised return on investments," Chandrasekaran said. "By establishing clear standards for reviewing and documenting AI-generated assets and tracking technical debt metrics in IT dashboards, enterprises can take proactive steps to prevent costly disruptions." Chandrasekaran warns about overinvestment without determining the "value path." He said organizations should realize that the expected payoff, including ROI, is much more long term, which can lead to risks.


The CISO’s greatest risk? Department leaders quitting

The trend of talented and dedicated functional security leaders quietly eyeing the exit is not an anomaly — it’s a predictable outcome of systemic issues that have been building within the profession for years, says Brandyn Fisher, V-CISO capability lead at Centric Consulting. “As CISOs, we are seeing our most critical layer of management, our directors and senior managers, burn out,” Fisher says. “This isn’t happening in a vacuum. It’s the result of a dangerous convergence of unrealistic expectations, resource starvation, and a fundamentally broken career model.” Security leaders operate on an unsustainable premise, Fisher says. “We expect our leaders to be right every single time, while an attacker only needs to be right once. This creates a culture of hyper-vigilance that is simply not sustainable 24/7/365.” ... Another issue is tool creep: 40-plus security tools managing overlapping alerts, with poor integrations between them, Malik says. There is also “role overload and context switching” on projects, as well as relentless audit cycles, reviews, and meetings, which Malik says leaves little time for career development. “Many organizations have a CISO plus a flat layer of ‘heads of X’” who don’t always have a clear path to moving into higher levels, she says. And CISOs are constantly asking their leaders to do more with less, Fisher adds. “As cybersecurity is still widely viewed as a cost center rather than a business enabler, budgets are the first to be slashed while the threat landscape grows exponentially,” he says.


Preparing for the Next Wave of AI: Agentic Workflows

Agentic AI blends intelligence and automation into a single operational layer that can manage outcomes rather than just execute steps. Instead of relying on humans to define every possible rule, agentic systems understand goals and context. They can reason through multiple inputs, choose the best path forward, and adapt as conditions change. ... Optimizing for agentic AI isn’t just about adding smarter tools; it begins with re-architecting the environment those tools inhabit. Organizations that thrive will have integrated, high-quality data foundations and unified workflows. Fragmented systems or poor data hygiene can cripple an AI agent’s ability to reason effectively. For many enterprises, this means modernizing their systems of record – CRMs, ERPs, and HR platforms – that make up digital operations. Equally important is the need for well-defined guardrails. Businesses must define what good decisions look like, the limits of an agent’s autonomy, and the ethical or compliance constraints that must be followed. This balance between freedom and control is critical: too many restrictions and the AI can’t act usefully; too few and it risks acting outside the organization’s intentions. ... On the flip side, unclear use cases and business value was the top answer for other respondents. While both groups cited risk and compliance concerns as a top challenge, it’s clear there’s a divide on where employees fit into the agentic AI puzzle.
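The autonomy limits described above can be made concrete in code. Below is a minimal sketch of a guardrail layer for agent tool calls, assuming a simple allow-list plus human approval for high-risk operations; all tool names and the `check_tool_call` helper are illustrative, not from any specific agent framework.

```python
# Illustrative guardrail sketch: an allow-list bounds the agent's autonomy,
# and high-risk tools require explicit human approval before execution.
HIGH_RISK = {"delete_records", "send_payment", "execute_shell"}
ALLOWED = {"search_crm", "summarize_document", "delete_records"}

def check_tool_call(tool: str, approved_by_user: bool = False) -> str:
    """Return 'allow', 'needs_approval', or 'deny' for a proposed tool call."""
    if tool not in ALLOWED:
        return "deny"            # outside the agent's permitted autonomy
    if tool in HIGH_RISK and not approved_by_user:
        return "needs_approval"  # pause and ask a human before acting
    return "allow"
```

In practice such a check would sit between the model's proposed action and the tool executor, so that "too few restrictions" can never silently become an unreviewed high-risk action.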


The privacy tension driving the medical data shift nobody wants to talk about

Current frameworks lock data into silos. These isolated systems make it difficult to combine information across hospitals, labs, and research groups. This limits what can be learned from real-world evidence, which is especially important for improving treatments, studying outcomes, and reducing costs. ... Outdated rules can worsen inequities by limiting access to new tools and restricting research to well-funded institutions. This contradicts the principle of justice, which is meant to promote fairness and access. The authors emphasize that privacy still matters. They write that, “privacy protections exist for many reasons, addressing risks to individual patients as well as the public at large.” But they argue that privacy cannot stand alone as the primary value in a system where data powers both scientific progress and new forms of risk. ... The most significant proposal in the research is a gradual move toward an open data model. In this approach, healthcare data would be treated as a shared resource rather than locked property. Access would come with responsibilities and consequences for misuse instead of blanket restrictions on legitimate use. ... A key argument is that penalties should target bad behavior rather than access. Current rules assume data must be kept behind walls to prevent harm, even though perfect anonymization is no longer possible. The researchers argue that the system should focus on preventing malicious reidentification and unethical use. This approach, they say, is more realistic and gives space for innovation. 


The expanding role of the CISO

New research from HackerOne has revealed that 84 per cent of CISOs are now responsible for AI security, while 82 per cent are charged with protecting data privacy. The result is an already burdened CISO being asked to monitor and secure technologies that are evolving at breakneck speed. New technology is constantly being implemented across businesses, and when complex technologies such as AI are adopted by 78 per cent of organisations – a 23 per cent increase from the previous year – the scale and intensity of the task become clear. This rapid adoption, often driven by different parts of the business eager for a competitive edge, creates entirely new attack surfaces that must remain under constant surveillance to ensure no security risks go unnoticed. For a CISO, this task can seem insurmountable – even the most skilled internal teams will struggle if they lack the specialised knowledge required. Faced with a variety of unique vulnerabilities, CISOs will need the right tools and support in order to keep the business safe. ... Unfortunately, the lack of talent and resources serves as a significant barrier to adopting this full-scale offensive security programme, with 39 per cent of CISOs highlighting the lack of skilled personnel as a major challenge. On a global scale, the cybersecurity industry urgently needs around four million more professionals to bridge the current gap in key roles. However, taking a crowdsourced security approach offers a powerful, scalable solution for businesses to tackle this problem.


A Day in the Life of a Connected Patient: How Real-Time Data Is Powering Smarter Care

Health data arrives in bursts and fragments. It comes from different tools, moves at different speeds, and rarely follows the same format. Making sense of it all takes more than storage. It takes design that expects disorder—and knows how to organize it. Data pipelines help bridge this complexity. They link together systems like EHRs, insurance claims, wearables, and diagnostic tools—so that the information can move securely and consistently. Standards like HL7 and FHIR help make these handoffs work, even across aging platforms. As the data moves, it’s shaped into something usable. Behind the scenes, it’s cleaned, structured, and enriched before reaching analytics teams or clinical systems. The work happens in moments, but its impact is lasting. ... Discharge no longer means disconnection. For patients managing chronic conditions, remote care programs have changed what happens after they leave the hospital. One such initiative pulled continuous data from wearables, implants, and diagnostic devices into a secure cloud system. Care teams could monitor trends, identify risks early, and step in before issues got worse. In patients with chronic conditions, timely support made a measurable difference. Readmissions dropped by almost 40%. Simple check-ins and reminders helped people stay on course—not through pressure, but with steady, well-timed guidance. At scale, the results were even clearer. For every 10,000 patients, the program saved more than USD 1 million a year. 
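The cleaning and structuring step described above can be sketched in a few lines. The example below is a hypothetical normalization function that maps readings from two different sources into one shared, Observation-like shape; the field names, source tags, and schema are assumptions for illustration, not actual HL7 or FHIR structures.

```python
# Illustrative pipeline step: map heterogeneous source records into a
# single common shape so downstream analytics see one consistent format.
def normalize(reading: dict) -> dict:
    """Convert a source-specific record into a shared schema (hypothetical)."""
    if reading.get("src") == "wearable":
        return {"patient": reading["user_id"],
                "code": "heart-rate",
                "value": reading["bpm"],
                "unit": "beats/min"}
    if reading.get("src") == "lab":
        return {"patient": reading["mrn"],
                "code": reading["test"],
                "value": reading["result"],
                "unit": reading["unit"]}
    raise ValueError(f"unknown source: {reading.get('src')!r}")
```

A real pipeline would use proper FHIR resources and handle far messier inputs, but the design point is the same: each source's quirks are absorbed at the boundary, so everything downstream can rely on one schema.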


Micro-Frontends: A Sociotechnical Journey Toward a Modern Frontend Architecture

As organisations demand faster delivery, greater autonomy, and continuous modernisation, our frontend architectures must evolve in step with our teams. The distributed frontend era is here, but it’s not defined by new frameworks or fancy tooling. It’s defined by the way we align people, processes, and architecture around a shared goal: delivering value faster without losing control. ... Micro-frontends are often introduced as a technical pattern - a way to break a large frontend into smaller, independently deployable pieces. But that framing misses the point. Micro-frontends are not a new stack; they are a new way of structuring work. They represent a sociotechnical shift - one that mirrors Conway’s Law, which tells us that system design reflects communication structures. When teams are forced to coordinate through a single release train, decision-making slows. When every change requires syncing across multiple domains, creativity fades. The result is not just technical debt but organisational inertia. Micro-frontends reverse that dynamic. They allow teams to own slices of the product end-to-end - domain, design, delivery - without waiting for centralised approval. ... But micro-frontends are not a silver bullet. For small teams or products with limited complexity, the overhead might outweigh the benefits. The goal is not to adopt a pattern for its own sake but to solve concrete problems: delivery bottlenecks, scaling limits, and the inability to modernise safely.


Software Testing in the AI Era - Evolving Beyond the Pyramid

The past few years have seen a radical departure from the previous approach with the shift to LLM-based tools. Ideally, each approach to automation should not only meet code coverage goals but also, as a practical matter, integrate seamlessly with industrial-scale continuous deployment workflows. The latter wasn’t really the case until AI came along. ... Regardless of the underlying strategy, search algorithms share a key component - the “fitness function,” i.e., the goal criterion used to guide the algorithm toward better solutions. Code coverage, though simplistic, is an often-used metric for gauging how good a software test suite is, and is therefore a commonly used fitness function when generating tests with search-algorithmic approaches. In practical applications of this technique, several open source tools have been developed, with EvoSuite being a popular option that uses a genetic algorithm to generate unit tests for Java code. ... Test generation can be considered a subfield of LLM-based software engineering (LLMSE), with the key components of an LLM-based test generation strategy including inputs such as the code under test, prompt generators, test validation, and prompt refiners that tune and refine the generated tests in a feedback loop. Compared to search-based strategies, this technique is still in its infancy, but it has gained traction because prompt refinement over the model’s output yields human-readable tests that require little post-processing.
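The fitness-function idea can be shown with a toy example. The sketch below scores candidate test inputs by how many branches of a function they cover, then hill-climbs toward better candidates; it is a deliberately simplified stand-in for what tools like EvoSuite do with full genetic algorithms, and all function names here are invented for illustration.

```python
import random

def classify(x: int) -> str:
    """Toy code under test, with three branches to cover."""
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

def fitness(inputs: list[int]) -> int:
    """Fitness = number of distinct branches of classify() exercised."""
    return len({classify(x) for x in inputs})

def search(generations: int = 200, size: int = 3, seed: int = 0) -> list[int]:
    """Mutate a candidate input set, keeping changes that don't hurt coverage."""
    rng = random.Random(seed)
    best = [rng.randint(-10, 10) for _ in range(size)]
    for _ in range(generations):
        candidate = [x + rng.randint(-3, 3) for x in best]  # small mutation
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best
```

Because candidates are only accepted when their fitness is at least as good, coverage never regresses during the search - the same monotonic pressure, in miniature, that a genetic algorithm's selection step applies at scale.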


The rise (and fall?) of shadow AI

“The security surface extends far beyond traditional concerns. For AI systems, the model and data become the primary attack vectors,” said Meerah Rajavel, chief information officer at Palo Alto Networks, on the company’s own blog. “While frontier models from providers like Google and OpenAI carry lower risk due to extensive testing, most AI applications incorporate multiple specialised models.” ... “Organisations must scan models for vulnerabilities, manage permissions appropriately and protect data access. Runtime security becomes critical because prompts function like code and the LLM acts as an operating system. That has to be protected like a software supply chain,” said Rajavel. ... Shadow AI detection and control is a growing marketplace. Other vendors that operate here include Netskope with its Netskope One platform, which includes AI security capabilities to detect shadow AI usage. Not exactly a like-for-like competitor, but still in the same core operational arena, the SaaS management toolset from Zylo is built to help businesses discover and manage all their SaaS applications, including unauthorised AI tools, by centralising data, risk scores and usage. “To address the risk [of shadow AI], CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” said Arun Chandrasekaran at magical analyst house Gartner.