Daily Tech Digest - November 29, 2025

Quote for the day:

"Whenever you see a successful person you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah



The Cost of Doing Nothing: Why Unstructured Data Is Draining IT Budgets

Think of it this way: the fundamental problem contemporary enterprises have with unstructured data isn’t actually the volume they own but the lack of visibility into what exists, where it resides, who owns it, and whether it still holds value. In this context, the only alternative they have is to store everything indefinitely, including redundant, obsolete, or trivial data that serves no business purpose. The key question here, of course, is how to manage data through its lifecycle. Ideally, an effective and strategic data management process should begin by establishing a single, enterprise-wide view of unstructured data to uncover inefficiencies and risks. ... Lifecycle management plays a central role in this: files that have not been accessed for an extended period can be moved to lower-cost storage, while data that has been inactive for many years can be archived or deleted altogether. Many organizations discover that more than 60% of their stored information falls into these categories, illustrating just how much wasted capacity can be reclaimed with a policy-driven approach. ... The approach also benefits from vendor-neutral data management platforms capable of integrating data across diverse storage environments and clouds, eliminating lock-in while maintaining scalability. The outcome is greater cost control, improved compliance posture, and stronger decision-making foundations across the enterprise.
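The policy-driven tiering described above can be sketched as a scan over last-access times. This is a minimal illustration, not from the article: the thresholds, tier names, and function name are assumptions, and a production tool would also consider ownership, duplication, and retention rules.

```python
import time
from pathlib import Path

# Hypothetical thresholds; real values would come from governance policy.
WARM_DAYS = 180      # not accessed in ~6 months -> candidate for lower-cost storage
COLD_DAYS = 3 * 365  # inactive for 3+ years -> candidate for archive/delete review

def classify_by_last_access(root: str) -> dict:
    """Bucket files under `root` into hot/warm/cold tiers by last-access time."""
    now = time.time()
    tiers = {"hot": [], "warm": [], "cold": []}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        age_days = (now - path.stat().st_atime) / 86400
        if age_days >= COLD_DAYS:
            tiers["cold"].append(str(path))
        elif age_days >= WARM_DAYS:
            tiers["warm"].append(str(path))
        else:
            tiers["hot"].append(str(path))
    return tiers
```

A report over these buckets is also a quick way to measure the "more than 60%" figure the article cites for one's own estate, before any data is moved or deleted.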


Agentic AI is supercharging the deepfake crisis: How companies can take action

As agentic AI propels fraud to a whole new level, the best way to keep your company secure is by fighting fire with fire, or in this case, AI with AI. To do so, companies need to implement multi-layered AI defense strategies that make it exponentially harder for bad actors to succeed. Enterprises can’t rely on traditional verification methods that add more layers of friction or collect more personal data, as that would deter customers. Instead, businesses need to rethink digital identity protection not only to reduce fraud and fraud-related losses but also to preserve customer trust and digital engagement. To achieve this, organizations’ defense systems should contextualize individual actions, granularly isolate scopes of impact, and rely on ongoing reassessments of authorization. In other words, a highly secure system doesn’t just check a user’s identity once but continuously evaluates what the user is doing, where they are doing it, and why they are doing it. ... Using layered risk signals throughout the lifecycle of users, not just during onboarding, can provide companies with detailed information on potential risks, especially from internal sources like employees who can be fooled or whose access can be hijacked to compromise a company’s key assets. Companies can continuously check the reputation of users’ email addresses, phone numbers, and IP addresses to see if any of those channels have previously been used for fraudulent activity, identifying fraud rings that are deploying AI agents at scale.


Cyber resilience, AI & energy shape IT strategies for 2026

The historical approach - that of considering cyber resilience as a stand-alone issue, where one vendor can protect an entire company - will be put to bed. Organisations will move away from using point solutions and embrace the wider ecosystem of options as understanding grows that they can't go it alone. An interconnected framework can help prevent a ripple effect when an attack happens - users should be able to identify and halt an attack in progress. The rate and scale of attacks will continue to grow, and having a properly integrated framework is vital to mitigate risk and speed up recovery. ... As AI inference workloads become part of the production workflow, organisations are going to have to ensure their infrastructure supports not just fast access but high availability, security and non-disruptive operations. Not doing this will be costly both from a results perspective and an operational perspective in terms of resource (GPU) utilisation. ... By 2026, organisations will face a new problem: accounts and credentials that belong to people no longer with the company, but which still look and act like insiders. As HR and IT systems become more automated, old identities are easily missed. Accounts from former employees, departed contractors, and dormant service bots will linger in cloud environments and company software. Attackers will exploit these 'digital ghosts' because they appear legitimate, bypass automated offboarding, and blend in with normal system activity.


6 coding myths that refuse to die

A typical day as a developer can feel like you’re juggling an array (no pun intended) of tasks. You’re reading vague requirements, asking questions, reviewing designs, planning architecture, investigating bugs, reading someone else's code, writing documentation, attending standups, and occasionally, you actually get to write code. Why? Because software development is about problem-solving, not just code-producing. Real-world problems are messy. Users don’t always know what they want. Clients change their minds. Systems behave in mysterious ways. Before you even think about writing code, you often need to untangle the people-side and the process-side. ... The truth is that coding rewards persistence, curiosity, and willingness to improve far more than raw talent. Most developers I’ve worked with weren’t prodigies. They were people who kept showing up, kept asking questions, and kept refining their skills. ... Every working developer, no matter how experienced, looks up syntax constantly. We search the docs, we skim examples, we peek at old code, we search for things we’ve forgotten. Nobody expects you to memorize every keyword, operator, or built-in function. What matters in programming is the ability to break down a problem, think through the logic, and design a solution. Syntax is simply the tool you use to express that solution. It’s the grammar, not the message. So don't let this programming myth waste your time.


Enterprises are neglecting backup plans, and experts warn it could come back to haunt them

Crucially, only 45% consistently follow the ‘3-2-1’ backup rule - three copies of data, stored on two different media types, with one copy kept off-site. The same number are failing to keep tamper-proof copies by using immutability across all their organizational backup data to ensure resilience against cyber attacks. ... "Most organizations now recognize the need to identify phishing scams or social engineering tactics; however, we can’t lose sight of what to do when disaster does strike. While complete prevention is near impossible, assurance of rapid recovery is fully within organizational control," he said. "Our research shows that UK organizations still aren’t taking adequate precautions when it comes to data backups. By storing data on immutable platforms, they can ensure business-critical information remains beyond the reach of adversaries and that operations stay up and running, even when systems are compromised." ... Backup strategies are now front of mind for many IT professionals, alternative research shows. A survey from Kaseya earlier this year found 30% are losing sleep over lackluster backup and recovery strategies, with some pushing for a stronger focus on this area. Complacency was also identified as a recurring problem for many enterprises, according to Kaseya. Three in five respondents (60%) said they believed they could fully recover from a data loss incident in the space of a day.
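The ‘3-2-1’ rule quoted above is mechanical enough to check automatically against a backup inventory. A minimal sketch, assuming a hypothetical inventory format of one record per backup copy (the field names and the optional immutability check are illustrative, not from the article):

```python
def satisfies_3_2_1(copies: list) -> bool:
    """Check the 3-2-1 rule over a list of backup-copy records.

    Each record is a dict like {"media": "disk", "offsite": False}.
    Rule: at least 3 copies, on at least 2 media types, with at least
    1 copy kept off-site.
    """
    media_types = {c["media"] for c in copies}
    return (len(copies) >= 3
            and len(media_types) >= 2
            and any(c["offsite"] for c in copies))

def all_immutable(copies: list) -> bool:
    """The article's second point: every copy should be tamper-proof."""
    return bool(copies) and all(c.get("immutable", False) for c in copies)
```

Running such a check per dataset, rather than per organization, is what turns the rule from a slogan into something auditable.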


Ransomware Moves: Supply Chain Hits, Credential Harvesting

Attack volume remains high. The quantity of victims listed across ransomware groups' data leak sites increased by one-third from September to October, says a report from cybersecurity firm Cyble. Groups listing the most victims included high-fliers Qilin and Akira, newcomer Sinobi - which only appeared in July - and stalwarts INC Ransom and Play. ... After a run of attacks targeting zero-day flaws in managed file transfer software, the group used the same strategy against Oracle E-Business Suite versions 12.2.3 through 12.2.14 to steal data. Clop appears to have targeted two zero-day vulnerabilities, "both of which allow unauthenticated access to core EBS components," giving the group "a fast and reliable entry point, which explains the scale of the campaign," said cybersecurity firm SOCRadar. Oracle issued updates fixing both of those flaws. Data theft tied to that campaign appeared to begin by August, although it didn't come to light until Clop revealed it ... One of the big reasons for ransomware's success has been cryptocurrency, which makes it easier for groups to monetize and cash out their attacks. Another has been the rise of the ransomware-as-a-service business model. This allows for specialization: operators can develop malware and shake down victims, while affiliated business partners focus on hacking, rather than malware development, with both reaping the rewards. Every time a victim pays a ransom, the industry standard is for an affiliate to keep 70% to 80%.


Essential 2026 skills that DevOps leaders need to prioritize

It may sound radical, but you should prepare for a future where DevOps professionals will no longer need to learn programming languages. The DevOps role will shift up more than most people expect, enabling your team members to become supervisory architects rather than hands-on coders. ... DevOps professionals will no longer need to rely on programming languages. Instead, they will use natural language to supervise and orchestrate processes across requirements, planning, development, testing, and deployment. This leads to the elimination of hand-offs between teams and a significant blurring of traditional roles. ... However, for this shift-up to be truly successful and safe in practice, that foundational knowledge of software engineering principles remains vital. Without understanding the why behind what you are asking AI to do, your team cannot evaluate the quality of the output. This lack of evaluation can easily lead to significant risks, such as vulnerabilities that result in security breaches. In the age of AI, human judgment remains as important as ever, but only if it’s informed by a deep understanding of what the AI is being asked to produce. ... As a leader, your challenge is to guide your organization through this transformative period. The future of software development isn’t about AI replacing humans; it’s about AI empowering humans to perform at a higher, more strategic level. 


Building the Future: AI’s Role in Enterprise Evolution

The biggest obstacle we see for AI adoption isn't the technology itself, but the lack of clarity on the purpose for using it. The most critical part of any AI initiative is to understand why you want to use AI and how it can enhance your organisation’s unique attributes. There is no one-size-fits-all approach, since what works for one organisation may not work for others. A healthcare business needs data privacy for patient records, while a small startup’s goal is agility to release new products and sign new deals. These use cases will require different infrastructure investments, and most workloads are not suited to the public cloud. ... Consider AI with a broader view, beyond just the technology itself. Dell approaches AI with three distinct perspectives in mind: the business side, the technical side and the people side. GenAI will provide a 20-30 per cent increase in productivity, eliminating mundane tasks and freeing people to focus on higher value work. Your employees are now available to use that extra time to reimagine processes and outcomes, creating value and efficiencies for the company.
From a people standpoint, the demand for curious, smart, adaptable employees will skyrocket. ... Many of our customers are in the early stages of their AI journey, experimenting with basic applications. Small and basic can have a big impact, so keep pushing forward. It's worth starting with pilot projects as they give you room to test and experiment with an application. 


We Need to Teach the ‘Inuit’ Mindset to Young Computing Engineers

Becoming accustomed to over-provisioned resources has brought further concerns. The decreasing cost of hardware encourages a certain complacency: if code is inefficient in memory or CPU usage, one tends to trust that a more powerful machine or extra memory will solve the problem. ... This mindset contrasts with the traditional discipline of programming education, in which every instruction and every byte mattered, and optimization was an essential part of the computer science student’s training. The point here is that even while leveraging the benefits offered by AI in programming, an excessive dependence on AI-generated solutions and the over-provisioning of resources can undermine the proper development of computational, logical, and algorithmic thinking in future programmers or computing scientists. ... It is important to clarify that this is not about rejecting the use of AI and reverting to a former era of computing. Instead, we should integrate the best of both worlds. We must harness the tremendous potential of AI while instilling in students the ability to evaluate and improve solutions using their own sound judgement. As a direct consequence, a well-trained programmer will think twice before accepting an AI-generated solution if it uses resources disproportionately or does not guarantee adequate resilience when execution scenarios change drastically.


Your Platform is Not an Island: Embracing Evolution in Your Ecosystem

The challenges facing smaller organizations versus larger organizations are really quite different, and the very requirement for a platform is typically indicative of you having multiple teams, so you probably don't really need a platform in a startup, particularly if you've got one 10-star full-stack developer wearing all of those hats. ... On-premises dependencies for your app will increase the number of interfaces and contribute to what we lovingly call application sprawl, and overly distributed architectures. The more teams that you have, the more people that you're probably going to need to speak to, and unfortunately, that means an increased number of working practices, and probably it's going to be far harder to reach any kind of consensus. If you work in a large organization, I'm sure that will resonate with you. ... The more features that you try to predict ahead of time, the more you risk building something that your customers actually don't want. The more minimal your MVP, the more likely your customers will see it as a motel, not a hotel. ... Developers still needed infrastructure knowledge, when we'd kind of sold that vision that they wouldn't need any, they would need little baseline understanding of Kubernetes. Integration with other legacy services across the organization, because they weren't designed by us and didn't always have APIs, was a little bit clunky.

Daily Tech Digest - November 28, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain



Security researchers caution app developers about risks in using Google Antigravity

“In Antigravity,” Mindgard argues, “’trust’ is effectively the entry point to the product rather than a conferral of privileges.” The problem, it pointed out, is that a compromised workspace becomes a long-term backdoor into every new session. “Even after a complete uninstall and re-install of Antigravity,” says Mindgard, “the backdoor remains in effect. Because Antigravity’s core intended design requires trusted workspace access, the vulnerability translates into cross-workspace risk, meaning one tainted workspace can impact all subsequent usage of Antigravity regardless of trust settings.” For anyone responsible for AI cybersecurity, says Mindgard, this highlights the need to treat AI development environments as sensitive infrastructure, and to closely control what content, files, and configurations are allowed into them. ... Swanda recommends that app development teams building AI agents with tool-calling: assume all external content is adversarial; use strong input and output guardrails, including around tool calling; strip any special syntax before processing; implement tool execution safeguards; require explicit user approval for high-risk operations, especially those triggered after handling untrusted content or other dangerous tool combinations; and not rely on prompts for security. System prompts, for example, can be extracted and used by an attacker to influence their attack strategy.
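Two of those recommendations, stripping special syntax from untrusted content and gating high-risk tool calls on explicit approval, can be sketched concretely. This is a minimal illustration under stated assumptions: the tool names, risk tiers, and injection patterns are hypothetical placeholders, not a complete defense, and real guardrails would use a vetted filtering library rather than a few regexes.

```python
import re

# Hypothetical risk tier; a real deployment would classify its own tools.
HIGH_RISK_TOOLS = {"delete_file", "send_payment", "run_shell"}

def sanitize_external_content(text: str) -> str:
    """Strip common prompt-injection markers before the agent sees the text."""
    # Remove pseudo-role directives an attacker might embed in fetched content.
    text = re.sub(r"(?i)(system:|assistant:|ignore previous instructions)", "", text)
    # Drop embedded markup that could smuggle instructions past the model.
    return re.sub(r"<[^>]+>", "", text)

def authorize_tool_call(tool: str, triggered_by_untrusted: bool,
                        user_approved: bool = False) -> bool:
    """Allow low-risk calls; require explicit user approval for high-risk
    tools, and for any call triggered after handling untrusted content."""
    if tool in HIGH_RISK_TOOLS or triggered_by_untrusted:
        return user_approved
    return True
```

The key property is that approval is decided outside the model: no prompt text, however crafted, can flip `authorize_tool_call` to true, which is exactly the "don't rely on prompts for security" point.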


How AI Is Rewriting The Rules Of Work, Leadership, And Human Potential

When a CEO tells his team, "AI is coming for your jobs, even mine," you pay attention. It is rare to hear that level of blunt honesty from any leader, let alone the head of one of the world's largest freelance platforms. Yet this is exactly how Fiverr co-founder and CEO Micha Kaufman has chosen to guide his company through the most significant technological shift of our lifetimes. His blunt assessment: AI is coming for everyone's jobs, and the only response is to get faster, more curious, and fundamentally better at being human. ... We're applying AI to existing workflows and platforms, seeing improvements, but not yet experiencing the fundamental restructuring that's coming. "It is mostly replacing the things we used to do as human beings, acting as robots," Kaufman observes. The repetitive tasks, the research gathering, the document summarizing, these elements where humans brought judgment but little humanity are being automated first. ... It's not enough to use the obvious AI tools in obvious ways. The real value emerges from those who push boundaries, combine systems creatively, or bring exceptional judgment to AI-assisted workflows. Kaufman points to viral videos created with advanced AI tools, noting that their quality stems not from the AI itself but from the operator's genius, experience, creativity, and taste developed over years.


How ‘digital twins’ could help prevent cyber-attacks on the food industry

A digital twin is a virtual replica of any product, process, or service, capturing its state, characteristics, and connections with other systems throughout its life cycle. The digital twin will include the computer system used by the company. It can help because conventional defences are increasingly out of step with cyber-attacks. Monitoring tools tend to detect anomalies after damage occurs. Complex computer systems can often obscure the origins of breaches. A digital twin creates a bridge between the physical and digital worlds. It allows organisations to simulate real-time events, predict what might happen next, and safely test potential responses. It can also help analyse what happened after a cyber-attack to help companies prepare for future incidents. ... A digital twin might be able to avert disaster under this scenario. By combining operational data such as temperature, humidity, or the speed of air flow with internal computing system data or intrusion attempts, digital twins offer a unified view of both system performance and cybersecurity. They enable organisations to simulate cyber-attacks or equipment failures in a safe, controlled digital environment, revealing vulnerabilities before attackers can exploit them. A digital twin can also detect abnormal temperature patterns, monitor the system for malicious activity, and perform analysis after a cyber-attack to identify the causes.


Why password management defines PCI DSS success

When you dig into real incidents involving payment data, a surprising number come down to poor password hygiene. PCI DSS v4.0 raised the bar for authentication, and the responsibility sits with security leaders to turn those requirements into workable daily habits for users and admins. ... Requirement 8 asks organizations to verify the identity of every user with strong authentication, make sure passwords and passphrases meet defined strength rules, prevent credential reuse, limit attempts, and store credentials securely. Passwords need to be at least 12 characters long, or at least 8 characters when a system cannot support longer strings. These rules line up with guidance from NIST SP 800-63B, which recommends longer passphrases, screening against common word lists, and hashing methods that protect stored secrets. ... PCI DSS requires that access be traceable to an individual and that shared accounts be minimized and controlled. When passwords live across multiple channels, it becomes nearly impossible to show auditors reliable evidence of access history. Even if the team is trying hard, the workflow itself creates gaps that no policy document can fix. ... Some CISOs view password managers as convenience tools. PCI DSS v4.0 shows that they are closer to compliance tools because they make it possible to enforce identity controls across an organization.
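The three Requirement 8 rules quoted above (length floor, common-word screening, protected storage) translate directly into code. A minimal sketch, assuming Python's standard library: the function names are illustrative, and scrypt is used here as a stand-in for whatever slow, salted KDF (Argon2, bcrypt, PBKDF2) an organization has standardized on.

```python
import hashlib
import secrets

MIN_LENGTH = 12  # PCI DSS v4.0 floor; 8 only where a system cannot support more

def meets_pci_length(password: str) -> bool:
    """Check the v4.0 length requirement (assumes the system supports 12+)."""
    return len(password) >= MIN_LENGTH

def not_in_blocklist(password: str, common_words: set) -> bool:
    """NIST SP 800-63B style screening against a common-password list."""
    return password.lower() not in common_words

def hash_for_storage(password: str) -> str:
    """Store only a salted, slow hash, never the password itself."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt.hex() + ":" + digest.hex()
```

Note that the salt is generated per password and stored alongside the digest, so verification re-derives the hash with the same salt; that per-user salt is what defeats precomputed lookup tables.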



AI fluency in the enterprise: Still a ‘horseless carriage’

Companies are tossing AI agents onto existing processes, but a transformative change — where AI is the boss — is still far away. That was the view of IT leaders at this year’s Microsoft Ignite conference who’ve been putting AI agents to work, mostly with legacy processes. The IT leaders discussed their efforts during a conference panel at the event earlier this month. “We’re probably living in some version of the horseless carriage — we haven’t got to the car yet,” said John Whittaker, director of AI platform and products at accounting and consulting firm EY. ... Pfizer is very process-centric, he said, stressing that the goal is not to reinvent processes right out of the gate. The company is analyzing how AI works for them, gaining confidence in the technology before reorganizing processes within the AI lens. “Where we’re definitely heading … is thinking about, ‘I’ve solved this process, I’ve been following exactly the way it exists today. Now let’s blow it up and reimagine it…’ — and that’s exciting,” he said. ... Lumen is now looking at where it wants the business to be in 36 months and linking it to AI agents and AI-native plans. “We’re … working back from that and ensuring that we have the right set of tools, the right set of training, and the right set of agents in order to enable that,” he said. Every new Lumen employee in Alexander’s connected ecosystem group gets a Copilot license. The technology has helped speed up the process of understanding acronyms and historical trends within the company.


Creating Impactful Software Teams That Continuously Improve

When you are a person who prefers your job to be strictly defined, with clear boundaries, then you feel supported instead of stifled by a boss who checks in on you regularly. In the same culture, you will feel relaxed, happy, and content, which will in turn allow you to bring your best to your job and deliver to your strengths, Žabkar Nordberg said. You do not want to have employees who will be extensions of yourself, Žabkar Nordberg said. Instead, you want people who will bring their own thoughts, their own solutions, and in many ways be different and better than yourself. ... Provide guidance, step away, and let people have autonomy within those constraints. You might say something like "I would like you to focus on improving our customer retention. Be aware that legal regulations require all steps in our current onboarding journey to be present, but we have flexibility in how we execute them as the user experience is not prescribed". This gives people guidance and focuses them, but still gives them the autonomy to bring their own experiences and find their own solutions. ... We want people to show initiative and proactively bring their own thoughts, improvements, and worries. Clear communication and an understanding of how people work will help them do that, Žabkar Nordberg said. Psychological safety underpins trust, autonomy, and communication; it is required for them to work effectively, he concluded.


Empathetic policy engineering: The secret to better security behavior and awareness

Insecure behavior is often blamed on users, when the problem frequently lies in the measure itself. In IT security research, the focus is often on individual user behavior — for example, on whether secure behavior depends on personality traits. The question of how well security measures actually fit the reality of work — that is, how likely they are to be accepted in everyday practice — is neglected. For every threat, there are usually several available security measures. But differences in effort, acceptance, compatibility, or complexity are often not taken into account in practice. Instead, security or IT departments often make decisions based solely on technical aspects. ... Security measures and guidelines are often communicated in a way that doesn’t resonate with users’ work reality because they don’t aim to engage employees and motivate them: for example, through instructions, standard online training, or overly playful formats like comics that employees don’t take seriously. ... The limited success of many security measures is not solely due to the users — often it’s unrealistic requirements, a lack of involvement, and inadequate communication. For security leaders, this means: Instead of relying on education and sanctions, a strategic paradigm shift is needed. They should become a kind of empathetic policy architect whose security strategy not only works technically but also resonates on a human level.


Agentic AI is not ‘more AI’—it’s a new way of running the enterprise

Agentic AI marks a shift from simply predicting outcomes or offering recommendations to systems that can plan tasks, take actions and learn from the results within defined guardrails. In practical terms, this means moving beyond isolated, single-task copilots towards coordinated “swarms” of agents that continually monitor signals, trigger workflows across systems, negotiate constraints and complete loops with measurable outcomes. ... A major barrier is trust and control. Leaders remain cautious about allowing software to take autonomous actions. Graduated autonomy provides a path forward: beginning with assistive tools, moving to supervised autonomy with reversible actions and eventually deploying narrow, fully autonomous loops when KPIs and rollback mechanisms have been validated. Lack of clarity on value is another obstacle. Impressive demonstrations do not constitute a strategy. Organisations should use a jobs-to-be-done perspective and tie each agent to a specific financial or risk objective, such as days-sales-outstanding, mean time to resolution, inventory turns or claims leakage. Analysts have warned that many agentic initiatives will be cancelled if value remains vague, so clear scorecards and time-boxed proofs of value are essential. Data readiness is a further challenge. Weak lineage, uncertain ownership and inconsistent quality stop AI scaling efforts in their tracks.


6 strategies for CIOs to effectively manage shadow AI

“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.” ... “The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring. ... “Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. ... “Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories: what’s permitted, what needs approval, and what’s prohibited, and communicating that framework through leadership briefings, onboarding, and internal portals. ... Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.


It’s Time to Rethink Access Control for Modern Development Environments

When faced with the time-consuming complexity of managing granular permissions across dozens of development tools, most VPs of Engineering and CTOs opt for the path of least resistance, granting broad administrative privileges to entire engineering teams. It’s understandable from a productivity standpoint; nobody wants to be a bottleneck when a critical release is imminent, or explain to the CEO why they missed a market window because a developer couldn’t access a repository. However, when everyone has admin privileges, attackers who gain access to just one set of credentials can do tremendous damage. They gain not just access to sensitive code and data, but the ability to manipulate build processes, insert malicious code, or establish persistent backdoors. This problem becomes even more dangerous when combined with the prevalence of shadow IT, non-human identities, and contractor relationships operating outside your security perimeter. ... The answer to stronger security that doesn’t hinder developer productivity lies in implementing just-in-time permissioning within the SDLC, a concept successfully adopted from cloud infrastructure management that can transform how we handle development access controls. The approach is straightforward: instead of granting permanent administrative access to everyone, take 90 days to observe what developers actually need to do their jobs, then right-size their permissions accordingly. 
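The observe-then-right-size approach described above can be sketched in a few lines. This is an illustrative sketch only: the log format, field names, and 90-day window are assumptions standing in for whatever an organization's identity provider and audit logs actually emit.

```python
from datetime import datetime, timedelta

def right_size(access_log: list, observed_days: int = 90) -> set:
    """Return the set of permissions actually exercised during the
    observation window. Log entries look like:
        {"permission": "repo:read", "ts": datetime(...)}
    """
    cutoff = datetime.now() - timedelta(days=observed_days)
    return {e["permission"] for e in access_log if e["ts"] >= cutoff}

def jit_grant(requested: str, baseline: set, approved: bool = False) -> bool:
    """Grant right-sized baseline permissions automatically; anything
    outside the baseline requires a just-in-time approval step."""
    return requested in baseline or approved
```

The point of the sketch is the asymmetry: everyday permissions stay frictionless, while the admin-level access that used to be permanent becomes a short-lived, logged exception.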

Daily Tech Digest - November 27, 2025


Quote for the day:

“Let no feeling of discouragement prey upon you, and in the end you are sure to succeed.” -- Abraham Lincoln


The identity mess your customers feel before you do

Over half of organizations rely on developers who are not specialists in authentication. These teams juggle identity work alongside core product duties, which leads to slow progress, inconsistent implementation, and recurring defects. Decision makers admit that they underestimate the time developers spend on authentication. In many organizations, identity work drops down the backlog until a breach, an outage, or lost revenue forces renewed attention. Context switching is common. Developers move between authentication, compliance requirements, and product enhancements, which increases the likelihood of mistakes and slows delivery. ... Authentication issues undermine revenue as well as security. Organizations report that user dropoff during login, delays in engineering delivery, and abandoned transactions stem from outdated authentication flows. These issues rarely show up as a single budget line, but they accumulate into lost revenue and higher operating costs. ... Agentic AI is set to make customer identity more complicated. Automated activity will increase on every front, from routine actions taken on behalf of legitimate users to large scale attacks that target login and account creation flows. Security teams will face more traffic to evaluate and less certainty about what reflects user intent. Attackers will use AI to run high volumes of account takeover attempts and to create synthetic identities that blend in with normal behavior.


Bank of America's Blueprint for AI-Driven Banking

Over the past decade, Bank of America has invested more than $100 billion in technology. "Technology is a strategic enabler that now allows AI and automation to expand across every part of the organization, stretching from consumer services to capital markets," Bank of America CEO Brian Moynihan said. This focus on scale also shapes how the bank approaches gen AI. ... The bank's decade-long AI effort now supports 58 million interactions each month across customer support, transactions and informational requests. Erica has also become an internal platform. Erica for Employees has "reduced calls into the IT service desk by 50%," Bank of America said. This internal role matters because it shows how a consumer-grade AI system can evolve into an enterprise asset - one that assists with IT queries, operational troubleshooting and employee guidance across large distributed teams. ... The bank's CashPro Data Intelligence suite includes AI-driven search, forecasting and insights, and recently won the "Best Innovation in AI" award. These capabilities bring predictive analytics directly into the operational core of corporate treasury teams. By analyzing behavioral cash flows, transaction histories, seasonality and market data, the platform can generate forward-looking liquidity projections and actionable insights. For enterprises, this means fewer manual reconciliation cycles, improved liquidity planning and faster financial decision-making. 
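Forward-looking liquidity projection from seasonal cash-flow history starts, at its simplest, from a seasonal baseline. The sketch below is a generic seasonal-naive forecast for monthly data — an illustration of the idea only, not Bank of America's CashPro model, and the flow figures are invented:

```python
def seasonal_naive(history, season_length, horizon):
    """Forecast each of the next `horizon` periods as the observed value
    one full season earlier (a common baseline for seasonal series)."""
    if horizon > season_length:
        raise ValueError("this sketch supports at most one season ahead")
    return [history[len(history) - season_length + h] for h in range(horizon)]

# Two years of hypothetical monthly net cash flows (units arbitrary)
flows = [120, 95, 110, 130, 150, 170, 165, 160, 140, 135, 125, 180,
         125, 100, 115, 135, 155, 175, 170, 165, 145, 140, 130, 185]
print(seasonal_naive(flows, season_length=12, horizon=3))
# → [125, 100, 115]  (next Jan–Mar mirror last Jan–Mar)
```

Production platforms layer trend, behavioral, and market signals on top of a baseline like this; the baseline is what any richer model has to beat.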


Cybersecurity Is Now a Core Business Discipline

Cybersecurity is now a core business discipline, not an IT specialty. When a household name like Marks & Spencer can take a $400 million hit to trading profits after a major cyber incident, we’ve moved beyond “technology risk” into enterprise resilience. I often say the bad actors only need to get lucky once; defenders must be effective 24/7. That asymmetry won’t vanish. The job of leadership is to run with it; to accept the pace of the threat and build organizations that can withstand, respond, and keep moving. ... If bad actors only need to be lucky once, then your business must be designed to fail safely. That means strong identity controls, multi-factor authentication everywhere it makes sense, segmentation that limits lateral movement, and backups that are both tested and recoverable. None of this is glamorous. All of it is decisive. I’ve yet to meet a breached organization that regretted investing in the basics. Engineer for better human decisions. Traditional awareness training has diminishing returns if it’s divorced from real work. Replace generic modules with just-in-time prompts in the tools people actually use. Add controlled friction to high-risk workflows: payment changes, supplier onboarding, privileged access approvals. Normalize “pause and verify” by making it easy and expected. Culture is created by what gets rewarded and what gets made simple.


Building Your Work Digital Twin Starts With The Video You Already Have

This concept is far from new. We've already seen AI-generated assistants, virtual trainers and automated knowledge bases. But what separates a true digital twin from a chatbot or a script is the ability to capture how we communicate and not just what we say. That's where video—where tone, style, facial expression and more are clearly displayed—becomes invaluable. ... The idea of creating another you that actually delivers requires a concerted effort from both individuals and organizations. But it starts with centralizing and organizing the video content that already exists across departments, including training sessions, customer interactions, leadership updates and team calls. Assembling the video is just the start, as curating what matters is key. Prioritize videos that demonstrate clarity, professionalism and authenticity. ... As AI becomes more prevalent, authenticity, not automation, is emerging as a competitive differentiator. Customers, partners and employees still crave the sense of a real, trustworthy voice, and human digital twins give organizations a way to scale that presence. These are not fabricated influencers or AI puppets but extensions of real people, grounded in consent and context. Of course, this shift also demands ethical guardrails: clear usage boundaries, transparency about when digital twins are speaking and secure storage of identity data. When done responsibly, it can be a powerful evolution of human-machine collaboration that keeps people at the center.


AI adoption blueprint: Driving lasting enterprise value in India

The challenge employees face with AI adoption in Indian enterprises is not rooted in capability gaps or lack of enthusiasm, but stems from insufficient contextual understanding. Organisational experience reveals that forcing users to move between disparate systems, craft their own prompts, or proactively seek AI assistance without much experience often results in digital friction, underutilisation or complete abandonment. These challenges intensify across diverse workforces spanning multiple languages and regions. ... Building workforce confidence around AI remains a key hurdle given the uneven distribution of AI fluency across teams—even within digitally advanced Indian IT ecosystems. Overcoming this requires embedding just-in-time learning resources tailored to user roles and scenarios directly inside the applications employees use daily. Offering interactive onboarding, scenario-based microlearning, and guidance in multiple languages not only meets users where they are but respects the linguistic and cultural diversity that characterises India’s workplaces. This approach helps alleviate hesitation, foster trust, and accelerate AI fluency across complex organisations. ... Treating adoption as a continuous process that evolves alongside workflows, user requirements, and business priorities ensures AI continues to deliver value beyond launch phases, achieving sustainable scale. 


A CIO’s 5-point checklist to drive positive AI ROI

“Start by assigning business ownership,” advises Srivastava. “Every AI use case needs an accountable leader with a target tied to objectives and key results.” He recommends standing up a cross-functional PMO to define lighthouse use cases, set success targets, enforce guardrails, and regularly communicate progress. Still, even with leadership in place, many employees will need hands-on guidance to apply AI in their daily work. ... CIOs should also view talent as a cornerstone of any AI strategy, adds CMIT’s Lopez. “By investing in people through training, communication, and new specialist roles, CIOs can be assured that employees will embrace AI tools and drive success.” He adds that internal hackathons and training sessions often yield noticeable boosts in skills and confidence. Upskilling, for instance, should meet employees where they are, so Asana’s Srivastava recommends tiered paths: all staff need basic prompt literacy and safety training, while power users require deeper workflow design and agent-building knowledge. ... The resounding point is to set metrics early on, and not fall into the anti-patterns of not tracking signals or value gained. “Measurement is often bolted on late, so leaders can’t prove value or decide what to scale,” says Srivastava. “The remedy is to begin with a specific mission metric, baseline it, and embed AI directly in the flow of work so people can focus on higher-value judgment.”


The coming storm for satellites

Although such events are uncommon, the list of dangers posed by space weather is daunting. In addition to atmospheric drag reaching into LEO, Earth’s radiation belt can be changed by the injection of high-energy electrons, plunging geostationary satellites at high altitudes into deep-space conditions and stripping them of the shielding of Earth’s magnetosphere. Even inside the relative protection of the planet’s orbits, radiation can damage electronics, charged particles from the sun can electrify the body of a spacecraft, potentially powering a discharge between two differently charged sections, and solar cells can degrade faster during solar storms. A single space weather event can cause the same wear and tear as an entire year of normal operation. ... Nonetheless, the concern is “that a big solar event could disable a large number of satellites and cause a major increase in the collision risk, particularly in the very busy LEO orbit domain,” Machin says. “We need to ensure that such an event does not risk our ability to continue using space in the future. We need to always plan for space sustainability.” Machin alludes to the danger of Kessler Syndrome, a scenario in which debris density in low-Earth orbit becomes so great that the destruction of satellites and newly launched vehicles becomes probable, thereby multiplying debris density, resulting in unusable orbits, and trapping the human race on Earth for thousands of years.


How intelligent systems are evolving: Rob Green, CDO at Insight Enterprises

We operate on a zero-trust model and corresponding policies. An additional advantage of being a major Microsoft partner is that we received early access to ChatGPT, which we deployed internally as “InsightGPT.” We launched it early to develop AI capabilities within our services, solutions, and IT teams. We recognised the need for clear guidelines around AI usage and deployment. Our AI usage policies, first introduced two years ago, ensure employees understand how to implement and experiment with AI responsibly. These policies are continuously updated; our most recent revision was released three weeks ago. Regulatory and compliance requirements vary by region, and our policies are adapted accordingly. ... First, we ensure awareness and education across the organization. Not everyone needs to be an AI developer, but we want employees to be fluent with AI tools and understand how to use them productively. We recently launched the AI Flight Academy, which includes five proficiency levels. A large portion of employees is expected to reach advanced levels. Our mission has evolved: we aim to be a leading AI-first solutions integrator. To support this, my team is building platforms that enable agentic capabilities across shared functions such as finance, HR, IT, warehouse operations, and marketing.


Agentic HR: from static roles to growth roles with AI co-pilots

When people cannot see progress, they stop stretching. In many firms the only formal feedback loop is the annual review. That is too slow for real learning and it misses the small wins that power engagement. The alternative is to treat every role as a platform for growth. You design work so that capability increases by doing the work itself. This is where agentic HR comes in. ... Co-pilots should live where work already happens. That means chat, documents, code, tickets, and task boards. The system watches patterns, respects privacy settings, and offers context-aware prompts. ... People-facing AI must earn trust. That starts with shared governance. HR and technology leaders should set rules for data minimisation, explainability, and bias monitoring. They should also be clear on when AI recommends and when a human decides. Two reference points help. The EU AI Act introduces a risk-based approach with specific duties for higher-risk use cases and transparency expectations for generative systems. This shapes how enterprises should document and oversee AI that touches employees. The NIST AI Risk Management Framework provides practical guidance on mapping risks, measuring impacts, and governing models over time. It is vendor-neutral and it emphasises continuous monitoring rather than one-time checks. Enterprises can also look to the new ISO and IEC standard for AI management systems.


The Three Keys to AI in Banking: Compliance, Explainability and Control

When a new technology like AI enters an industry, the goals are simple: Save money, save time, and ideally, increase revenue. According to a 2023 report from McKinsey, AI has the potential to reduce operating costs in banking by 20-30% by automating manual processes, cutting down on errors and saving time. ... Finance is one of the most heavily regulated industries, and rightfully so. When you’re managing transactions and people’s hard-earned money, there is little room for error. As banks adopt AI, they need full disclosure for what is happening every step of the way. ... To close that gap, financial institutions need to prioritize not only technical accuracy but also interpretability. Investing in training, cross-functional collaboration and governance frameworks that support explainable AI will be key to long-term success. The banks that succeed will be the ones that use AI systems their regulators can audit, their teams can trust, and their customers can understand. ... Trust is the currency of this industry, which is why adoption looks different here than it does in consumer tech. Rather than rushing into full-scale adoption, many banks are starting with pilot programs that have tightly scoped risk exposure. ... Done right, AI can help institutions expand credit more inclusively, flag risks earlier and give underwriters clearer insights without sacrificing compliance.

Daily Tech Digest - November 26, 2025


Quote for the day:

“There is only one thing that makes a dream impossible to achieve: the fear of failure.” -- Paulo Coelho



7 signs your cybersecurity framework needs rebuilding

The biggest mistake, Pearlson says, is failing to recognize that the current plan is out of date or simply not working. Breaches happen, but that doesn’t always mean your cyber framework needs rebuilding. It does, however, indicate that the framework needs to be rethought and redesigned. ... “If your framework hasn’t kept pace with evolving threats or business needs, it’s time for a rebuild.” Cyber threats are always evolving, so staying proactive with regular reviews and fostering a culture of cybersecurity awareness will help catch issues before they become crises, Bucher says. ... “The cybersecurity landscape has evolved rapidly, especially with the rise of generative AI — your framework should reflect these shifts.” McLeod recommends a complete framework review every other year, combined with a cursory review during the gap years. “This helps to ensure that the framework stays aligned with evolving threats, business changes, and regulatory requirements.” Ideally, security leaders should always have their security framework in mind while maintaining a rough, running list of areas that could be improved, streamlined, or clarified, McLeod suggests. ... If an organization is stuck in a cycle of continually chasing alerts and incidents, as well as reporting events after the fact instead of performing predictive threat assessments, data analysis, and forward planning, it’s time for a change, Baiati advises. 


Your Million-Dollar IIoT Strategy is Being Sabotaged by Hundred-Dollar Radios

The ambition is clear: to create hyper-efficient, data-driven operations in a market expected to exceed $1.6 billion by 2030. Yet, a fundamental paradox lies at the heart of this transformation. While we architect complex digital twins and deploy sophisticated AI models, the foundational tools entrusted to our most valuable asset—the frontline workforce—are often decades old, disconnected, and failing at an alarming rate. ... Data shows that one in four organizations loses more than an entire day of productivity every month simply dealing with broken technology. The primary culprits are as predictable as they are preventable: nearly half of workers cite battery problems (48.4%) and physical damage (46.8%) as the most common causes of failure. ... While conversations about this crisis often focus on pay and career paths, Relay’s research reveals a more immediate, tangible cause: the daily frustration of using broken tools. One in four frontline workers already feel their equipment is second-class compared to what their corporate counterparts use, and a staggering 43% say they’d be less likely to quit if guaranteed access to modern, automatically upgraded devices. ... Beyond reliability, it’s important to address the data black hole created by legacy, disconnected tools. Every day, frontline teams generate thousands of hours of spoken communication—a rich stream of unstructured data filled with maintenance alerts, safety concerns, and process bottlenecks. 


Ask the Experts: Validate, don't just migrate

"Refactoring code is certainly a big undertaking. And if you start before you have good hygiene and governance, then you're just setting yourself up for failure. Similarly, if you haven't tagged properly, you have no way to attribute it to the project, and that becomes a cost problem." ... "If you do conclude [that migration is necessary], then you really must make sure the application is architected right. A lot of times, these workloads weren't designed for the cloud world, so you must adapt them and deliberately architect them for a cloud workload. "[To prepare a mission-critical application], it's key to look at the appropriateness, operating system [and] licenses. Sometimes, there are licenses tied to CPUs or other things that might introduce issues for you as well, so regression, latency and performance testing will be mandatory. ... "[IT leaders must also understand] the risks and costs associated with taking things into the cloud, and the pros and cons of that versus leaving it alone. Because old stuff, whether it was [procured] yesterday or five years ago, is inherently going to be vulnerable from a cybersecurity standpoint. Risk No. 2 is interoperability and compatibility, because old stuff doesn't talk to new stuff. And the third one is supportability, because it's hard to find old people to support old systems. ... "Sometimes, people have the false sense that if it's in cloud, then I'm all set. Everything is available, and everything is highly redundant. And it is, if you design [the application] with those things in mind.


Heineken CISO champions a new risk mindset to unlock innovation

I started as an auditor and later led a cyber defense team. It’s easy to fall into the black-and-white trap of being the function that always says “no” or speaks in cryptic tech jargon. It’s a scary world out there with so many attacks happening in every industry. The classical reaction of most security professionals is to tighten defenses and impose even more rules. ... CISOs need to shift the mindset from pure compliance to asking: How does our cyber strategy support the business and its values? What calculated risks do we want the business to take? Where do we need their attention and help to embed security into the DNA of our people and our company? ... Be visible and approachable. Share the lessons that shaped you as a leader, what worked, what didn’t, and the principles that guide your decisions. I’m passionate about building diverse teams where everyone gets the same opportunities, no matter their age, gender, or background. Diversity makes us stronger, and when there’s trust and openness, it sparks mentoring, coaching, and knowledge sharing. Make coaching and mentoring non-negotiable, and carve out time for it. It’s easy to push aside when you’re busy putting out security fires, but neglecting people’s growth and well-being is a big miss. Be authentic and vulnerable, and walk the talk. Share the real stories, including failures and what made you stronger. Too often, people focus only on titles, certifications, and tech skills.


Data-Driven Enterprise: How Companies Turn Data into Strategic Advantage

A data-driven enterprise is not defined by the number of dashboards or analytics tools it owns. It’s defined by its ability to turn raw information into intelligent action. True data-driven organizations embed data thinking into every level of decision-making, from boardroom strategy to day-to-day operations. ... A modern data architecture is not a single platform, but an interconnected ecosystem designed to balance agility, governance, and scalability. ... As organizations mature in their data journey, they are moving away from rigid, centralized models that rely on a single source of truth. While centralization once ensured control, it often created bottlenecks, slowing innovation and limiting agility. ... We are entering an era of data agents: self-learning systems capable of autonomously detecting anomalies, assessing risks, and forecasting trends in real time. These intelligent agents will soon become the invisible workforce of the enterprise, operating across domains: predicting supply chain disruptions, optimizing IT performance, personalizing customer journeys, and ensuring compliance through continuous monitoring. Their actions will reshape not only operations but also how organizations think about governance, accountability, and human oversight. For architects, this shift represents both a challenge and an extraordinary opportunity. The role is evolving from that of a data custodian focused on structure and governance to an ecosystem designer who engineers environments where data and AI can coexist, learn, and continuously create value.
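The anomaly-detection half of such data agents often begins far simpler than the framing suggests: flag points that deviate strongly from a metric's recent distribution, then hand them to richer models or humans. A minimal z-score sketch, where the threshold and the sample series are illustrative only:

```python
import statistics

def zscore_anomalies(series, threshold=3.0):
    """Return indices of points more than `threshold` standard
    deviations from the series mean."""
    mu = statistics.fmean(series)
    sigma = statistics.pstdev(series)
    if sigma == 0:  # constant series: nothing stands out
        return []
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

latencies = [10] * 20 + [100]  # a steady metric with one spike
print(zscore_anomalies(latencies))
# → [20]  (only the spike is flagged)
```

An autonomous agent would run a check like this continuously over streaming windows and attach context (which system, which risk) before escalating.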


10 benefits of an optimized third-party IT services portfolio

By entrusting day-to-day IT operations to trusted providers, organizations can reallocate internal resources toward higher-value initiatives such as digital transformation, automation, and product innovation. This accelerates adoption of emerging technologies, and allows internal teams to deepen business expertise, strengthen cross-functional collaboration, and focus on driving growth where it matters most. ... A well-structured third-party IT services portfolio can provide flexibility to scale up or down based on business needs. This is particularly valuable for CEOs who need to adapt to changing market conditions and seize growth opportunities. Securing talent in the market today is challenging and time consuming, so tapping into the talent pools of your strategic IT services partner base allows organizations to leverage their bench strength to fill immediate needs for talent. ... IT service providers continuously invest in advanced tech and talent development, enabling clients to benefit from cutting-edge innovations without bearing the full cost of adoption. As AI, automation, and cybersecurity evolve, providers offer the subject matter expertise and tools organizations need to stay ahead of disruption. ... With operational stability ensured through a balance of internal talent and trusted third parties, CIOs can dedicate more focus to long-term strategic initiatives that fuel growth and innovation. 


Modernizing SOCs with Agentic AI and Human-in-the-Loop: A Guide to CISOs

Traditional SOCs were not built for today’s speed and scale. Alert fatigue, manual investigations, disconnected tools, and talent shortages all contribute to the operational drag. Many security leaders are stuck in a reactive loop with no clear path to improvement. ... Legacy SOCs rely heavily on outdated technologies and rule-based detection, generating high volumes of alerts, many of which are false positives, leading to analyst burnout. Analysts are compelled to manually inspect and triage a deluge of meaningless signals, making the entire effort unsustainable. ... Before transformation can happen, one needs to understand where one stands. This can be accomplished with key benchmarking metrics for SOC performance, such as MTTD (Mean time to detect), MTTR (Mean time to respond), case closure rates, and tool effectiveness. ... Agentic AI represents the next evolution of AI-powered cybersecurity, which is modular, explainable, and autonomous. Through a coordinated system of AI agents, the Agentic SOC continuously responds and adapts to the evolving security environment in real time. It is designed to accelerate threat detection, investigation, and response by 10x, bringing speed, precision, and clarity to every function of SecOps. Agentic AI is the technology shift that changes the game. Unlike traditional automation, Agentic AI is decision-oriented, self-improving, and always operating with human-in-the-loop for oversight.
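The benchmarking metrics above fall straight out of incident timestamps. A sketch using hypothetical incident records, with MTTR measured from detection to resolution (one common convention; some teams measure from occurrence instead):

```python
from datetime import datetime as dt

# Hypothetical incident timeline: (occurred, detected, resolved)
incidents = [
    (dt(2025, 11, 1, 9, 0),  dt(2025, 11, 1, 9, 30),  dt(2025, 11, 1, 10, 30)),
    (dt(2025, 11, 2, 14, 0), dt(2025, 11, 2, 15, 30), dt(2025, 11, 2, 17, 30)),
]

def mean_minutes(deltas):
    """Average a list of timedeltas, expressed in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

mttd = mean_minutes([det - occ for occ, det, _ in incidents])  # detection latency
mttr = mean_minutes([res - det for _, det, res in incidents])  # response latency
print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min")
# → MTTD: 60 min, MTTR: 90 min
```

Baselining these numbers before any transformation is what makes a later "10x" claim testable rather than rhetorical.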


3 SOC Challenges You Need to Solve Before 2026

2026 will mark a pivotal shift in cybersecurity. Threat actors are moving from experimenting with AI to making it their primary weapon, using it to scale attacks, automate reconnaissance, and craft hyper-realistic social engineering campaigns. ... Attackers have mastered evasion. ClickFix campaigns trick employees into pasting malicious PowerShell commands themselves. LOLBins are abused to hide malicious behavior. Multi-stage phishing hides behind QR codes, CAPTCHAs, rewritten URLs, and fake installers. Traditional sandboxes stall because they can't click "Next," solve challenges, or follow human-dependent flows. Result? Low detection rates for the exact threats exploding in 2025 and beyond. ... Thousands of daily alerts, mostly false positives. An average SOC handles 11,000 alerts daily, with only 19% worth investigating, according to the 2024 SANS SOC Survey. Tier 1 analysts drown in noise, escalating everything because they lack context. Every alert becomes a research project. Every investigation starts from zero. Burnout hits hard. Turnover doubles, morale tanks, and real threats hide in the backlog. By 2026, AI-orchestrated attacks will flood systems even faster, turning alert fatigue into a full-blown crisis. ... From a financial leadership perspective, security spending often feels like a black hole: money is spent, but risk reduction is hard to quantify. SOCs are challenged to justify investments, especially when security teams seem to be a cost center without clear profit or business-driving impact.
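The SANS figures translate directly into triage load. A back-of-envelope sketch — the per-alert triage time is an assumption for illustration, not a survey figure:

```python
DAILY_ALERTS = 11_000    # average SOC volume (2024 SANS SOC Survey)
ACTIONABLE_RATE = 0.19   # share worth investigating (same survey)
MINUTES_PER_TRIAGE = 2   # assumed average time to dismiss one noise alert

noise_alerts = DAILY_ALERTS * (1 - ACTIONABLE_RATE)
hours_on_noise = noise_alerts * MINUTES_PER_TRIAGE / 60
analyst_shifts = hours_on_noise / 8  # 8-hour shifts spent purely on noise

print(f"{noise_alerts:.0f} noise alerts/day ≈ {hours_on_noise:.0f} analyst-hours"
      f" ≈ {analyst_shifts:.1f} full shifts")
# → 8910 noise alerts/day ≈ 297 analyst-hours ≈ 37.1 full shifts
```

Even at an optimistic two minutes per dismissal, the noise alone consumes dozens of analyst-shifts daily, which is the quantitative core of the burnout argument.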


Digital surveillance tools are reshaping workplace privacy, GAO warns

Privacy concerns intensify when surveillance data feeds into automated systems that evaluate performance, set productivity metrics, or flag workers for potential discipline. GAO found that employers often rely on flawed benchmarks and incomplete measurements. Tools rarely capture the full range of work performed, such as research, mentoring, reading, or off-screen tasks, and frequently misinterpret normal behavior as inefficiency. When employers trust these tools “at face value,” the report notes, workers can be unfairly labeled unproductive or noncompliant despite doing their jobs well. ... Meanwhile, past federal efforts to issue guidance on reducing surveillance related harms such as transparency practices, human oversight, and safeguards against discriminatory impacts have been rescinded or paused since January by the Trump administration as agencies reassess their policy priorities. GAO also notes that existing federal privacy protections are narrow. The Electronic Communications Privacy Act restricts covert interception of communications, but it does not cover most forms of digital monitoring, such as keystroke logging, location tracking, biometric data collection, or algorithmic productivity scoring. ... The report concludes that while digital surveillance can improve safety, efficiency, and health monitoring, its benefits depend wholly on how employers use it.


How to avoid becoming an “AI-first” company with zero real AI usage

A competitor declares they’re going AI-first. Another publishes a case study about replacing support with LLMs. And a third shares a graph showing productivity gains. Within days, boardrooms everywhere start echoing the same message: “We should be doing this. Everyone else already is, and we can’t fall behind.” So the work begins. Then come the task forces, the town halls, the strategy docs and the targets. Teams are asked to contribute initiatives. But if you’ve been through this before, you know there’s often a difference between what companies announce and what they actually do. Because press releases don’t mention the pilots that stall, or the teams that quietly revert to the old way, or even the tools that get used once and abandoned. ... By then, your company’s AI-first mandate will have set into motion departmental initiatives, vendor contracts and maybe even some new hires with “AI” in their titles. The dashboards will be green, and the board deck will have a whole slide on AI. But in the quiet spaces where your actual work happens, what will have meaningfully changed? Maybe you'll be like the teams that never stopped their quiet experiments. ... That’s the invisible architecture of genuine progress: patient, and completely uninterested in performance. It doesn't make for great LinkedIn posts, and it resists grand narratives. But it transforms companies in ways that truly last. Every organization is standing at the same crossroads right now: look like you’re innovating, or create a culture that fosters real innovation.

Daily Tech Digest - November 25, 2025


Quote for the day:

“Being kind to those who hate you isn’t weakness, it’s a different level of strength.” -- Dr. Jimson S


Invisible battles: How cybersecurity work erodes mental health in silence and what we can do about it

You’re not just solving puzzles. You’re responsible for keeping a digital fortress from collapsing under relentless siege. That kind of pressure reshapes your brain and not in a good way. ... One missed patch. One misconfigured access role. One phishing click. That’s all it takes to trigger a million-dollar disaster or worse: erode trust. You carry that weight. When something goes wrong, the guilt cuts deep. ... The business sees you as the blocker. The board sees you after the breach. And if you’re the lone cyber lead in an SME? You’re on an island, with no lifeboat. No peer to talk to, no outlet to decompress. Just mounting expectations and a growing feeling that nobody really gets what you do. ... The hero narrative still reigns; if you’re not burning out, you’re not trying hard enough. Speak up about being overwhelmed? You risk looking weak. Or worse, replaceable. So you hide it. You overcompensate. And eventually, you break, quietly. ... They expect you to know it all, yesterday. Certifications become survival badges. And with the wrong culture, they become the only form of recognition you get. Systemic chaos builds personal crisis. The toll isn’t abstract. It’s physical, emotional and measurable. ... Cybersecurity professionals are fighting two battles. One is against adversaries. The other is against a system that expects perfection, rewards self-sacrifice and punishes vulnerability.


How to Build Engineering Teams That Drive Outcomes, not Outputs

Aligning teams around clear outcomes reframes what success looks like. They go from saying “this is what we shipped” to “this is what changed” as their role evolves from delivering features to delivering meaningful solutions. ... One way is by changing how teams refer to themselves. This might sound simplistic, but a simple shift in team name acts as a constant reminder that their impact is tethered to customer and business outcomes. ... Leaders should treat outcome-based teams as dynamic investments. Rigid predictions are the enemy of innovation. Instead, teams should regularly reevaluate goals, empower adaptation, and allow KPIs to evolve organically from real-world learnings. The desired outcomes don’t necessarily change, but how they are achieved can be fluid. This is how team priorities are defined, new business challenges are solved and evolving customer expectations are met. ... Breaking down engineering silos means reappraising what ownership looks like. If your team’s focus has evolved from “bug fixing” to “continually excellent user experience,” then success is no longer the domain of engineers alone. It’s a collective effort across product, design, and tech — working together as one team. ... Moving to outcome-based teams is more than a structural change — it’s a mindset shift. By challenging teams to focus on delivering impact, to stay aligned with evolving needs, and to collaborate more effectively, organizations can build durable, customer-centric teams that can grow, adapt, and never sit still.


Guardrails and governance: A CIO’s blueprint for responsible generative and agentic AI

Many in the industry confuse the function of guardrails, treating them as a flimsy substitute for true oversight. This is a critical misconception that must be addressed. Guardrails and governance are not interchangeable; they are two essential parts of a single system of control. ... AI governance is the blueprint and the organization. It’s the framework of policies, roles, committees and processes that define what is acceptable, who is accountable and how you will monitor and audit all AI systems across the enterprise. Governance is the strategy and the chain of command. AI guardrails are the physical controls and the rules in the code. These are the technical mechanisms embedded directly into the AI system’s architecture, APIs and interfaces to enforce the governance policies in real time. Guardrails are the enforcement layer. ... While we must distinguish between governance and guardrails, the reality of agentic AI has revealed a critical flaw: current soft guardrails are failing catastrophically. These controls are often probabilistic, pattern-based or reliant on LLM self-evaluation, which is easily bypassed by an agent’s core capabilities: autonomy and composability. ... Generative AI creates; agentic AI acts. When an autonomous AI agent is making decisions, executing transactions or interacting with customers, the stakes escalate dramatically. Regulators, auditors and even internal stakeholders will demand to know why an agent took a particular action.
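The governance/guardrail split can be made concrete with a small sketch. Here, governance is a declarative policy table (the kind of limits a review committee would set), and the guardrail is a deterministic check run before any agent action executes — not an LLM self-evaluation. All names (`ActionRequest`, `POLICY`, the specific actions and limits) are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    agent_id: str
    action: str          # e.g. "refund", "send_email" (illustrative)
    amount: float = 0.0

# Governance layer: declarative rules owned by the oversight body,
# versioned and auditable separately from the agent code.
POLICY = {
    "refund": {"max_amount": 500.0, "allowed_agents": {"support-agent"}},
    "send_email": {"max_amount": 0.0, "allowed_agents": {"support-agent", "sales-agent"}},
}

def guardrail(request: ActionRequest) -> bool:
    """Enforcement layer: a deterministic pre-action check.

    Unlike probabilistic or LLM-self-evaluated controls, this cannot be
    talked around by the agent; unknown actions fail closed.
    """
    rule = POLICY.get(request.action)
    if rule is None:
        return False  # fail closed: actions governance never defined are denied
    if request.agent_id not in rule["allowed_agents"]:
        return False
    if request.amount > rule["max_amount"]:
        return False
    return True
```

The point of the separation: the `POLICY` table can be audited and changed by governance without touching agent code, while the `guardrail` function is the only path through which an action can execute.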


Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. ... Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you. This might include: analyzing your face through a video selfie or photo; examining your voice; looking at your online behavior—what you watch, what you like, what you post; checking your existing profile data. Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, and an algorithm analyzes your face and spits out an estimated age range. Sounds convenient, right? ... Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.


Aircraft cabin IoT leaves vendor and passenger data exposed

The cabin network works by having devices send updates to a central system, and other devices are allowed to receive only certain updates. In this system an authorized subscriber is any approved participant on the cabin network, usually a device or a software component that is allowed to receive a certain type of data. The privacy issue begins after the data arrives. Information is protected while it travels, but once it reaches a device that is allowed to read it, that device can view the entire message, including details it does not need for its task. The system controls who receives a message, but it does not control how much those devices can learn from it. The study finds that this creates the biggest risk inside the cabin. Trusted devices have valid credentials and follow all the rules, and they can examine messages closely enough to infer raw sensor readings that were never meant to be exposed. This internal risk matters because it influences how different suppliers share data and trust each other. Someone in the cabin might also try to capture wireless traffic, but the protections on the wireless link prevent them from reading the data as it travels.  ... The researchers found that these raw motion readings can carry extra clues such as small shifts linked to breathing, slight tremors or hints about a person’s body shape. Details like these show why movement data needs protection before it is shared across the cabin network.
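The access model the study describes — the broker controls who may receive a topic, but an authorized subscriber sees every field in a delivered message — can be sketched in a few lines. This is a toy illustration, not the actual cabin protocol; the class and topic names are invented:

```python
from collections import defaultdict

class CabinBroker:
    """Toy publish/subscribe broker with topic-level authorization.

    Mirrors the weakness described above: authorization is checked at
    subscribe time, but delivery hands over the whole message, so a
    trusted device learns fields it does not need for its task.
    """

    def __init__(self):
        self.acl = defaultdict(set)          # topic -> authorized subscriber ids
        self.subscribers = defaultdict(list)  # topic -> delivery handlers

    def authorize(self, topic, subscriber_id):
        self.acl[topic].add(subscriber_id)

    def subscribe(self, topic, subscriber_id, handler):
        if subscriber_id not in self.acl[topic]:
            raise PermissionError(f"{subscriber_id} not authorized for {topic}")
        self.subscribers[topic].append(handler)

    def publish(self, topic, message: dict):
        # Full message delivered: no field-level minimization happens here.
        for handler in self.subscribers[topic]:
            handler(message)
```

A hypothetical lighting controller that only needs an `occupied` flag would still receive any raw motion readings published on the same topic — exactly the inference risk the researchers highlight. Field-level filtering or per-field encryption before publishing would close that gap.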


Build Resilient cloudops That Shrug Off 99.95% Outages

If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration. We encode risk controls in Terraform so they’re enforced before a resource even exists. Tagging, encryption, backup retention, network egress—these are all policy. We don’t rely on code reviews to catch missing encryption on a bucket; the pipeline fails the plan. That’s how cloudops scales across teams without nag threads. ... Observability isn’t a pile of graphs; it’s a way to answer questions. We want traceability from request to database and back, structured logs that are actually structured, and metrics that reflect user experience. ... Most teams benefit from a small set of “stop asking, here it is” dashboards: request volume and latency by endpoint, error rate by version, resource saturation by service, and database health with connection pools and slow query counts. We also wire deploy markers into traces and logs, so “What changed?” doesn’t require Slack archaeology. ... We don’t win medals for shipping fast; we win trust for shipping safely. Progressive delivery lets us test the actual change, in production, on a small slice before we blast everyone. We like canaries and feature flags together: canary catches systemic issues; flags let us disable risky code paths within a version. Every deployment should come with a baked-in rollback that doesn’t require a council meeting. ... Reliability with no cost controls is just a nicer way to miss your margin. We give cost the same respect as latency: we define a monthly budget per product and a change budget per release.
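A "pipeline fails the plan" check can be sketched as a script over `terraform show -json <planfile>` output. This is a simplified illustration — real plan JSON nests more deeply, attribute names vary by provider version, and production teams would more likely reach for OPA, Sentinel, or Checkov — but it shows the shape of encoding tagging and encryption as policy rather than review-time nagging:

```python
# Illustrative policy check over (simplified) Terraform plan JSON.
# Attribute names and required tags are assumptions for this sketch.

REQUIRED_TAGS = {"owner", "cost-center"}

def violations(plan: dict) -> list[str]:
    """Return human-readable policy violations found in a plan."""
    problems = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        addr = rc.get("address", "<unknown>")
        # Encryption is policy, not a code-review checklist item.
        if rc.get("type") == "aws_s3_bucket":
            if not after.get("server_side_encryption_configuration"):
                problems.append(f"{addr}: missing server-side encryption")
        # Tagging is policy too: every resource carries ownership and cost data.
        missing = REQUIRED_TAGS - set(after.get("tags") or {})
        if missing:
            problems.append(f"{addr}: missing tags {sorted(missing)}")
    return problems

def enforce(plan: dict) -> None:
    """Fail the pipeline stage (non-zero exit) if any violation exists."""
    probs = violations(plan)
    for p in probs:
        print("POLICY VIOLATION:", p)
    if probs:
        raise SystemExit(1)
```

Wired in as a CI step between `terraform plan` and `terraform apply`, the plan never reaches apply with an unencrypted or untagged resource — the resource is blocked before it exists.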


Anatomy of an AI agent knowledge base

“An internal knowledge base is essential for coordinating multiple AI agents,” says James Urquhart, field CTO and technology evangelist at Kamiwaza AI, maker of a distributed AI orchestration platform. “When agents specialize in different roles, they must share context, memory, and observations to act effectively as a collective.” Designed well, a knowledge base ensures agents have access to up-to-date and comprehensive organizational knowledge. Ultimately, this improves the consistency, accuracy, responsiveness, and governance of agentic responses and actions. ... Most knowledge bases include procedures and policies for agents to follow, such as style guides, coding conventions, and compliance rules. They might also document escalation paths, defining how to respond to user inquiries. ... Lastly, persistent memory helps agents retain context across sessions. Access to past prompts, customer interactions, or support tickets helps continuity and improves decision-making, because it enables agents to recognize patterns. But importantly, most experts agree you should make explicit connections between data, instead of just storing raw data chunks. ... At the core of an agentic knowledge base are two main components: an object store and a vector database for embeddings. Whereas a vector database is essential for semantic search, an object store checks multiple boxes for AI workloads: massive scalability without performance bottlenecks, rich metadata for each object, and immutability for auditability and compliance.
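The two core components named above — an object store for content, metadata, and auditability, and a vector database for semantic search — can be shown working together in a toy sketch. Everything here is a stand-in: the `embed()` function is a deliberately crude character-frequency placeholder (a real system would call an embedding model), and in-memory dicts substitute for actual object storage and a vector index:

```python
import math
import uuid

def embed(text: str) -> list[float]:
    # Placeholder embedding: normalized letter frequencies, illustration only.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class KnowledgeBase:
    """Toy pairing of an object store with a vector index.

    The object store keeps the full record plus rich metadata (source,
    timestamps, policy tags) and is treated as append-only for audit;
    the vector index holds only embeddings for semantic lookup.
    """

    def __init__(self):
        self.objects = {}  # object store: id -> record (never mutated)
        self.vectors = {}  # vector index: id -> embedding

    def put(self, text: str, metadata: dict) -> str:
        oid = str(uuid.uuid4())
        self.objects[oid] = {"text": text, "metadata": metadata}
        self.vectors[oid] = embed(text)
        return oid

    def search(self, query: str, k: int = 3) -> list[dict]:
        q = embed(query)
        ranked = sorted(
            self.vectors.items(),
            key=lambda kv: -sum(a * b for a, b in zip(q, kv[1])),
        )
        return [self.objects[oid] for oid, _ in ranked[:k]]
```

The division of labor is the point: semantic retrieval answers "what is relevant to this prompt?", while the object store answers "exactly what did the agent read, from where, and under which policy?" — the metadata and immutability the article flags as compliance requirements.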


Trust, Governance, and AI Decision Making

Issues like bias, privacy, and explainability aren’t just technical problems requiring technical solutions. They have to be understood by everyone in the business. That said, the ideal governance structure depends on each company’s business model. ... The word ethics can feel very far from a developer’s everyday world. It can feel like a philosophical thing, whereas they need to write code and build solutions. Also, many of these issues weren’t part of their academic training, so we have to help them understand. ... Kahneman’s idea is that humans use two different cognitive modes when we make decisions. For everyday decisions and small, familiar problems—like riding a bicycle—we use what he called System One, or Thinking Fast, which is automatic and almost unconscious. In System Two, or Thinking Slow, we have this other way of making decisions that requires a lot of time and attention, either because we are confronted with a problem that’s not familiar to us or because we don’t want to make a mistake. ... We compare Thinking Fast to the data-driven machine learning approach—just give me a lot of data, and I will give you the solution without showing you how I got there or even being able to explain it. Thinking Slow, on the other hand, corresponds to a more traditional, rule-based approach to solving problems. ... It’s similar to what we see with agentic AI systems—the focus is not on any one solver, agent, or tool but rather in the governance of the whole system. 


The Global Race for Digital Trust: Where Does India Stand?

In the modern hyperconnected world, trust has replaced convenience as the true currency of digital engagement. Every transaction, whether on a banking app or an e-governance portal, is based on an unspoken belief: systems are secure and intentions are transparent. Nevertheless, this belief remains under constant pressure. ... India’s digital trust framework was further reinforced by the inauguration of the National Centre for Digital Trust (NCDT) in July 2025. Established by the Ministry of Electronics and Information Technology (MeitY), this Centre serves as the national hub for digital assurance. It unites key elements, including public key infrastructure, authentication and post-quantum cryptography, under a unified mission. This, in turn, signals the country’s commitment to treating trust as a public good. ... For firms and government agencies alike, compliance signals maturity. It reassures citizens that the systems they rely on, from hospital monitoring networks to smart city command centres, are governed by clear, ethical and verifiable standards. It also assures global partners that India’s digital infrastructure can operate reliably across jurisdictions. In the long run, this “compliance premium” could well define which countries earn the confidence to lead the global digital economy. ... The world will measure digital strength not by how fast technology advances, but by how deeply trust is embedded within it.


The privacy paradox is turning into a data centre weak point

While consumers’ failure to adopt basic cyber hygiene might seem like a personal problem, it has wide-reaching implications for infrastructure providers. As cloud services, hosted applications and mobile endpoints interact with backend systems, poor user behaviour becomes an attack vector. Insecure credentials, password reuse and unsecured mobile devices all provide potential entry points, especially in hybrid or multi-tenant environments. ... Putting data centres on an equal footing with water, energy and emergency services systems will mean the data centre sector can now expect greater Government support in anticipating and recording critical incidents. This designation reflects their strategic importance but also brings greater regulatory scrutiny. It also comes against the backdrop of the UK Government’s Cyber Security Breaches Survey in 2024, which reported that 50% of businesses experienced some form of cyber breach in the past 12 months, with phishing accounting for 84% of incidents. This underscores how easily compromised direct or indirect endpoints can threaten core infrastructure. ... The privacy paradox may begin at the consumer level, but its consequences are absorbed by the entire digital ecosystem. Recognising this is the first step. Acting on it through better design, stronger defaults, and user-focused education allows data centre operators to safeguard not just their infrastructure, but the trust that underpins it.