Daily Tech Digest - November 17, 2025


Quote for the day:

"Keep steadily before you the fact that all true success depends at last upon yourself." -- Theodore T. Hunger



You already use a software-only approach to passkey authentication - why that matters

After decades of compromises, exfiltrations, and financial losses resulting from inadequate password hygiene, you'd think that we would have learned by now. However, even after comprehensive cybersecurity training, research shows that 98% of users are still easily tricked into divulging their passwords to threat actors. Realizing that hoping users will one day fix their password management habits is a futile strategy for mitigating the negative consequences of shared secrets, the tech industry came together to invent a new type of login credential. The passkey doesn't involve a shared secret, nor does it require the discipline or the imagination of the end user. Unfortunately, passkeys are not as simple to put into practice as passwords, which is why a fair amount of education is still required. ... Passkeys still involve a secret. But unlike passwords, users have no way of sharing it -- not with legitimate relying parties and especially not with threat actors. ... When users are working with passkeys but not using one of the platform authenticators, they'll most likely be working with a virtual authenticator. These are essentially BYO authenticators, none of which rely on the device's underlying security hardware for passkey-related public key cryptography or encryption tasks, the way platform authenticators do.
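The "secret users cannot share" idea can be illustrated with a toy challenge-response sketch in Python. This uses textbook RSA with tiny primes, which is deliberately insecure and purely illustrative; real passkeys use hardware-backed WebAuthn/FIDO2 credentials. The point it shows is that the relying party stores only a public key, so there is nothing phishable to divulge:

```python
# Toy sketch of the passkey model: the relying party stores only the
# PUBLIC key, and the secret (private key) never leaves the authenticator,
# so users have nothing shareable to phish.
# Textbook RSA with tiny primes -- deliberately insecure, illustration only.
import secrets

p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent (relying party keeps n, e)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (stays on the device)

def sign(challenge: int) -> int:
    """Authenticator proves possession of the private key."""
    return pow(challenge, d, n)

def verify(challenge: int, signature: int) -> bool:
    """Relying party checks the proof using only the public key."""
    return pow(signature, e, n) == challenge

challenge = secrets.randbelow(n)   # server issues a fresh random challenge
assert verify(challenge, sign(challenge))
```

Even if an attacker intercepts a signed challenge, it is useless for the next login, because the server issues a fresh random challenge each time.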


Getting started with agentic AI

A working agentic AI strategy relies on AI agents connected by a metadata layer, whereby people understand where and when to delegate certain decisions to the AI or pass work to external contractors. The focus is on defining the AI's role and where the people involved in the workflow need to contribute. ... Data lineage tracking should happen at the code level through metadata propagation systems that tag every data transformation, model inference and decision point with unique identifiers. Willson says this creates an immutable audit trail that regulatory frameworks increasingly demand. According to Willson, advanced implementations may use blockchain-like append-only logs to ensure governance data cannot be retroactively modified. ... One of the areas IT leaders need to consider is that their organisation will more than likely rely on a number of AI models to support agentic AI workflows. ... Organisations need to have the right data strategy in place, and they should already be well ahead on their path to full digitisation, where automation through RPA is being used to connect many disparate workflows. Agentic AI is the next stage of this automation, where an AI is tasked with making decisions in a way that would previously have been too clunky using RPA. However, automation of workflows and business processes is just one piece of an overall jigsaw.
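A blockchain-like append-only log of the kind described can be sketched in a few lines: each entry gets a unique identifier and a hash chained to the previous entry, so any retroactive edit breaks verification. Class and field names here are illustrative assumptions, not from any specific product:

```python
# Minimal sketch of a hash-chained, append-only audit log for lineage
# tracking: every transformation/inference/decision gets a unique ID,
# and each entry's hash covers the previous entry's hash, so governance
# data cannot be retroactively modified without detection.
import hashlib
import json
import uuid

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, event: str, payload: dict) -> str:
        """Tag an event with a unique ID and chain it to the log."""
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"id": str(uuid.uuid4()), "event": event,
                 "payload": payload, "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry["id"]

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks it."""
        prev = "0" * 64
        for ent in self.entries:
            body = {k: v for k, v in ent.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if ent["prev"] != prev or ent["hash"] != expected:
                return False
            prev = ent["hash"]
        return True
```

A real deployment would also need durable storage and access controls; the chained hashes only guarantee that tampering is detectable, not impossible.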


Human-centric IAM is failing: Agentic AI requires a new identity control plane

Agentic AI does not just use software; it behaves like a user. It authenticates to systems, assumes roles and calls APIs. If you treat these agents as mere features of an application, you invite invisible privilege creep and untraceable actions. A single over-permissioned agent can exfiltrate data or trigger erroneous business processes at machine speed, with no one the wiser until it is too late. The static nature of legacy IAM is the core vulnerability. You cannot pre-define a fixed role for an agent whose tasks and required data access might change daily. The only way to keep access decisions accurate is to move policy enforcement from a one-time grant to a continuous, runtime evaluation. ... Securing this new workforce requires a shift in mindset. Each AI agent must be treated as a first-class citizen within your identity ecosystem. First, every agent needs a unique, verifiable identity. This is not just a technical ID; it must be linked to a human owner, a specific business use case and a software bill of materials (SBOM). The era of shared service accounts is over; they are the equivalent of giving a master key to a faceless crowd. Second, replace set-and-forget roles with session-based, risk-aware permissions. Access should be granted just in time, scoped to the immediate task and the minimum necessary dataset, then automatically revoked when the job is complete. Think of it as giving an agent a key to a single room for one meeting, not the master key to the entire building.
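The "key to a single room for one meeting" model can be sketched as a just-in-time grant object: tied to a human owner, scoped to the minimum resources for one task, and re-evaluated at every access rather than granted once. The names and the TTL mechanism are illustrative assumptions, not a specific IAM product's API:

```python
# Sketch of a session-based, just-in-time grant for an AI agent:
# unique agent identity, linked human owner, minimal scope, and a
# lifetime after which access is automatically revoked.
import time
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    agent_id: str              # unique, verifiable agent identity
    owner: str                 # linked human owner
    scope: frozenset           # minimum resources for the immediate task
    ttl_s: float               # grant lifetime; auto-revoked afterwards
    issued: float = field(default_factory=time.monotonic)

    def allows(self, resource: str) -> bool:
        """Runtime evaluation on every call -- not a one-time grant."""
        if time.monotonic() - self.issued > self.ttl_s:
            return False       # session over: the room key is returned
        return resource in self.scope
```

The contrast with a shared service account is that every `allows` decision is attributable to one agent, one owner, and one task, and the grant expires on its own.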


Don’t ignore the security risks of agentic AI

We need policy engines that understand intent, monitor behavioral drift and can detect when an agent begins to act out of character. We need developers to implement fine-grained scopes for what agents can do, limiting not just which tools they use, but how, when and under what conditions. Auditability is also critical. Many of today’s AI agents operate in ephemeral runtime environments with little to no traceability. If an agent makes a flawed decision, there’s often no clear log of its thought process, actions or triggers. That lack of forensic clarity is a nightmare for security teams. In at least some cases, models resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors. Finally, we need robust testing frameworks that simulate adversarial inputs in agentic workflows. Penetration-testing a chatbot is one thing; evaluating an autonomous agent that can trigger real-world actions is a completely different challenge. It requires scenario-based simulations, sandboxed deployments and real-time anomaly detection. ... Until security is baked into the development lifecycle of agentic AI, rather than being patched on afterward, we risk repeating the same mistakes we made during the early days of cloud computing: excessive trust in automation before building resilient guardrails.
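One simple form of the behavioral-drift detection mentioned above is a frequency baseline: record which tool calls an agent normally makes, then flag recent actions it rarely or never performed during baselining. The threshold and action names below are illustrative; a production policy engine would weigh context and intent, not just frequency:

```python
# Sketch of an out-of-character detector for agent tool calls, based on
# a frequency baseline. Actions never (or rarely) seen while baselining
# are flagged for review.
from collections import Counter

class DriftMonitor:
    def __init__(self, baseline_actions):
        counts = Counter(baseline_actions)
        total = sum(counts.values())
        self.freq = {a: c / total for a, c in counts.items()}

    def out_of_character(self, recent_actions, rare_threshold=0.05):
        """Return recent actions below the baseline frequency threshold."""
        return [a for a in recent_actions
                if self.freq.get(a, 0.0) < rare_threshold]
```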


How Technological Continuity and High Availability Strengthen IT Resilience in Critical Sectors

Within the context of business continuity, high availability ensures technology supports the organization’s ability to operate without disruption. It minimizes downtime and maintains the confidentiality, integrity, and availability of information. ... To achieve true high availability, organizations implement architectures that combine redundancy, automation, and fault tolerance. Database replication, whether synchronous or asynchronous, allows data to be duplicated across primary and secondary nodes, ensuring continuous access in the event of a failure. Synchronous replication guarantees data consistency but introduces latency, while asynchronous models reduce latency at the expense of a small data gap. Both approaches, when properly configured, strengthen the integrity and continuity of critical databases. ... One of the most effective strategies to reduce technological dependence is the implementation of hybrid continuity models that integrate both on-premises and cloud environments. Organizations that rely exclusively on a single cloud service provider expose themselves to the risk of total outage if that provider experiences downtime or disruption. By maintaining mirrored environments between cloud infrastructure and local servers, it is possible to achieve operational flexibility and independence across channels.
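The synchronous/asynchronous trade-off can be made concrete with a small simulation: synchronous replication acknowledges a write only after the replica has applied it (consistent, but each write pays the replication latency), while asynchronous replication acknowledges immediately and ships a backlog later (faster, but with a small data gap if the primary fails before flushing). This is a minimal sketch, not a real replication protocol:

```python
# Toy model of synchronous vs. asynchronous database replication.
class Replica:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

class Primary:
    def __init__(self, replica, synchronous=True):
        self.data = {}
        self.replica = replica
        self.synchronous = synchronous
        self.backlog = []       # writes not yet shipped (async mode)

    def write(self, key, value):
        self.data[key] = value
        if self.synchronous:
            self.replica.apply(key, value)     # ack only after replica confirms
        else:
            self.backlog.append((key, value))  # replicate later: small data gap

    def flush(self):
        """Async mode: drain the replication backlog to the replica."""
        for key, value in self.backlog:
            self.replica.apply(key, value)
        self.backlog.clear()
```

In the async case, the backlog is exactly the "small data gap": anything still in it when the primary fails is lost on failover.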


The tech that turns supply chains from brittle to unbreakable

When organizations begin crafting a supply chain strategy, one of the most common misconceptions is viewing it as purely a logistics exercise rather than a holistic framework that spans procurement, planning and risk management. Another frequent misstep is underestimating the role of technology. Digital tools are essential for visibility, predictive analytics and automation, not optional. Equally critical is recognizing that strategy is not static; it must evolve continuously to address shifting market conditions and emerging threats. ... Resilience comes from treating cyber and physical risks as one integrated challenge. That means embedding security into every layer of the supply chain, from vendor onboarding to logistics execution, while leveraging advanced visibility tools and zero trust principles. ... Executive buy‑in for resilience investments begins with reframing the conversation from cost to value. We position resilience as a strategic enabler rather than an expense by linking it to business continuity, customer trust and competitive advantage. Instead of focusing solely on immediate ROI, emphasize measurable risk reduction, regulatory compliance and the cost of inaction during disruptions. Use real‑world scenarios and data to show how resilience safeguards revenue streams and accelerates recovery when crises hit. Engage executives early, align initiatives with corporate objectives and present resilience as a driver of long‑term growth and brand reputation.


ISO and ISMS: 9 reasons security certifications go wrong

Without management’s commitment, it’s often difficult to get all employees on board and ensure that ISO standards, or even IT baseline protection standards, are integrated into daily business operations. As a result, companies should provide top-down clarity about the importance of such initiatives — even if implementation can be costly and inconvenient. “Cleaning up” isn’t always pleasant, but the result is all the more worthwhile. ... Without genuine integration into daily operations, the certification becomes useless, and the benefits it offers remain unrealized. In the worst-case scenario, organizations even end up losing money, while also missing out on the implementation’s potential value. When integrating a management system, it’s important not to get bogged down in details. The practical application of the system in real-world work situations is crucial for its success. ... Employees need to understand why the implementation is important, how it will be integrated into their daily workflows, and how it will make their work easier. If this isn’t the case, it will be difficult to implement the system and maintain any resulting certification. ... Without a detailed plan, companies focus on areas that are irrelevant or do not meet the requirements of the ISO/IT baseline protection standards. Furthermore, if the implementation of a management system takes too long, regular business development can overtake the process itself, resulting in duplicated work to keep up with changes.


State of the API 2025: API Strategy Is Becoming AI Strategy

What distinguishes fully API-first teams? They treat APIs as long-lived products with roadmaps, SLAs, versioning, and feedback loops. They align product and engineering early, embed governance into workflows, and standardize patterns so that consumers, human or agent, can rely on consistent contracts. In our experience, that "productization" of APIs is what unlocks long-lived, reusable APIs and parallel delivery. When your agents can trust your schemas, error semantics, and rate-limit behaviors, they can compose capabilities far faster than code-level abstractions ever could. ... As AI agents become primary API consumers, security assumptions must evolve. 51% of developers cite unauthorized or excessive agent calls as a top concern; 49% worry about AI systems accessing sensitive data they shouldn't; and 46% highlight the risk of credential leakage and over-scoped keys. Traditional controls, designed for predictable human traffic, struggle against machine-speed persistence, long-running automation, and credential amplification. ... Even as API-first adoption grows, collaboration remains a bottleneck. 93% of teams report challenges such as inconsistent documentation, duplicated work, and difficulty discovering existing APIs. With 69% of respondents spending 10+ hours per week on API-related tasks, and with a global workforce, asynchronous collaboration is the norm. 
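The idea that agents must be able to trust schemas, error semantics, and rate-limit behaviors can be sketched as a consistent error contract: a stable machine-readable code plus an explicit backoff hint, so automated consumers never branch on free-text messages. The field names below are illustrative assumptions, not a published standard:

```python
# Sketch of a consistent API error envelope that agent consumers can
# parse mechanically. The `code` field is the stable contract; the
# `message` is human-readable and free to change between versions.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class ApiError:
    code: str                            # stable contract, e.g. "rate_limited"
    message: str                         # human-readable, free to change
    retry_after_s: Optional[int] = None  # explicit backoff hint for agents

def agent_should_retry(error_body: dict) -> bool:
    """An automated consumer branches on `code`, never on message text."""
    return (error_body.get("code") == "rate_limited"
            and error_body.get("retry_after_s") is not None)
```

Because the agent keys off `code` and `retry_after_s`, the provider can reword `message` freely without breaking machine consumers, which is what makes the contract long-lived.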


Embedded Intelligence: JK Tyre's Smart Tyre Use Case

Unlike traditional valve-mounted tire pressure monitoring devices, or TPMS, these sensors are permanently integrated for consistent data accuracy. Each chip is designed to last five to seven years, depending on usage and conditions. "These sensors are permanently embedded during the assembly process," said V.K. Misra, technical director at JK Tyre. "They continuously send live data on air pressure and temperature to the dashboard and mobile device. The moment there's a variation, the driver is alerted before a small problem becomes a serious risk." ... The embedded version takes this further by integrating the chip within the tire's internal structure, creating a closed feedback loop between the tire, the driver and the cloud. "We have created an entire connected ecosystem," Misra said. "The tire is just the beginning. The data generated feeds predictive models for maintenance and safety. Through Treel, our platform can now talk to vehicles, drivers and service networks simultaneously." The Treel platform processes sensor data through APIs and cloud analytics, providing actionable insights for drivers and fleet operators. Over time, this data contributes to predictive maintenance models, product design improvements and operational analytics for connected vehicles. ... "AI allows decisions that earlier took days to happen within minutes," Misra said. "It also provides valuable data on wear patterns and helps us improve quality control across plants."
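The alerting behavior Misra describes, flagging a variation in pressure or temperature before it becomes a serious risk, reduces to a threshold check against safe operating bands. The thresholds below are illustrative assumptions, not JK Tyre's actual calibration values:

```python
# Sketch of dashboard alerting on embedded tyre-sensor readings:
# compare live pressure/temperature against safe bands and report
# which values are out of range. Thresholds are illustrative only.
def check_tyre(pressure_psi: float, temp_c: float,
               p_range=(30.0, 36.0), t_max=75.0):
    """Return the list of alert types triggered by one sensor reading."""
    alerts = []
    if not p_range[0] <= pressure_psi <= p_range[1]:
        alerts.append("pressure")
    if temp_c > t_max:
        alerts.append("temperature")
    return alerts
```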


Regulation gives structure and voice to security leaders: Darshan Chavan

Chavan has witnessed a remarkable shift over the past decade in how businesses view cybersecurity. ... The increased visibility of cybersecurity, he says, has given CISOs a strategic voice. “Frequent regulatory updates, data breaches in the news, and rising public awareness have made organisations realize that cybersecurity is fundamental to business continuity,” he explains. “Every organisation now understands that to operate in a fast-evolving digital landscape, you need a cybersecurity leader with authority — and frameworks, regulations, and policies that are implemented and accepted by the business.” He views cybersecurity guidelines — whether from SEBI, RBI, or other regulatory bodies — as empowering rather than restrictive. “Regulation gives structure and voice to security leaders,” he says. “It ensures that cybersecurity is treated not as a cost centre but as a core enabler of business trust.” ... While he acknowledges that the DPDP Act will help formalise this journey, he refuses to wait for regulation to act. “I’m not waiting for the law to push me,” he says. “Tomorrow, investors will start asking how we manage their data, how we protect their bank account numbers, and how we ensure confidentiality. I want to be ready before those questions arise.” Beyond data privacy, Chavan highlights network defense and layered security as ongoing imperatives.

Daily Tech Digest - November 16, 2025


Quote for the day:

"Life is 10% what happens to me and 90% of how I react to it." -- Charles Swindoll


Hybrid AI: The future of certifiable and trustworthy intelligence

An emerging approach in AI innovation is hybrid AI, which combines the scalability of machine learning (ML) with the constraint-checking and provenance of symbolic models. Hybrid AI forms a foundation for system-level certification and helps CIOs balance the pursuit of performance with the need for accountability. ... Clustering, a core unsupervised learning technique, organizes unlabeled data into groups based on similarity. It’s widely used to segment customers, group documents or analyze sensor data by measuring distances in a numeric feature space. But conventional clustering works on similarity alone and has no grasp of meaning. This can group items by coincidence rather than concept. ... For enterprise leaders, verifiability isn’t optional; it’s a governance requirement. Systems that support strategic or regulatory decisions must show constraint conformance and leave a traceable decision path. Ontology-driven clustering provides that foundation, creating an auditable chain of logic aligned with frameworks such as the NIST AI Risk Management Framework. In both government and industry, this hybrid approach makes AI more accountable and reliable. Trustworthiness is not a checkbox but an assurance case that connects data science, compliance and oversight. An organization that cannot trace what was allowed into a model or which constraints were applied does not truly control the decision.
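The combination of distance-based clustering with symbolic constraint-checking can be sketched as constrained assignment: an item joins the nearest centroid *among those the ontology permits*, so a type check overrides raw similarity and prevents grouping by coincidence rather than concept. This is a toy illustration of the hybrid idea; real ontology-driven clustering would consult a full concept hierarchy:

```python
# Toy sketch of ontology-constrained clustering assignment: the symbolic
# type constraint is checked before the numeric distance, so items can
# never land in a conceptually forbidden cluster, however similar.
def constrained_assign(items, item_types, centroids, centroid_types):
    """Nearest-centroid labels, gated by a symbolic type constraint."""
    labels = []
    for x, t in zip(items, item_types):
        best, best_d = None, float("inf")
        for j, (c, ct) in enumerate(zip(centroids, centroid_types)):
            if ct != t:            # ontology forbids mixing these concepts
                continue
            d = sum((a - b) ** 2 for a, b in zip(x, c))
            if d < best_d:
                best, best_d = j, d
        labels.append(best)        # None if no permitted cluster exists
    return labels
```

The constraint check also yields the traceable decision path the passage calls for: every assignment can be explained as "nearest permitted centroid", with the permission rule auditable separately from the geometry.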


Upwork study shows AI agents excel with human partners but fail independently

The research challenges both the hype around fully autonomous AI agents and fears that such technology will imminently replace knowledge workers. "AI agents aren't that agentic, meaning they aren't that good," Andrew Rabinovich, Upwork's chief technology officer and head of AI and machine learning, said in an exclusive interview with VentureBeat. "However, when paired with expert human professionals, project completion rates improve dramatically, supporting our firm belief that the future of work will be defined by humans and AI collaborating to get more work done, with human intuition and domain expertise playing a critical role." ... The research reveals stark differences in how AI agents perform with and without human guidance across different types of work. For data science and analytics projects, Claude Sonnet 4 achieved a 64% completion rate working alone but jumped to 93% after receiving feedback from a human expert. In sales and marketing work, Gemini 2.5 Pro's completion rate rose from 17% independently to 31% with human input. OpenAI's GPT-5 showed similarly dramatic improvements in engineering and architecture tasks, climbing from 30% to 50% completion. The pattern held across virtually all categories, with agents responding particularly well to human feedback on qualitative, creative work requiring editorial judgment — areas like writing, translation, and marketing — where completion rates increased by up to 17 percentage points per feedback cycle.


Debunking AI Security Myths for State and Local Governments

As state and local governments adopt AI, they must return to cybersecurity basics and strengthen core principles to help build resilience and earn public trust. For AI workloads, governments should apply zero-trust principles; for example, continuously verifying identities, limiting access by role and segmenting system components. Clear data policies for access, protection and backups help safeguard sensitive information and keep systems resilient. Perhaps most important, security teams need to be involved early in AI design conversations to build in security from the start. ... As state and local governments deploy more sophisticated AI systems, it’s crucial to view the technology as a partner, not a replacement for human intelligence. There is a misconception that advanced AI — particularly agentic AI, which can make its own decisions — eliminates the need for human oversight. The truth is, responsible AI deployment hinges on human oversight and strong governance. The more autonomous an AI system becomes, the more essential human governance is. ... Securing AI is not a one-time milestone. It’s an ongoing process of preparation and adaptation as the threat landscape evolves. For state and local governments advancing their AI initiatives, the path forward centers on building resilience and confidence. And the good news is, they don’t need to start from scratch. The tools and strategies already exist.


When Open Source Meets Enterprise: A Fragile Alliance

The answer is by no means simple; it is determined by a number of factors, of which the vendor’s ethos is one of the most important. Some vendors genuinely give back to the open-source communities from which they gain value. Others are more extractive, building closed proprietary layers atop open foundations and pushing little back to the community. The difference matters enormously. Organisations hold true optionality when a vendor actively maintains the open-source core, while keeping its proprietary features genuinely additive rather than substitutive. In theory, they could shift to another provider or take the open-source components in-house should the relationship sour. ... Commercial open-source vendors can provide training, certification, and managed services to fill this gap, for a fee, naturally. Then there is innovation velocity. Open-source communities can move incredibly quickly, with contributions from numerous sources, enabling organisations to adopt cutting-edge features faster than conventional enterprise procurement cycles allow. Conversely, vital security patches can stall if a project lacks maintainers, creating unacceptable exposure for risk-averse organisations. ... Ultimately, the question is not whether open source should exist within the enterprise; that debate has been resolved. The challenge lies in thoughtfully incorporating open-source components into broader technology strategies that balance innovation, resilience, sovereignty, and pragmatic risk management.


The Hidden Cost of Technical Debt in Databases

At its core, technical debt represents the trade-off between speed and quality. When a development team chooses a “quick and dirty” path to meet a deadline, debt is incurred. The database world sees the same phenomenon. ... The first step to eliminating technical debt is recognition. DBAs must adopt a mindset that managing technical debt is part of the job. Although it can be enticing to quickly fix a problem and move on, it should always be a part of the job to reflect on the potential future impact of any change that is made. ... Importantly, DBAs also sit at the crossroads between technical staff and business stakeholders. They can explain how technical debt translates into business impact: lost productivity, slower application delivery, higher infrastructure costs, and greater operational risk. This ability to connect database health to business outcomes is essential for winning support to tackle debt. In practice, the DBA’s role involves three things: identification, communication, and advocacy. DBAs must identify where debt exists, communicate its impact clearly, and advocate for resources to remediate it. Sometimes that means lobbying for time to redesign a schema, other times it means convincing leadership that archiving inactive data will save more money than buying new storage. Yet other times it may involve championing a new tool or process to be put in place to automate required tasks to thwart technical debt.


Seek Skills, Not Titles

Titles feel good—at first. They make your resume and LinkedIn profile look prettier. But when you confuse your title for your identity, you’re setting yourself up for a rude awakening. Titles can be taken away. Or they just expire, like milk in the back of the fridge. Your skills, on the other hand? No one can take those away from you. ... Some roles taught me how to work hard and build trust. Some taught me to communicate clearly and adapt quickly. Others taught me to see the big picture and act decisively. The titles didn’t teach me those skills; the experience did. ... It’s easy to let your job title become your identity, especially when you’re leading at a high level. Everyone wants something from you. Board members, investors, employees. They project their version of who they think you should be. You must have clarity on your core values. Not the company’s core values, but your own. Otherwise, you’ll find yourself playing a dozen different roles without knowing which one is actually you. ... Don’t wait for the title to teach you a skill. Start now. The best way to grow is to pursue skills that will open up opportunities, especially the ones that align with your personal values. Because when your values and skills match, your impact multiplies, regardless of the title. When has pursuing a title led you away from the skills you truly needed? What impact have you seen when your skills are aligned with your values? How might you need to detour to get back on the right track?


Strategic Autarky for the AI Age

AI is still emerging. Overspecifying rules, enforcing rigid certification pathways, or creating sector-wise chokepoints too early can stifle the very innovation we aim to promote. Burdensome compliance layers, mandated algorithmic disclosures, prescriptive model testing protocols, and fragmented approval processes can all create friction. Overregulation can discourage experimentation, elevate the cost of market entry, and drain our fastest growing startups. The risk is simple. Innovation flight. Loss of competitive edge. A domestic ecosystem slowed down before it reaches maturity. Balancing sovereignty and innovation, therefore, becomes the central task. India cannot afford to remain dependent, but it also cannot smother its own technological growth. India’s new AI Governance Framework addresses this balance directly. It follows seven guiding principles built around trust, accountability, transparency, privacy, security, human centricity, and collaboration. The standout feature is its “light touch” approach. Instead of imposing rigid controls, the framework sets high level principles that can evolve with technology. It relies on India’s existing legal foundation, including the Digital Personal Data Protection Act and the Information Technology Act, and is supported by institutional structures like the AI Governance Group and the AI Safety Institute. The framework contains several strong provisions. It encourages voluntary risk assessments rather than mandatory rigid audits for most systems.


Google Brain founder Andrew Ng thinks you should still learn to code - here's why

"Because AI coding has lowered the bar to entry so much, I hope we can encourage everyone to learn to code -- not just software engineers," Ng said during his keynote. How AI will impact jobs and the future of work is still unfolding. Regardless, Ng told ZDNET in an interview that he thinks everyone should know the basics of how to use AI to code, equivalent to knowing "a little bit of math," -- still a hard skill, but applied more generally to many careers for whatever you may need. "One of the most important skills of the future is the ability to tell a computer exactly what you want it to do for you," he said, noting that everyone should know enough to speak a computer's language, without needing to write code yourself. "Syntax, the arcane incantations we use, that's less important." ... The new challenge for developers, Ng said during the panel, will be coming up with the concept of what they want. Hedin agreed, adding that if AI is doing the coding in the future, developers should focus on their intuition when building a product or tool. "The thing that AI will be worst at is understanding humans," he said. ... He cited the overhiring sprees tech companies went on -- and then ultimately reversed -- during the COVID-19 pandemic as the primary reason entry-level coding jobs are hard to come by. Beyond that, though, it's a question of grads having the right kind of coding skills.


How Development Teams Are Rethinking the Way They Build Software

While low-code/no-code platforms accelerate development, they can become challenging when trying to achieve high levels of customization or when dealing with complex systems. Custom solutions might be more cost-effective for highly specialized applications. Low-code and no-code platforms must provide clear guidance to users within a structured framework to minimize mistakes, and they may offer less flexibility compared to traditional coding. AI tools can be easily used to generate code, suggest optimizations, or even create entire applications based on natural language prompts. However, they work best when integrated into a broader development ecosystem, not as standalone solutions. ... The future of software development appears to be a blended approach, where traditional programming, low-code/no-code platforms, and AI each play a role. The key to success in this dynamic landscape is understanding when to use each method, ensuring C-level executives, team leaders, and team members are versatile and leverage technology to enhance, rather than replace, human ingenuity. Let me share my firsthand experience. When I asked my developers a year ago how they thought using AI tools at work would evolve, many said: “I expect that as the tools improve, I’ll shift from mostly writing code to mostly reviewing AI-generated code.” Fast forward a year, and when we posed the same question, a common theme emerged: “We are spending less time writing the mundane stuff.”


Businesses must bolster cyber resilience, now more than ever

Cyber upskilling must be built into daily work for both technical and non-technical employees. It’s not a one-off training exercise; it’s part of how people perform their roles confidently and securely. For technical teams, staying current on certifications and practising hands-on defence is essential. Labs and sandboxes that simulate real-world attacks give them the experience needed to respond effectively when incidents happen. For everyone else, the focus should be on clarity and relevance. Employees need to understand exactly what’s expected of them, and how their individual decisions contribute to the organisation’s resilience. ... Boards aren’t expected to manage technical defences, but they are responsible for ensuring the organisation can withstand, recover from, and learn after a cyber disruption. Cyber incidents have evolved into full business continuity events, affecting operations, supply chains, and reputation. Resilience should now sit alongside financial performance and sustainability as a core board KPI. That means directors receiving regular updates not only on threat trends and audit findings, but also on recovery readiness, incident transparency, and the cultural maturity of the organisation’s response. Re-engaging boards on this agenda isn’t about assigning blame—it’s about enabling smarter oversight. When leaders understand how resilience protects trust, continuity, and brand, cybersecurity stops being a technical issue and becomes what it truly is: a measure of business strength.

Daily Tech Digest - November 15, 2025


Quote for the day:

“Be content to act, and leave the talking to others.” -- Baltasar Gracián



Why engineering culture should be your top priority, not your last

Most engineering leaders treat culture like an HR checkbox, something to address after the roadmap is set and the features are prioritized. That’s backwards. Culture directly affects how fast your team ships code, how often bugs make it to production, and whether your best developers are still around when the next major project kicks off. ... Many engineering leaders are Boomers or Gen X. They built their careers in environments where you kept your head down, shipped your code, and assumed no news was good news. That approach worked for them. It doesn’t work for the developers they’re managing now. This creates a perception problem that compounds the engagement gap. Most C-suite leaders say they put employee well-being first. Most employees don’t see it that way. Only 60% agree their employer actually prioritizes their well-being. The gap matters because employees who think their company cares more about output than people feel overwhelmed nearly three-quarters of the time. When employees feel supported, that number drops to just over half. That difference is where attrition starts. ... Most engineering teams try to fix retention with the same approach that worked decades ago, when people stayed at companies for years and stability mattered more than engagement. That’s not how careers work anymore. The typical response is to roll out generic culture programs designed for large enterprises. 


Integrated deployment must become the default

It’s intuitive that off-site and modular construction models reduce on-site build timelines in general construction, but we are observing the benefits within the data center space being amplified due to the increased density of services catering to larger rack loads. One of the main deterrents to modular adoption has been the perception of limited scalability and design repetition, combined with the inefficiency of transporting large volumes of unused space, essentially “shipping air.” As a result, traditional stick-build methods have long remained the default approach. But that’s all changing. The services, be it telecom, electrical, or cooling, are getting bigger, heavier, and more densely packed, and the timeframe needed is being whittled down, so naturally the emphasis has moved towards fully integrated solutions. These systems are assembled and commissioned offsite wherever possible, then delivered ready for installation with minimal site work required. Offsite integration also negates a lot of the complexities of trade-to-trade sequencing and handover of areas, which absorb site resources and hinder programme delivery. When systems arrive pre-aligned, factory-tested, and installation-ready on-site, activity shifts from coordination and correction to simple assembly. The cumulative impact is significant: reduced project timelines, fewer site dependencies, and greater confidence in delivery schedules.


The Myth Of Executive Alignment: Why Top Teams Need Honesty, Not Harmony

The idea that executive teams should think alike is comforting but unrealistic. Direction needs coherence, but total agreement usually means someone stopped speaking up. Lencioni has said that real clarity can’t be manufactured through slogans or slide decks. “Alignment and clarity,” he wrote, “cannot be achieved in one fell swoop with a series of generic buzzwords and aspirational phrases crammed together.” The strongest teams I’ve seen operate through visible, respected tension. Finance pushes for discipline. Strategy pushes for expansion. Risk pushes for protection. Culture pushes for capacity. Together they form an internal ecosystem of checks and balances. Call it necessary misalignment or structured divergence—it’s what keeps a company honest. The work isn’t to erase difference but to make it safe. ... Executive behavior multiplies downward. When the top team loses coherence, the entire system learns to mimic its caution. Lencioni has often written that when trust is strong, conflict transforms. “When there is trust,” he explained, “conflict becomes nothing but the pursuit of truth.” And the reward for that truth, he reminds us, is organizational health. “The single greatest advantage any company can achieve,” Lencioni wrote, “is organizational health.” Those two ideas—truth and health—connect directly with Gallup’s research. They’re not soft metrics; they’re what make trust and accountability visible.


Why Cybersecurity Jobs Are Likely To Resist AI Layoff Pressures: Experts

The bottom line is that there will “always” be a need for a significant number of cybersecurity professionals, Edross said. “I do not believe this technology will ever make the human obsolete.” The notion that SOC analyst jobs and other roles requiring security expertise might be at risk would have been unthinkable just a few years ago — making the sudden shift to discussions around AI-driven redundancy for humans in the SOC all the more startling. “If you go back about two years ago, there’s this constant hum in the industry that we have a few million less cybersecurity professionals than we need,” Palo Alto Networks CEO Nikesh Arora said. ... “AI still has a significant propensity to make mistakes, which in the security world is quite problematic,” said Boaz Gelbord, senior vice president and chief security officer of Akamai. “So you’re always going to need a human check on that.” At the same time, human orchestration of the AI systems will be an ongoing necessity as well, according to experts. “You need that creativity. You need to understand and piece together and review the LLM’s work,” said Dov Yoran, co-founder and CEO of Command Zero, a startup offering an LLM-powered cyber investigation platform. “I don’t see how the human goes away.” And while entry-level security analysts may find parts of their roles becoming redundant due to AI, most organizations will want to continue employing them, if only to prepare them to become higher-tier analysts over time, Yoran said.


MCP doesn’t move data. It moves trust

Many assume MCP will replace APIs, but it can’t and shouldn’t. MCP defines how AI models can safely call tools; APIs remain the mechanisms that connect those tools to the real world. Without APIs, an MCP-enabled AI can think, reason and recommend, but it can’t act. Without MCP, those same APIs remain open highways with no traffic rules. Autonomy requires both. MCP will give rise to a new class of enterprise software: AI control planes that sit between reasoning and execution. These systems will combine access policy, auditing, explainability and version control — the governance scaffolding for safe autonomy. But governance alone isn’t enough. Logging requests does not make them effective. Without APIs, MCP remains a supervisory layer, not an operational one. The future belongs to systems that can both decide responsibly and act reliably. ... MCP will not eliminate complexity. It will simply move it — from data management to decision management. The challenge ahead is to make that complexity visible, traceable and accountable. In enterprise AI, the real challenge is no longer technical feasibility; it’s moral architecture. The question is shifting from what AI can do to what it should be allowed to do. ... MCP represents the architecture of restraint, a new language of control between reasoning and reality. APIs will keep moving data. MCP will govern how intelligence uses it. And when those two layers work in harmony, enterprises will finally move from systems that record what happened to systems that make things happen.
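The control-plane idea above can be sketched as a thin layer between the model's tool request and the API that executes it: a policy check plus an append-only audit record. This is an illustrative sketch only; the policy shape, tool names, and function signature are assumptions, not part of any MCP specification.

```python
import datetime

# Hypothetical access policy: which tools the model may invoke, and limits.
POLICY = {"issue_refund": {"max_amount": 100}}
AUDIT_LOG = []  # append-only record of every decision, allowed or not

def call_tool(tool, args, api_registry):
    """Gate a model-requested tool call: check policy, log, then hit the API."""
    rule = POLICY.get(tool)
    allowed = rule is not None and args.get("amount", 0) <= rule["max_amount"]
    AUDIT_LOG.append({
        "tool": tool, "args": dict(args), "allowed": allowed,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"policy denies {tool} with {args}")
    return api_registry[tool](**args)  # the API still performs the real action
```

Note that the denied call is still logged: the audit trail records decisions, not just actions, which is what makes the layer a governance surface rather than a pass-through.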


AI Copilots for Good Governance and Efficient Public Service Delivery

While AI copilots hold immense potential for public service delivery, several challenges must be addressed before large-scale adoption can be facilitated in India. While India’s digital and policy landscape provides fertile ground for AI copilots, several challenges need to be addressed to ensure their responsible and effective adoption. One of the foremost concerns is data privacy and security. Copilots in governance will inevitably process large volumes of sensitive personal and financial data from citizens and businesses. Without adequate safeguards, this raises risks of misuse, unauthorised access, or surveillance overreach. The Digital Personal Data Protection Act, 2023, establishes a strong legal framework for data fiduciaries. Yet, its principles must be operationalised through privacy-preserving sandboxes, anonymised training datasets, and clear consent mechanisms tailored for AI-driven interfaces. ... Equally pressing is the challenge of algorithmic bias and fairness. AI copilots, if trained on unbalanced or non-representative datasets, can perpetuate linguistic, gender, or regional biases, disadvantaging marginalised users. To prevent such inequities, India’s AI governance could mandate fairness audits, algorithmic transparency, and explainability in all government-deployed copilots. This may be complemented by inclusive design standards that ensure accessibility across India’s diverse languages and digital contexts. 


Fighting AI with AI: Adversarial bots vs. autonomous threat hunters

Attackers already have systemic advantages that AI amplifies dramatically. While there are some great examples of how AI can be used for defense, these methods, if used against us, could be devastating. ... It’s hard to gain context at that scale. Most companies have multiple defensive layers — and they all have flaws. Using weaknesses in those layers, attackers weave through them and create attack paths. The question is: How are we finding those paths before they do? ... The use of AI bots within a digital twin enables continuous, multi-threaded threat hunting and attack path validation without impacting production environments. This addresses the prioritization challenges that security and IT teams struggle with in a meaningful way. Really, digital twins offer the same benefits to security teams as physical twins provided to NASA scientists more than 55 years ago: accurate simulations of how a given change might impact large, complex and highly dynamic attack surfaces. Plus, it’s exciting to imagine how the UX might evolve to help defenders visualize what’s happening in unprecedented ways. ... AI is a truly transformational technology and it’s exciting to think about how AI defense can evolve over the next few years. I encourage product builders to think big. Why not draw inspiration from science fiction? 
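A digital twin can be queried for exactly those attack paths before an adversary finds them. A minimal sketch, assuming the twin has been reduced to a graph that maps each asset to the assets reachable by exploiting a known weakness:

```python
from collections import deque

def attack_paths(graph, entry, target):
    """Enumerate simple paths an attacker could chain from entry to target.

    `graph` maps each asset to assets reachable via a known weakness.
    Run against a digital-twin model, never against production."""
    paths, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # skip cycles
                queue.append(path + [nxt])
    return paths
```

Real attack-path engines add exploit likelihood and privilege context to each edge; the breadth-first enumeration above is the skeleton they prioritize on.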


AI is shaking up IT work, careers, and businesses - and here's how to prepare

"AI opened a whole new can of worms for security," said Tsai. "Overall, the demand for IT jobs is going to increase at three times the rate of all jobs." This generally presents a positive outlook for the IT industry, but it's also fueling a shift in how companies conduct hiring and what they are looking for. Spiceworks previewed its 2026 State of IT report, a survey that gathers insights from over 800 IT professionals at small and medium-sized companies on current trends, and found that the skills most in demand are reflecting the growth of AI. ... "If you are in IT, perhaps upleveling your skills, learning about AI is a very smart thing to do now. It can make you very productive, and it can help you do more or less," said Tsai. Taking it upon yourself to do this work is especially important because, as I cited during the panel, companies are investing a lot of money into AI solutions, but training is increasingly left behind or not prioritized. ... "When it comes to AI, whether it is bringing in completely and maybe doing a small language model to AI, or doing inferencing, or you can run many of the LLMs internally," said Rapozza. "Businesses are building up your construction to support those kinds of things." Does this level of investment mean companies are seeing an immediate ROI? Not exactly, but there is progress being made in that direction. As Rodrigo Gazzaneo, senior GTM Specialist, generative AI, Amazon Web Services (AWS), noted, companies are already seeing positive outcomes.


A developer’s Hippocratic Oath: Prioritizing quality and security with the fast pace of AI-generated coding

In the context of the medical field, physicians are taught ‘do no harm,’ and what that means is their highest duty of care is to make sure that the patient is first, and that they do not conduct any sort of treatments on the patient without first validating that that’s what’s best for the patient, ... The responsibility for software engineers is similar: when they’re asked to make a change to the codebase, they need to first understand what they’re being asked to do and make sure that’s the best course of action for the codebase. “We’re inundated with requests,” Johnson said. “Product managers, business partners, customers are demanding that we make changes to applications, and that’s our job, right? It’s our job to build things that provide humanity and our customers and our businesses value, but we have to understand what is the impact of that change. How is it going to impact other systems? Is it going to be secure? Is it going to be maintainable? Is it going to be performant? Is it ultimately going to help the customer?” ... “We all love speed, right? But faster coding is not actually producing a high quality product being shipped. In fact, we’re seeing bottlenecks and lower quality code.” He went on to say that testing is the discipline that could be most transformed by generative AI. It is really good at studying the code and determining what tests you’re missing and how to improve test coverage.


API Key Security: 7 Enterprise-Proven Methods to Prevent Costly Data Breaches

To prevent API keys from leaking, the first and foremost rule is, as you guessed, never store them in the code. Embedding API keys directly in client-side code or committing them to version control systems is, no doubt, a recipe for disaster: Anyone who can access the code or the repository can steal the keys. ... Implementing an API key storage system? Out of the question, because securely storing and managing API keys brings tremendous operational overhead across storage, management, usage, and distribution. ... API Gateways, like AWS API Gateway, Kong, etc., are designed to solve these problems, simplifying and centralizing the management of all APIs, providing a single entry point for all requests. Features like rate limiting, throttling, and DDoS protection are baked in; API gateways can also provide centralized logging and monitoring; they even provide more features like input validation, data masking, and response filtering. ... All the above practices enhance API security in either the usage/storage or production environment, but there is another area where API keys could be compromised: the continuous integration/continuous deployment systems and pipelines. By nature, CI/CD involves running automation scripts and executing commands in a non-interactive way, which sometimes requires API keys, and this means the keys need to be stored somewhere and passed to the pipelines at runtime.
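In practice, keeping keys out of the code usually reduces to reading them from the runtime environment, where a secret manager or the CI/CD pipeline's secret store injects them. A minimal sketch (the variable name is illustrative):

```python
import os

def get_api_key(name: str = "PAYMENTS_API_KEY") -> str:
    """Fetch an API key injected at runtime (secret store, CI/CD variable).

    Nothing secret lives in source control, and a missing key fails loudly
    instead of silently falling back to a hardcoded default."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; configure it in your secret store")
    return key
```

Failing loudly matters: a hardcoded fallback "for local development" is exactly the kind of key that ends up committed and leaked.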

Daily Tech Digest - November 14, 2025


Quote for the day:

"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh



When will browser agents do real work?

Vision-based agents treat the browser as a visual canvas. They look at screenshots, interpret them using multimodal models, and output low-level actions like “click (210,260)” or “type “Peter Pan”.” This mimics how a human would use a computer—reading visible text, locating buttons visually, and clicking where needed. ... DOM-based agents, by contrast, operate directly on the Document Object Model (DOM), the structured tree that defines every webpage. Instead of interpreting pixels, they reason over textual representations of the page: element tags, attributes, ARIA roles, and labels. ... Running a browser agent once successfully doesn’t mean it can repeat the task reliably. The next frontier is learning from exploration: transforming first-time behaviors into reusable automations. A promising strategy starting to be deployed more and more is to let agents explore workflows visually, then encode those paths into structured representations like DOM selectors or code. ... With new large language models excelling at writing and editing code, these agents can self-generate and improve their own scripts, creating a cycle of self-optimization. Over time, the system becomes similar to a skilled worker: slower on the first task, but exponentially faster on repeat executions. This hybrid, self-improving approach—combining vision, structure, and code synthesis—is what makes browser automation increasingly robust. 
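The "explore once, replay many times" idea can be sketched as recording the actions discovered during a first, vision-driven run into DOM-selector steps that a cheap scripted runner replays deterministically. The `page` interface below is a hypothetical stand-in for a real browser driver, and the trace format is an assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str    # "click" or "type"
    selector: str  # DOM selector discovered during the first, visual run
    text: str = ""  # payload for "type" actions

def compile_trace(trace):
    """Turn a one-off exploration trace into a reusable, structured script."""
    return [Step(**event) for event in trace]

def replay(script, page):
    """Re-run recorded steps deterministically; no model call needed."""
    for step in script:
        if step.action == "click":
            page.click(step.selector)
        elif step.action == "type":
            page.type(step.selector, step.text)
```

The expensive multimodal model is only in the loop the first time; subsequent runs are ordinary automation, which is where the "slower on the first task, exponentially faster on repeats" economics comes from.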


Security Degradation in AI-Generated Code: A Threat Vector CISOs Can’t Ignore

LLMs have been a boon for developers since OpenAI’s ChatGPT was publicly released in November 2022, followed by other AI models. Developers were quick to utilize the tools, which significantly increased productivity for overtaxed development teams. However, that productivity boost came with security concerns, such as AI models trained on flawed code from internal or publicly available repositories. Those models introduced vulnerabilities that sometimes spread throughout the entire software ecosystem. One way to address the problem was by using LLMs to make iterative improvements to code-level security during the development process, under the assumption that LLMs, given the job of correcting mistakes, would amend them. The study, however, turns that assumption on its head. Although previous studies (and extensive real-world experience, including our own data) have demonstrated that an LLM can introduce vulnerabilities in the code it generates, this study went a step further, finding that iterative refinement of code can introduce new errors. ... The security degradation introduced in the feedback loop raises troubling questions for developers, tool designers and AI safety researchers. The answer to those questions, the authors write, involves human intervention. Developers, for instance, must maintain control of the development process, viewing AI as a collaborative assistant rather than an autonomous tool.


Are We in the Quantum Decade?

It would be prohibitively expensive even for a Fortune 100 company to own, operate and maintain its own quantum computer. It would require a quantum ecosystem that includes government, academia and industry entities to make it accessible to an enterprise. In most cases, the push and funding could come from the government or through cooperation among nations. Historically, new computing technology was rented and used as a service. Compute resources financed by government were booked in advance. Processing occurred in batches using resource-sharing techniques such as time slicing. Equivalent models are expected for quantum processing. ... The era of quantum computing looms large, but enterprises and IT teams should be thinking about it today. Infrastructure needs to be deployed and algorithms need to be written for executing business use cases. "For several years to come, CIOs may not have much to do with quantum computing. But they need to know what it is, what it can do and how much it costs," said Lawrence Gasman, president of Communications Industry Researchers. "Quantum networks and cybersecurity will become necessary for secure communications by 2030 or even earlier." Quantum computing will not replace classical computing, but data center providers need to be thinking about how they will integrate the two architectures using interconnects like co-packaged optics.


When Data Gravity Meets Disaster Recovery

The more data aggregates in one place, the more it pulls everything else toward it: apps, analytics, integrations, even people and processes. Over time, that environment becomes a tightly woven web of dependencies. While it may be fine for day-to-day operations, it becomes a nightmare when something breaks. At that point, DR turns into a delicate task of relocating an entire ecosystem, not just a matter of simply copying files. You have to think about relationships, which systems rely on which datasets, how permissions are mapped, and how applications expect to find what they need. Of course, the bigger that web gets, the heavier the “gravitational field.” Moving petabytes of interconnected data across regions or clouds isn’t fast or easy. It takes time, bandwidth, and planning, and every extra gigabyte adds friction – in other words, the more gravity your data has, the harder it is to recover from disaster quickly. ... To push back against gravity, organizations are rethinking their architectures. Instead of forcing all data into one environment, they’re distributing it intelligently, keeping mission-critical workloads close to where they’re created, while replicating copies to nearby or complementary environments for protection. Hybrid and multi-cloud DR strategies have become the go-to solution for this. They blend the best of both worlds: the low-latency performance of local infrastructure with the flexibility and geographic reach of cloud storage.
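The "gravitational field" has concrete numbers behind it. A back-of-the-envelope transfer-time calculation, assuming decimal units and an illustrative link-efficiency factor (both assumptions, not measurements):

```python
def transfer_days(petabytes: float, gbps: float, efficiency: float = 0.7) -> float:
    """Rough days to move `petabytes` over a `gbps` link (illustrative only)."""
    bits = petabytes * 8e15                      # 1 PB = 8e15 bits (decimal)
    seconds = bits / (gbps * 1e9 * efficiency)   # effective throughput
    return seconds / 86400
```

Under these assumptions, a single petabyte over a dedicated 10 Gbps link takes roughly two weeks, before any of the dependent applications, permissions, or integrations are re-pointed.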


What’s Driving the EU’s AI Act Shake-Up?

The move to revise the AI Act follows sustained lobbying from US tech giants. In October, the Computer and Communications Industry Association (CCIA), whose members include Apple, Meta, and Amazon, launched a campaign pushing for simplification not only of the AI Act but of the EU’s entire digital rulebook. Meanwhile, EU officials have reportedly engaged with the Trump administration on these issues. ... The potential delay reflects pressure from national authorities. Denmark and Germany have both pushed for a one-year extension. A spokesperson from Germany’s Federal Ministry for Digital Transformation and Government Modernization said that a delay “would allow sufficient time for the practical application of European standards by AI providers, with standards still currently being elaborated.” ... Another major reform under consideration is expanding and centralizing oversight powers within the Commission’s AI Office. Currently responsible for general-purpose AI models (GPAI), the office would gain new authority to oversee all AI systems based on GPAI and conduct conformity assessments for certain high-risk systems. The Commission would also supervise online services deemed to pose “systemic risk” under the Digital Services Act. This would shift more power to Brussels and expand the mandate of the Commission’s AI Office beyond its current role supervising GPAI.


BITS & BYTES : The Foundational Lens for Enterprise Transformation

BITS serves as high-level strategic governance—ensuring balanced maturity assessments across business alignment, information-centric decision-making, technology enablement, and security resilience—while leveraging BDAT’s detailed sub-domains (layers and components) for tactical implementation and operational oversight. This allows organizations to maintain BDAT’s precision in decomposing complex IT landscapes (e.g., mapping specific data architectures or application portfolios) within BITS’s overarching pillars, fostering adaptive governance that scales from atomic “bits” of change to enterprise-wide transformations ... If BITS defines what must be managed, BYTES (Balanced Yearly Transformation to Enhance Services) defines how change must be processed. BYTES is more than a set of principles; it is a derivative of the core architectural lifecycle: Plan (Balanced Yearly), Design & Build (Transformation Enhancing), and Run (Services). Each component of BYTES directly maps to the mandatory stages of a continuous transformation framework, enabling architects to manage change at its source. ... The BITS & BYTES framework is not intended to replace existing architecture frameworks (e.g., TOGAF, Zachman, DAMA, IT4IT, SAFe). Instead, it acts as a meta-framework—a simplified, high-level matrix that accommodates and contextualizes the applicability of all existing models. 


Unlocking GenAI and Cloud Effectiveness With Intelligent Archiving

Unlike tiering, which functions like a permanent librarian selectively fetching individual files from deep storage, true archiving is a one-time event that moves files based on defined policies, such as last access or modification date. Once archived, files are stored on a long-term platform and remain accessible without reliance on any intermediary system or application. In this context, one of the main challenges is that most enterprise data is unstructured, including everything from images and videos to emails and social media content. Collectively, these vast and diverse data lakes present a formidable management challenge, and without rigorous control, organizations risk falling victim to the classic “garbage in, garbage out” problem. ... Modern archiving technologies that connect directly to both primary and archive storage platforms eliminate the need for a middleman, drastically improving migration speed, accuracy, and long-term data accessibility. This means organizations can migrate only what’s necessary, ensuring high-value data is cloud-ready while offloading cold data to cost-efficient archival platforms. This not only reduces cloud storage costs but also supports the adoption of cloud-native formats, enabling greater scalability and performance for active workloads. ... In modern enterprises, more than 60% of data is typically inactive and often goes untouched for years, yet it continues to consume high-performance (and high-cost) storage.
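A policy of the kind described, moving files based on last access or modification date, can be sketched as a one-time sweep. The sketch below keys on modification time, since access times are often disabled or unreliable on modern filesystems; the threshold is an illustrative assumption:

```python
import os
import time

def archive_candidates(root: str, days: int = 365):
    """List files untouched for `days`: candidates for one-time archival."""
    cutoff = time.time() - days * 86400
    candidates = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mtime < cutoff:  # older than the policy window
                candidates.append(path)
    return candidates
```

A production policy engine would add exclusion rules and verify each file lands intact on the archive platform before removal, but the selection logic is this simple at its core.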


Why 60% of BI Initiatives Fail (and How Enterprises Can Avoid It)

Many BI projects fail because goals and outcomes aren’t clearly defined. While enterprises may be confident that they understand BI gaps, often their goals are vague, lacking proper detail and internal consensus. ... Poor project management practices, vague processes, and changing responsibilities create even more confusion. In many failed BI projects, BI is viewed as “just another IT initiative,” whereas it should be treated as part of a business transformation program. Without active sponsorship and accountability, the technology may be delivered, but its adoption and impact suffer. ... Agile and iterative methods are often preferred since they are effective for BI. The waterfall method, by contrast, is not recommended for BI projects since it lacks the necessary agility to adapt to changing requirements, iterative data exploration, and continuous business feedback. Under the waterfall approach, users are engaged only at the beginning and the end of the project, which leaves gaps in development or data analysis when issues arise. ... A system is only as good as the users who use it; research has shown that 55% of users lack confidence in BI tools due to insufficient training. Enterprises often expend considerable resources on deployment, but neglect enablement. If employees can’t figure out how to navigate dashboards, interpret data quality and visualizations, or use insights to make daily decisions, adoption rates suffer.


Authentication in the age of AI spoofing

Unlike traditional malware, which may find its way into networks through a compromised software update or downloads, AI-powered threats utilize machine learning to analyze how employees authenticate themselves to access networks, including when they log in, from which devices, typing patterns and even mouse movements. The AI learns to mimic legitimate behavior while collecting login credentials and is ultimately deployed to evade basic detection. ... Beyond the statistics, AI’s effectiveness is driven by its exponentially improving abilities to social engineer humans — replicating writing style, voice cadence, facial expressions or speech with subtle nuance and adding realistic context by scanning social media and other publicly available references. The data is striking and reflects the crucial need for a multi-layer approach to help sidestep the exponentially escalating ability for AI to trick humans. ... Cryptographic protection complements biometric authentication, which verifies “Is this the right person?” at the device level, while passkeys are used to verify “Is this the right website or service?” at the network level. Multi-modal biometrics, such as facial recognition plus fingerprint scanning or biometrics plus behavioral patterns, further strengthen this approach. As AI-powered attacks make credential theft and impersonation attacks more sophisticated, the only sustainable line of defense is a form of authentication that cannot be tricked or must be cryptographically verified. 
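On the defensive side, the same behavioral signals the attackers' models learn can feed a human-checkable baseline. A toy sketch using only login hour (a real system would combine device, location, and typing-cadence features; the threshold of 3 is an illustrative convention):

```python
import statistics

def login_anomaly(history_hours, new_hour):
    """Z-score of a login's hour-of-day against the user's own baseline.

    Toy illustration of one behavioral signal; values above ~3 would
    typically be routed to step-up authentication or human review."""
    mean = statistics.mean(history_hours)
    sd = statistics.pstdev(history_hours) or 1.0  # avoid divide-by-zero
    return abs(new_hour - mean) / sd
```

A user who always logs in around 9 a.m. produces a large score for a 3 a.m. attempt, which is exactly the kind of deviation an AI-mimicked session tries, and often fails, to reproduce consistently.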


Why your security strategy is failing before it even starts

The biggest mistake I see among organizations is initiating cybersecurity efforts with technology rather than prioritizing risk and business alignment. Cybersecurity is often mischaracterized as a technical issue, when in reality it’s a business risk management function. Failure to establish this connection early often results in fragmented decision-making and limited executive engagement. Effective cybersecurity strategies should be embedded into business objectives from the outset. This requires identifying the business’s critical assets, assessing potential threats and motivations, and evaluating the impact of assets becoming compromised. Too often, CISOs jump straight into acquiring cybersecurity tools without addressing these questions. ... First, the threat landscape shifted dramatically. Cybersecurity attacks today target OT and ICS. In food manufacturing, those systems run production lines, refrigeration, and safety processes. A cyber incident in these areas extends beyond data loss, it can disrupt production and even compromise food safety, introducing a far more complex level of risk. Second, it became evident to me that cybersecurity cannot operate in isolation. It must support and enable business operations and growth. Today, my approach is risk-based and aligned with our business priorities, while still built on zero trust principles. We focus on resilience, not just compliance, and OT security is a core pillar of that strategy. 

Daily Tech Digest - November 13, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



Does your chatbot have 'brain rot'? 4 ways to tell

Oxford University Press, publisher of the Oxford English Dictionary, named "brain rot" as its 2024 Word of the Year, defining it as "the supposed deterioration of a person's mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging." ... Trying to draw exact connections between human cognition and AI is always tricky, despite the fact that neural networks -- the digital architecture upon which modern AI chatbots are based -- were modeled upon networks of organic neurons in the brain. ... That said, there are some clear parallels: as the researchers note in the new paper, for example, models are prone to "overfitting" data and getting caught in attentional biases ... If the ideal AI chatbot is designed to be a completely objective and morally upstanding professional assistant, these junk-poisoned models were like hateful teenagers living in a dark basement who had drunk way too much Red Bull and watched way too many conspiracy theory videos on YouTube. Obviously, not the kind of technology we want to proliferate. ... Obviously, most of us don't have a say in what kind of data gets used to train the models that are becoming increasingly unavoidable in our day-to-day lives. AI developers themselves are notoriously tight-lipped about where they source their training data from, which means it's difficult to rank consumer-facing models


7 behaviors of the AI-Savvy CIO

"The single most critical takeaway for CIOs is that a strong data foundation isn't optional -- it's critical for AI success. AI has made it easy to build prototypes, but unless you have your data in a single place, up to date, secured, and well governed, you'll struggle to put those prototypes into production. The team laying the groundwork for that foundation and getting enterprises' data AI-ready is data engineering. CIOs who still see data engineering as a back-office function are already five years behind, and probably training their future competitors. ... "Your data will never be perfect. And it doesn't have to be. It needs to be indicative of your company's reality. But your data will get a lot better if you first use AI to improve the UX. Then people will use your systems more, and in the way intended, creating better data. That better data will enable better AI. And the virtuous cycle will have begun. But it starts with the human side of the equation, not the technological." ... CIOs don't need deep technical mastery such as coding in Python or tuning neural networks -- but they must understand AI fundamentals. This includes grasping core AI principles, machine learning concepts, statistical modeling, and ethical implications. Mastery starts with CIOs understanding AI as an umbrella of technologies that automate different things. With this foundational fluency, they can ask the right questions, interpret insights effectively, and make informed strategic decisions. Let's look at the three AI domains.


The economics of the software development business

Some software companies quietly tolerated piracy, figuring that the more their software spread—even illegally—the more legitimate sales would follow in the long run. The argument was that if students and hobbyists pirated the software, it would lead to business sales when those people entered the workforce. The catchphrase here was “piracy is cheaper than marketing.” This was never an official position, but it was a common unspoken practice. ... Over the years, the boxes got thinner and the documentation went onto the Internet. For a time, though, “online help” meant a *.hlp file on your hard drive. People fought hard to keep that type of online help well into the Internet age. “What if I’m on an airplane? What if I get stranded in northern British Columbia?” Eventually, the physical delivery of software went away as Internet bandwidth allowed for bigger and bigger downloads. ... SaaS, too, has interesting economic implications for software companies. The marketplace generally expects a free tier for a SaaS product, and delivering free services can become costly if not done carefully. The compute costs money, after all. An additional problem is making sure that your free tier is good enough to be useful, but not so good that no one wants to move up to the paid tiers. ... The economics of software have always been a bit peculiar because the product is maddeningly costly to design and produce, yet incredibly easy to replicate and distribute. The years go by, but the problem remains the same: how to turn ideas and code into a profitable business?


Beyond the checklist: Shifting from compliance frameworks to real-time risk assessments

One of the most overlooked aspects of risk assessments is cadence. While gap analyses are sometimes done yearly or to prepare for large-scale audits, risk assessments need to be continuous or performed on a regular schedule. Threats do not respect calendar cycles. Major changes, including new technologies, mergers, regulatory changes or implementing AI, need to trigger reassessments. ... Risk assessments should culminate in outputs that business leaders can act on. This includes a concise risk heat map, a prioritized remediation roadmap and clear asks, such as budget, ownership and timelines. These deliverables convert technical findings into strategic decisions. They also help build trust with stakeholders, especially in organizations that may be new to formal risk management. ... Targeted risk assessments can be viewed as a low-cost, fundamental option. They are best suited to companies that have limited budget or are not prepared for a full review of the framework. With reduced scope, shorter turnaround and transparent business value, such assessments enable rapid establishment of trust, delivering prioritized outcomes. ... Risk assessments are not just checkboxes. They are tools for making decisions. The best programs are aligned with the business, focused, consistent and made to change over time.
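To make "outputs that business leaders can act on" concrete, here is a minimal illustrative sketch of turning raw risk ratings into a heat-map bucket and a prioritized remediation list. The scoring model (likelihood × impact on 1–5 scales), the band thresholds, and the sample risk entries are all invented for this example; real programs would calibrate these against their own risk framework.

```python
# Illustrative only: scoring model, thresholds, and risk entries are hypothetical.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


def heat_bucket(score: int) -> str:
    """Map a 1-25 score onto the coarse bands a heat map would color."""
    if score >= 15:
        return "critical"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"


def remediation_roadmap(risks: list[Risk]) -> list[tuple[str, str, int]]:
    """Sort risks by descending score: the prioritized remediation list."""
    ranked = sorted(risks, key=lambda r: r.score, reverse=True)
    return [(r.name, heat_bucket(r.score), r.score) for r in ranked]


risks = [
    Risk("Unpatched VPN appliance", likelihood=4, impact=5),
    Risk("Shadow AI tool usage", likelihood=3, impact=3),
    Risk("Stale vendor access", likelihood=2, impact=4),
]
for name, bucket, score in remediation_roadmap(risks):
    print(f"{bucket:>8}  {score:>2}  {name}")
```

The point of the sketch is the translation step: technical findings go in, and what comes out is ranked, labeled, and ready to attach budget, ownership, and timelines to.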


Legitimate Interest as a Lawful Basis: Pros, Cons and the Indian DPDP Act’s Stance

Under the EU’s General Data Protection Regulation (GDPR), for example, a company can process data if it is “necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (Article 6(1)(f) GDPR), so long as those interests are not overridden by the individual’s rights. However, India’s new Digital Personal Data Protection Act, 2023 (DPDP Act) pointedly does not include legitimate interest as a standalone lawful ground for processing. Instead, the Indian law relies primarily on consent and a limited set of “legitimate uses” explicitly enumerated in the statute. This divergence raises important questions about the pros and cons of the legitimate interest basis, its impact on the free flow of data, and whether India might benefit from adopting a similar concept. ... India’s decision to omit a general legitimate interest clause has sparked debate. There are advantages and disadvantages to this approach, and its impact on data flows and innovation is a key consideration. Pros / Rationale for Omission: From a privacy rights perspective, the absence of an open-ended legitimate interest basis means stronger individual control and legal certainty. The law explicitly tells citizens and businesses what the non-consensual exceptions are (mostly common-sense or public-interest scenarios); everything else, by default, requires consent.


CIOs: Collect the right data now for future AI-enabled services

In its Technology Trends Outlook for 2025 report, McKinsey suggests the technology landscape continues to undergo significant innovation-driven shifts. The consultant says success will depend on executives identifying high-impact areas where their businesses can use AI, while addressing external factors such as regulatory shifts and ecosystem readiness. CIOs, as the guardians of enterprise technology, will be expected to embrace this challenge, but how? For Steve Lucas, CEO at technology specialist Boomi, digital leaders must start with a recognition that the surfeit of data held in modern enterprises is simply a starting point for what comes next. “There’s plenty of data,” he says. “In fact, there’s too much of it. We worry about collecting, storing, and accessing data. I think a successful approach is about determining the data that matters. As a CIO, do you understand what data matters today and what emerging technologies will matter tomorrow?” ... While it can be tough to see the wood for the trees, Corbridge suggests CIOs should search for established data roots within the enterprise. “It’s about going back to the huge volumes of data you’ve got already and working out how you put that information in the right place so it can be used in the right way for your AI projects,” he says. Focusing on the fine details is an approach that chimes with Ian Ruffle, head of data and insight at UK breakdown specialist RAC.


How TTP-based Defenses Outperform Traditional IoC Hunting

To fight modern ransomware, organizations must shift from chasing IoCs to detecting attacker behaviors — known as Tactics, Techniques, and Procedures (TTPs). The MITRE ATT&CK framework provides a detailed overview of these behaviors throughout the attack lifecycle, from initial access to impact. TTPs are challenging for attackers to modify because they represent core behavioral patterns and strategic approaches, unlike IoCs, which are surface-level elements that can be easily altered. This shift is reinforced by the so-called ‘Pyramid of Pain’ – a conceptual model that ranks indicators by how difficult they are for adversaries to alter. At the base are easily changed elements like hash values and IP addresses. At the top are TTPs, which represent the attacker’s core behaviors and strategies. Disrupting TTPs forces adversaries to change their entire strategy, which makes behavior-based detection the most effective approach for defenders and the most resource-intensive for attackers to evade. ... When security and networking are natively integrated, policy enforcement is consistent, micro-segmentation is practical, and containment actions can be executed inline without stitching together multiple consoles. The cloud model also enables continuous, global updates to prevention logic and the ability to apply AI/ML on aggregated, high‑fidelity data feeds to reduce noise and improve detection quality. All this recalls the military OODA loop (observe, orient, decide, act), a model that can help speed up incident response.
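The IoC-versus-TTP contrast can be sketched in a few lines. This is a toy illustration, not a production detection rule: the event format, thresholds, and sample hash are invented, and the behavioral rule only loosely mirrors ATT&CK T1486 (Data Encrypted for Impact). The point is that renaming or recompiling the malware defeats the hash lookup but not the behavioral check.

```python
# Toy contrast between IoC matching and TTP-style behavioral detection.
# Event format, thresholds, and sample data are hypothetical.
from collections import defaultdict

KNOWN_BAD_HASHES = {"9f2c0deadbeefcafe"}  # IoC: trivial for attackers to change


def ioc_match(file_hash: str) -> bool:
    """IoC hunting: a simple blocklist lookup."""
    return file_hash in KNOWN_BAD_HASHES


def ttp_mass_encryption(events: list[dict], threshold: int = 100,
                        window_s: float = 60.0) -> set[str]:
    """Flag processes that modify many files in a short window -- a
    behavior loosely mirroring ATT&CK T1486 (Data Encrypted for Impact).
    A rebuilt or renamed binary still exhibits this pattern."""
    buckets: dict[str, list[float]] = defaultdict(list)
    for e in events:
        if e["action"] == "file_modified":
            buckets[e["pid"]].append(e["ts"])
    flagged = set()
    for pid, times in buckets.items():
        times.sort()
        for i in range(len(times)):
            # count modifications falling inside the sliding window
            j = i
            while j < len(times) and times[j] - times[i] <= window_s:
                j += 1
            if j - i >= threshold:
                flagged.add(pid)
                break
    return flagged
```

Changing a hash costs an attacker a recompile; avoiding the behavioral rule means throttling encryption below the threshold, which directly undercuts the ransomware's purpose — exactly the "pain" the pyramid describes.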


Healthcare security is broken because its systems can’t talk to each other

To maintain effectiveness, healthcare organizations should continually evaluate their security toolset for relevance, integration potential, and overall value to the security program. Prioritizing solutions that support open standards and seamless integration helps minimize context switching and alert fatigue, while ensuring that the security team remains engaged and productive. Ultimately, the decision to balance specialized point solutions with broader integrated platforms must be guided by strategic priorities, resource capacity, and the need to support both operational efficiency and clinical excellence. ... A critical consideration is the interoperability of security tools across both cloud and on-premises environments. Healthcare organizations must assess whether their security solutions need to span multiple cloud providers, support on-premises systems, or both, and determine how long on-premises support will be necessary as applications gradually shift to the cloud. Cloud providers are increasingly acquiring and integrating advanced security technologies, offering unified solutions that reduce the need for multiple monitoring platforms. This consolidation not only lessens alert fatigue but also enhances real-time visibility into security threats, an important advantage for healthcare, where timely detection is vital for protecting patient data and ensuring clinical continuity.


The Hard Truth About Trying to AI-ify the Enterprise

Every company has a few proofs of concept running. The problem isn't experimentation; it's scalability. How do you take those POCs and embed them into your business fabric? Many enterprises get trapped in "PowerPoint transformation": ambitious visions that never cross into operational workflows. "I've seen organizations invest millions in analytics platforms and data lakes. But when you ask how that's translating into faster underwriting, better risk models or smarter supply chains, there's often silence," Sen said. "That's because AI doesn't fail at the technology layer -- it fails at the architecture of adoption." ... The central challenge, Sen argues, is that most enterprises treat AI as an overlay rather than an integral part of their operational core. "You can't bolt it on top of outdated systems and expect it to transform decision-making. The technology stack, data flow and governance model all need to evolve together," he said. That evolution is what Gartner describes as the pivot from "defending AI pilots to expanding into agentic AI ROI." Organizations that mastered generative AI in 2024 are now moving toward autonomous, interconnected systems that can execute tasks without human micromanagement. Sen points to his own experience at Linde plc as an early example. His team's first gen AI deployment at Linde was built for the audit department. "Our internal audit head wanted a semantic search database and a generative model to cut audit report generation time by half," he said.


Designing Edge AI That Works Where the Mission Happens

The fastest way to make federal AI deployments more reliable is to build on existing systems rather than start from scratch. Every mission already generates valuable data -- drone imagery, satellite feeds, radar signals, logistics updates -- that tells part of the operational story. AI at the edge helps teams interpret that data faster and more accurately. Instead of rebuilding infrastructure, agencies can embed lightweight, mission-specific AI models directly into their existing systems. ... When AI leaves the data center, its security model must accompany it. Systems operating at the edge face distinct risks, including limited oversight, contested networks, and the potential for physical compromise. Protection has to be built into every layer of the system, from the silicon up, to ensure full-scale security. That starts with end-to-end encryption, protecting data at rest, in transit, and during inference. Hardware-based features, such as secure boot and Trusted Execution Environments, prevent unauthorized code from running, while confidential computing keeps information encrypted even as it’s being processed. ... A decade ago, deploying AI in remote or contested locations required racks of hardware and constant connectivity. Today, a laptop or a single sensor array can deliver that same intelligence locally, securely, and autonomously. The power of AI and edge computing isn’t measured in size or speed; it’s in relevance.
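The "verify before you run" idea behind secure boot can be sketched in software for an edge model artifact. This is a minimal stdlib sketch under stated assumptions: the key, names, and artifact are hypothetical, and a real deployment would anchor trust in hardware (signatures checked against keys in a secure element or TEE), not a shared HMAC secret held in software.

```python
# Sketch of verify-before-load for an edge AI artifact, echoing the
# secure-boot idea of refusing to run an unverified image. Names and
# key material are hypothetical; real systems anchor trust in hardware.
import hashlib
import hmac

SHARED_KEY = b"provisioned-at-depot"  # stand-in for a hardware-protected key


def sign_artifact(model_bytes: bytes) -> str:
    """Issued at build time and shipped alongside the model."""
    return hmac.new(SHARED_KEY, model_bytes, hashlib.sha256).hexdigest()


def verify_and_load(model_bytes: bytes, expected_mac: str) -> bytes:
    """Refuse to load a model whose MAC does not match: the software
    analogue of secure boot halting on a failed integrity check."""
    actual = hmac.new(SHARED_KEY, model_bytes, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(actual, expected_mac):
        raise RuntimeError("model artifact failed integrity check")
    return model_bytes  # a real system would deserialize inside a TEE
```

Even this toy version shows why the check matters at the edge: a tampered model swapped in through physical access or a contested network is rejected before it ever influences a decision.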