Showing posts with label data strategy. Show all posts
Showing posts with label data strategy. Show all posts

Daily Tech Digest - April 24, 2026


Quote for the day:

"To strongly disagree with someone, and yet engage with them with respect, grace, humility and honesty, is a superpower." -- Vala Afshar


🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 31 mins • Perfect for listening on the go.


Data debt: AI’s value killer hidden in plain sight

Data debt has emerged as a critical barrier to artificial intelligence success, acting as a "value killer" for modern enterprises. As CIOs prioritize AI initiatives, many are discovering that years of shortcuts, poor documentation, and outdated data management practices—collectively known as data debt—are causing significant project failures. Unlike traditional business intelligence, AI is uniquely unforgiving; it rapidly exposes deep-seated issues such as siloed information, inconsistent definitions, and missing context. Research suggests that delaying data remediation could lead to a 50% increase in AI failure rates and skyrocketing operational costs by 2027. This debt often accumulates through mergers, acquisitions, and the rapid deployment of fragmented systems without centralized governance. To address this growing threat, organizational leaders must treat data debt as a board-level risk rather than a simple technical glitch. Effective remediation requires more than just better technology; it demands a fundamental shift in organizational discipline and the standardization of core business processes. By establishing a reliable data foundation and rigorous governance, companies can prevent their AI ambitions from being stifled by sustained operational friction. Ultimately, addressing data debt is not just a prerequisite for scaling AI responsibly but a vital investment in long-term institutional stability and competitive advantage.


The Autonomy Problem: Why AI Agents Demand a New Security Playbook

As artificial intelligence transitions from passive chat interfaces to autonomous agents, the cybersecurity landscape faces a fundamental shift that renders traditional defense models insufficient. This evolution, often referred to as the "autonomy problem," stems from agents' ability to execute multi-step objectives, interact with APIs, and modify enterprise data independently without constant human intervention. Unlike standard software, agentic AI introduces dynamic risks such as prompt injection, excessive agency, and "logic hijacking," where an agent might be manipulated into performing unintended high-privilege actions. Consequently, security teams must move beyond static identity management and perimeter defense toward a runtime-centric strategy focused on continuous behavioral validation. A new security playbook for this era emphasizes "least privilege" for AI entities, ensuring agents only possess the temporary permissions necessary for a specific task. Furthermore, implementing robust observability and "Human-in-the-Loop" (HITL) checkpoints is critical for high-stakes decision-making. By treating AI agents as digital employees rather than simple tools, organizations can better manage the expanded attack surface. Ultimately, the goal is to balance the massive operational scale offered by autonomous systems with a governance framework that prioritizes transparency, real-time monitoring, and rigorous sandboxing to prevent self-directed machine speed from becoming a liability.


How indirect prompt injection attacks on AI work - and 6 ways to shut them down

Indirect prompt injection attacks represent a critical security vulnerability for Large Language Models (LLMs) that process external data, such as web content, emails, or documents. Unlike direct injections, where a user intentionally feeds malicious commands to a chatbot, indirect attacks occur when hackers hide instructions within third-party data that the AI is likely to retrieve. When the LLM parses this "poisoned" content, it may unknowingly execute the hidden commands, leading to serious risks like data exfiltration, the spread of phishing links, or unauthorized system overrides. For instance, a malicious website could contain hidden text telling an AI summarizer to ignore its safety protocols and send sensitive user information to a remote server. To mitigate these evolving threats, organizations are adopting multi-layered defense strategies, including rigorous input and output sanitization, human-in-the-loop oversight, and the principle of least privilege for AI agents. Major tech companies like Google, Microsoft, and OpenAI are also utilizing automated red-teaming and specialized machine learning classifiers to detect and block these subtle manipulations. For end-users, staying safe involves limiting the permissions granted to AI tools, treating AI-generated summaries with skepticism, and closely monitoring for any suspicious behavior that suggests the model has been compromised.


Advanced Middleware Architecture For Secure, Auditable, and Reliable Data Exchange Across Systems

The article "Advanced Middleware Architecture For Secure, Auditable, and Reliable Data Exchange Across Systems" by Abhijit Roy introduces a high-performance framework designed to bridge the critical gap between security, auditability, and efficiency in distributed environments. Utilizing a layered architecture built on Python and FastAPI, the proposed system integrates JWT-based stateless authentication with cryptographic integrity checks—such as SHA-256 hashing and HMAC signatures—to ensure non-repudiation and end-to-end traceability. By employing asynchronous message processing and standardized Pydantic data models, the middleware achieves a 100% transaction success rate and supports over 25 concurrent users, significantly outperforming legacy systems. Key results include a throughput of 6.8 messages per second and an average latency of 2.69 ms, with security overhead minimized to just 0.2 ms. This structured workflow facilitates seamless interoperability between heterogeneous platforms, making it highly suitable for mission-critical applications in sectors like healthcare, finance, and industrial IoT. The framework not only enforces consistent data validation and type safety but also enhances compliance efficiency through extensive logging and rapid audit retrieval times. Ultimately, the study demonstrates that robust security and detailed audit trails can be maintained without compromising system performance or scalability in complex multi-cloud or containerized settings.


The Performance Delta: Balancing Transaction And Transformation

Alexandra Zanela’s article exploring "The Performance Delta" emphasizes the critical necessity of balancing transactional and transformational leadership behaviors rather than viewing them as mutually exclusive personality traits. Transactional leadership serves as a vital foundation, providing organizational stability and psychological safety by establishing clear expectations, measurable goals, and contingent rewards. However, while transactions ensure tasks are fulfilled, they rarely inspire innovation. This is where transformational leadership—driven by the "four I’s" of idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration—triggers the "augmentation effect." This effect creates a performance delta where effectiveness is multiplied rather than merely added, fostering employee growth, extra-role effort, and reduced burnout. As artificial intelligence increasingly automates the execution of routine transactional tasks like KPI monitoring and resource allocation, the role of the modern leader is shifting. Leaders are now tasked with designing the transactional frameworks while dedicating their freed capacity to human-centric transformational actions that AI cannot replicate, such as professional coaching and ethical vision-setting. Ultimately, thriving in the modern era requires leaders to master both modes, strategically toggling between them to maximize their team’s collective potential and successfully navigate profound organizational changes.


Digital Twins Could Be the Future of Proactive Cybersecurity

Digital twins are revolutionizing cybersecurity by providing dynamic, high-fidelity virtual replicas of IT, OT, and IoT infrastructures. According to the article, these "cyber sandboxes" enable organizations to transition from reactive defense to proactive, rehearsal-based strategies. By simulating sophisticated threats like ransomware campaigns and zero-day exploits within controlled environments, security teams can identify vulnerabilities and analyze the "blast radius" of potential breaches without risking production systems. The technical integration of AI further enhances these models, contributing to significant operational improvements, such as a 33% reduction in breach detection times and an 80% decrease in mean time to resolution. Beyond threat modeling, digital twins facilitate more effective network management and physical security optimization, allowing for the pre-deployment testing of firewall rules and access controls. This technology supports the "shift-left" and "shift-right" paradigms, ensuring security is embedded throughout the entire system lifecycle. Despite challenges regarding data integrity and implementation costs, the strategic adoption of digital twins—currently explored by 70% of C-suite executives—represents a transformative shift toward organizational resilience. By leveraging these real-time simulations, enterprises can validate security postures and implement targeted mitigation strategies, ultimately staying ahead of increasingly automated and stealthy cyberattackers in a complex digital landscape.


How to Manage Operations in DevOps Using Modern Technology

Managing operations in modern DevOps environments requires shifting from manual, queue-based workflows to a streamlined model focused on automation, visibility, and developer enablement. According to the article, modern operations encompass not just infrastructure and deployments but also security, compliance, and cost visibility. To handle these complexities, teams should prioritize automating repetitive tasks and codifying changes through Infrastructure as Code and policy-as-code tools like Open Policy Agent. These automated guardrails ensure consistency and compliance without hindering development speed. Furthermore, the strategic integration of Artificial Intelligence and AIOps can significantly reduce operational toil by identifying anomalies and grouping alerts, though humans must remain the final decision-makers regarding critical reliability. Observability tools provide deeper insights than traditional monitoring by correlating metrics, logs, and traces to diagnose system health in real-time. Perhaps most crucially, the article advocates for the creation of self-service platforms and internal developer portals, which empower engineers to manage their own services while maintaining strict operational standards. By embedding security into daily workflows and using data-driven metrics to track progress, organizations can transform their operations teams from bottlenecks into enablers of innovation. Ultimately, modern technology simplifies management by fostering a culture where the best path is also the easiest one for teams to follow.


Your Data Strategy Isn’t Ready for 2026’s AI, and Neither Is Anyone Else’s

The article argues that most current data strategies are woefully inadequate for the AI landscape expected by 2026. While organizations are currently fixated on basic Generative AI, they are failing to prepare for the rise of "agentic AI"—autonomous systems that require seamless, real-time data access rather than static reports. The central issue is that legacy architectures were designed primarily for human consumption, featuring siloed structures and slow governance processes that cannot support the high-velocity demands of sophisticated machine learning models. To bridge this gap, companies must prioritize "data liquidity" and shift toward AI-native infrastructures. This transformation requires moving away from traditional dashboards and investing in active metadata management, robust data observability, and automated quality controls. By 2026, the competitive divide will be defined by an organization’s ability to feed autonomous agents with high-fidelity, interconnected information. Consequently, businesses must stop viewing data as a passive asset and start treating it as a dynamic, scalable engine for automated decision-making. Failing to modernize these foundations now will leave enterprises unable to leverage the next generation of intelligence, rendering their current AI initiatives obsolete as the technology evolves into more complex, independent operational systems.


Agentic AI to autonomous enterprises: Are businesses ready to hand over decision-making?

The article by Abhishek Agarwal explores the transformative shift from traditional analytical AI to "agentic" systems, which are capable of planning and executing multi-step operational tasks without constant human intervention. Unlike previous AI iterations that merely provided insights for human review, agentic AI can independently manage complex workflows such as supplier selection, inventory management, and customer support. While the business case for these autonomous enterprises is compelling due to gains in speed, scalability, and consistency, the transition presents significant challenges regarding governance and accountability. Organizations must grapple with who is responsible for errors and whether their existing data infrastructure is mature enough to support reliable, large-scale decision-making. The debate over "human-in-the-loop" oversight remains central, with experts suggesting a domain-specific strategy where autonomy is reserved for well-defined, low-risk areas. Ultimately, the author emphasizes that becoming an autonomous enterprise is a strategic journey rather than a race. Success depends on building robust governance frameworks and ensuring high data quality to avoid accountability crises. Rushing into agentic AI prematurely could jeopardize long-term progress, making a thoughtful, honest assessment of readiness essential for any business aiming to leverage these powerful technologies for a sustainable competitive advantage in the modern digital landscape.


When Elite Cyber Teams Can’t Crack Web Security

The article "When Elite Cyber Teams Can’t Crack Web Security" by Jacob Krell explores the significant disparity between theoretical security credentials and practical defensive capabilities. Drawing from Hack The Box’s 2025 Global Cyber Skills Benchmark, which tested nearly 800 corporate security teams, Krell reveals a troubling reality: only 21.1% of these elite teams successfully identified and mitigated common web vulnerabilities. This performance gap persists across highly regulated sectors like finance and healthcare, suggesting that clean compliance audits and professional certifications often provide a false sense of security. The report highlights a "Certification Paradox," where industry-standard exams prioritize knowledge retention over the applied skills necessary to thwart real-world attacks. Furthermore, the abysmal 18.7% solve rate for secure coding challenges exposes the "Shift Left" movement as largely aspirational, with many organizations automating pipelines without cultivating security competency among developers. To address these systemic failures, Krell argues that businesses must move beyond "security theater" by implementing performance-based validations and continuous hands-on training. Ultimately, true resilience requires embedding security as a core craft within development teams rather than treating it as an external compliance checkbox, as attackers exploit practical skill gaps that tools and credentials alone cannot bridge.

Daily Tech Digest - January 01, 2026


Quote for the day:

"It always seems impossible until it’s done." -- Nelson Mandela



Why data trust is the missing link in digital transformation

Data trust is often framed as a technical issue, delegated to IT or data teams. In reality, it is a business capability with direct implications for growth, risk, and reputation. Trusted data enables organisations to: Confidently automate customer and operational workflows; Personalise experiences without introducing errors; Improve forecasting and performance reporting; and Reduce operational rework and exception handling When data cannot be trusted, leaders are forced to rely on manual checks, conservative assumptions, and duplicated processes. This increases cost and slows decision-making - the opposite of what digital transformation aims to achieve. .... Establishing data trust is not a one-time project. It requires a shift in mindset across the organisation. Data quality should be viewed as a shared responsibility, supported by the right processes and tools. Leading organisations embed data validation into their digital workflows, measure data quality as part of system health, and treat trusted data as a strategic asset. Over time, this creates a culture where decisions are made with confidence and transformation initiatives are more likely to succeed. ... Digital transformation is ultimately about enabling better decisions, faster execution, and stronger customer relationships. None of these goals can be achieved without trusted data. As organisations continue to modernise their platforms and processes, data quality should be treated as core infrastructure, not an afterthought. 


Health Data Privacy, Cyber Regs: What to Watch in 2026

When federal regulators hesitate, states often jump into filing privacy and security gaps involving health data. That includes mandates in New York to shore up cybersecurity at certain hospitals (see: New York Hospitals Are Facing Tougher Cyber Rules Than HIPAA). Also worth watching is the New York Health Information Privacy Act, Greene said. "It was passed by both New York legislative chambers in January but has not yet been formally submitted to the governor for signature, with lobbying efforts underway to amend it." "In its most recent version, it would be the toughest health privacy law in the country in many respects, including a controversial prohibition on obtaining consents for secondary uses of data until at least 24 hours after an individual creates an account or first uses the requested product or service," Greene said.  ... Greene predicted HIPAA resolution agreements and civil monetary penalties will continue much as they have in years past, with one to two dozen such cases next year. HHS has recently indicated that it intends to begin enforcing the Information Blocking Rule. "The primary target will be health IT developers," Greene said. "I expect that there are less information blocking issues with health information networks and believe that the statute and regulation's knowledge standard makes it more challenging to enforce against healthcare providers because the government must prove that a healthcare provider knew its practice to be unreasonable."


From integration pain to partnership gain: How collaboration strengthens cybersecurity

When collaborators leverage data in specific cybersecurity work, they unlock several valuable benefits, especially since no organization has complete insight into every possible threat. A shared, data-driven cybersecurity framework can offer both sides a better understanding of existing and emerging threats that could undermine one or both collaborators. Data-driven collaboration also enables partners to become more proactive in their cybersecurity posture. Coordinated data can give business partners insights into where there’s greater exposure for a cyberattack, allowing partners to work together with data-backed guidance on how to better prepare. ... The Vested model — an innovative approach based on research from the University of Tennessee — focuses on shared goals and outcomes rather than traditional transactional buyer and seller agreements. Both companies agreed on a specific set of KPIs they could use to measure the health of the partnership and keep their security goals on track, allowing them to continue to adapt cybersecurity initiatives as needs and threats evolve. “You have to build, maintain and exercise the right partnerships with business units and shared services across the enterprise so continuity plans identify the issue quickly, deploy appropriate mitigations, and ultimately restore client and business services as quickly as possible,” says Royce Curtin, IBM’s former VP of corporate security.


AI governance: A risk and audit perspective on responsible AI adoption

AI governance refers to the policies, procedures, and oversight mechanisms that guide how AI systems are developed, deployed, and monitored. It ensures that AI aligns with business objectives, complies with applicable laws, and operates in a way that is ethical and transparent. Regulatory scrutiny is increasing. The EU AI Act is setting a precedent for global standards, and U.S. agencies are signaling more aggressive enforcement, particularly in sectors like healthcare, finance, and employment. Organizations are expected to demonstrate accountability in how AI systems make decisions, manage data, and interact with users. Beyond regulation, there is growing pressure from customers, employees, and investors. ... Audit teams also help boards and audit committees understand the risks associated with AI. Their work supports transparency and builds trust with regulators and stakeholders. As AI becomes more embedded in business operations, internal audit must expand its scope to include model governance, data lineage, and ethical risk. ... Organizations that treat AI as a strategic risk are better positioned to scale it responsibly. Risk and internal audit teams have a central role in ensuring that AI systems are secure, compliant, and aligned with business goals. Citrin Cooperman helps organizations navigate AI adoption with confidence by combining deep risk expertise, practical governance frameworks, and advanced technology solutions that support secure, scalable, and compliant growth.


Six data shifts that will shape enterprise AI in 2026

While RAG won't entirely disappear in 2026, one approach that will likely surpass it in terms of usage for agentic AI is contextual memory, also known as agentic or long-context memory. This technology enables LLMs to store and access pertinent information over extended periods. Multiple such systems emerged over the course of 2025 including Hindsight, A-MEM framework, General Agentic Memory (GAM), LangMem, and Memobase. RAG will remain useful for static data, but agentic memory is critical for adaptive assistants and agentic AI workflows that must learn from feedback, maintain state, and adapt over time. In 2026, contextual memory will no longer be a novel technique; it will become table stakes for many operational agentic AI deployments. ... In 2025, we saw numerous innovations, like the notion that an AI is able to parse data from an unstructured data source like a PDF. That's a capability that has existed for several years, but proved harder to operationalize at scale than many assumed. Databricks now has an advanced parser, and other vendors, including Mistral, have emerged with their own improvements. The same is true with natural language to SQL translation. While some might have assumed that was a solved problem, it's one that continued to see innovation in 2025 and will see more in 2026. It's critical for enterprises to stay vigilant in 2026. 


Communicating AI Risk to the Board With Confidence

Most board members can comprehend that AI will drive growth. What they fail to grasp concretely is how the technology introduces a massive amount of exposure. This predicament is typically a result of how information is presented. Security and risk managers (SRMs) often describe AI incidents in the vocabulary of adversarial inputs, model drift, and architecture choices, which matter deeply but rarely answer the questions that directors tackle during their meetings. High-level stakeholders, in reality, are concerned with issues such as revenue protection, operational continuity, and competitive differentiation, creating a gap that requires more than translating acronyms. ... Traditional discussions about technology risk revolve around the triad of confidentiality, integrity, and availability. Boards know these categories well, and over the past few decades, they have learned that cybersecurity failures directly affect the business along these lines. GenAI has formidably challenged this familiar structure, with its associated risks not limited to one of these three domains.  ... When the conversation begins with the business consequence, though, the relevance is immediate. The most effective approach involves replacing those mechanics that mean so much to the internal teams with the strategic information boards need to operate. These details open a path for meaningful conversations that encourage directors to think through the implications and make more informed decisions. 


The six biggest security challenges coming in 2026

For many organizations, cybersecurity and resilience is a compliance exercise. But it must evolve into “a core intentional cybersecurity capability”, says Dimitriadis. “In 2026, organizations will need to build the capacity to anticipate regulatory changes, understand their strategic implications, and embed them into long-term planning.” ... Attackers are leveraging AI to create convincing email templates and fake websites “almost indistinguishable” from real ones – and without the common warning signs employees are trained to identify, says Mitchell. AI is also being used in vishing attacks, with deepfakes making it easier to clone the voice of high-ranking company executives to trick victims. In 2026, there will be more attacks utilizing realistic voice cloning and high-quality video deepfakes, says Joshua Walsh ... There is a current shift towards agentic AI that can take real-world actions, such as adjusting configurations, interacting with APIs, booking services and initiating financial tasks. This can increase efficiency, but it can also lead to unsafe decisions made at speed, says rradar’s Walsh. An agent told to "optimize performance" might disable logging or bypass authentication because it views security controls as delays, he suggests. Prompt injection is a hidden issue to look out for, he adds. “If a threat actor slips hidden instructions into data that the agent consumes, they can make it run actions on internal systems without anyone realising.” 


5 Changes That Will Define AI-Native Enterprises in 2026

As enterprises scale to multi-agent systems, the engineering focus will shift from creating prompts to architecting context. Multi-agent workflows rapidly expand requirements with tool definitions, conversation history, and data from multiple sources. This creates two challenges: context windows fill up, and models suffer from “context rot,” forgetting information buried in lengthy prompts. By mid-2026, context engineering will emerge as a distinct discipline with dedicated teams and specialized infrastructure, serving the minimal but complete information agents need. The best context engineers will understand both LLM constraints and their business domain’s semantic structure. ... Enterprises are realizing that AI agents need both data and meaning. Companies that spent years perfecting data lakes are already finding those assets are insufficient. AI can retrieve data, but without semantic context, it can’t interpret action or intent. That’s why teams will move beyond vector search toward building knowledge graphs, ontologies, and metadata-driven maps that teach AI how their business works. The battleground will shift from owning raw data to owning its interpretation. Off-the-shelf agents will struggle in complex domains because semantics are domain-specific. ... The AI-native enterprise looks very different from what came before. It serves machine customers, treats context as critical infrastructure, and has the tools to escape decades of technical debt. 


Microsegmentation: the unsung hero of cybersecurity (and why it should be your top priority)

Think of your network like an apartment building. You’ve got a locked front door — that’s your perimeter. But once someone gets inside, there’s no front desk checking IDs, no elevator security and the same outdated lock on every unit. An intruder can roam freely, entering any apartment they choose. Microsegmentation is the internal security system. It’s the keycard for the elevator, the camera in the hallway, the unique lock on your door. It’s what stops one compromised device from becoming a full-blown breach. ... OT environments are different. They’re often built on legacy systems, lack patching and operate in real-time. You can’t just drop an agent or reroute traffic without risking downtime. That’s why agencies need solutions that are agentless, software-defined and tailored to the unique constraints of OT. Otherwise, you’re only protecting half the house. ... Microsegmentation also plays a critical role in enabling zero trust. It enforces least privilege at the network level. It’s not just about who gets in; it’s about what they can touch once they’re inside. For agencies building toward zero trust, microsegmentation isn’t an afterthought. It’s a foundation. Despite all this, microsegmentation remains underutilized. According to TechTarget’s Enterprise Strategy Group, only 36% of organizations use it today, even though it’s foundational to zero trust. Why? Because 28% believe it’s too complex. But that perception is often rooted in outdated tooling.


Beyond Chatbots: What Makes an AI Agent Truly Autonomous

Autonomous agents must retain and use context over time. Memory enables an agent to recall previous interactions, data, and decisions—allowing it to continue a process seamlessly without restarting each time. That persistence turns single exchanges into long-running workflows. In enterprise settings, it means an agent can track a contract review across multiple sessions or follow a complex support case without losing context. ... Traditional automation runs on fixed, rule-based workflows. Autonomous agents build and revise their own plans on the fly, adapting to results and feedback. This ability to plan dynamically—think, act, observe, and adjust—is what differentiates agentic AI from robotic process automation (RPA) or prompt chaining. In practice, an agent might be tasked with analyzing a set of contracts, then automatically decide how to proceed: extract key terms, assess risk, and summarize results. ... Resilient agents are designed to operate across models, retry failed actions, or launch sub-agents to handle specialized work—all within defined guardrails. That adaptability is what separates a proof of concept (POC) from a production-ready system. ... All the reasoning in the world means little if an agent can’t execute. Tools are what translate intelligence into impact. They’re the functions, APIs, and integrations that allow agents to interact with business systems—searching systems, generating documents, updating records, or triggering workflows across CRMs, ERPs, and analytics platforms.

Daily Tech Digest - December 31, 2025


Quote for the day:

“To be successful you need friends and to be very successful you need enemies.” -- Sidney Sheldon



AI agents to outnumber humans, warns Token Security

Many agents now run in controlled, non-production environments. Token Security predicts that organisations will soon connect them directly to live systems. The company says this will force enterprises to manage agent permissions and lifecycle controls more actively. It also expects new processes for assigning accountability when an autonomous system carries out an action on behalf of a team or individual. Apelblat believes established compliance structures will not cope with this change in the workforce. Traditional frameworks assume that humans sit at the centre of most workflows. ... "Despite innovation in agentic AI, enterprises will enter 2026 still relying on static API keys and long-term credentials. These legacy mechanisms will quietly weaken agent identity integrity, creating fragile trust chains that attackers can easily exploit," said Shlomo. Shlomo also predicts a reversal of some progress made in reducing secrets stored on endpoints. Many companies have moved staff onto single sign-on and centralised identity systems. He believes poor management of local Model Context Protocol servers will now cause a resurgence of cleartext service credentials on employee devices. ... "The industry is shifting from breaches caused by human identity failures to breaches rooted in AI agent identity compromise. As agents become operational backbones, attacks targeting their tokens, personas, and delegated authority will define the next wave of high-impact incidents," said Shlomo.


AI killed the cloud-first strategy: Why hybrid computing is the only way forward now

Existing infrastructures now configured with cloud services simply may not be ready for emerging AI demands, a recent analysis from Deloitte warned. "The infrastructure built for cloud-first strategies can't handle AI economics," the report, penned by a team of Deloitte analysts led by Nicholas Merizzi, said. "Processes designed for human workers don't work for agents. Security models built for perimeter defense don't protect against threats operating at machine speed. IT operating models built for service delivery don't drive business transformation." ... AI token costs have dropped 280-fold in two years, they observe -- yet "some enterprises are seeing monthly bills in the tens of millions." The overuse of cloud-based AI services "can lead to frequent API hits and escalating costs." There's even a tipping point in which on-premises deployments make more sense. ... AI often demands near-zero latency to deliver actions. "Applications requiring response times of 10 milliseconds or below cannot tolerate the inherent delays of cloud-based processing," the Deloitte authors point out. ... Resilience is also part of the pressing requirements for fully functional AI processes. These include "mission-critical tasks that cannot be interrupted require on-premises infrastructure in case connection to the cloud is interrupted," the analysts state. ... Whether employing cloud or on-premises systems, companies should always take direct responsibility for security and monitoring, Rana said.


Agentic AI breaks out of the lab and forces enterprises to grow up

The first major stride is the shift from improvisation to repeatable patterns. Early agentic projects were nearly all “glue code”, prompt chains stitched together with brittle tool wiring and homegrown memory hacks. Every workflow was a snowflake. But now, mature organizations are creating shared agentic primitives that development teams can reuse. ... The second major stride is the rise of enterprise grade governance and safety frameworks designed specifically for agentic workflows. Traditional AI governance wasn’t built for systems that take autonomous actions, call tools, modify infrastructure, and reason over long sequences. Enterprises are now treating governance as a first class engineering challenge. ... The third stride is a philosophical and architectural shift in where enterprises choose to invest. Many companies spent months crafting custom planning modules, memory layers, tool registries, and agent routers, believing these would become strategic assets. But experience is proving otherwise. ... The fourth and most important stride is the move toward building durable components that will matter long after orchestration layers become commoditized. Enterprises increasingly understand that their competitive advantage will come from institutional intelligence: domain specific tool schemas, curated datasets, validated decision policies, and deep integration with their existing SDLC, incident response, and SOC workflows.


Businesses have always complained about compliance obligations. Could they automate themselves out of it?

Compliance can often seem like an exercise in Kafkaesque absurdity. Nutanix’s director of systems engineering, James Sturrock, says it’s not uncommon for two in-house experts to have differing opinions on how to solve the same thorny regulatory conundrum. That isn’t even getting into how competing jurisdictions might view the problem. ... Equally important are potential unknowns such as contaminated soil or sewers that don’t appear on maps or where data is incomplete. These don’t just represent potential holdups to work – and resulting penalties – but represent further risks in themselves. ... Automating alerts or making it easier to spot compliance headaches early is one thing. But what might AI contribute toward simplifying more complex compliance conundrums, like those encountered by the financial services industry? In that sector, explains Pegasystems’ global banking industry lead Steve Morgan, such models have to be readily explainable not only to customers, but internal audit teams and regulators, too. Even then, it’s already clear that certain types of AI applications aren’t completely suitable for insertion into compliance workflows – most notably, GenAI. “Unless you have a very special model that’s trained” on a specific use case, says Morgan, the answers that such models provide compliance experts just aren’t predictable or accurate enough to meet the high standards demanded of banks.


Security coverage is falling behind the way attackers behave

Cybercriminals keep tweaking their procedures, trying out new techniques, and shifting tactics across campaigns. Coverage that worked yesterday may miss how those behaviors appear today. ... Activity expanded from ransomware driven campaigns into espionage aligned behavior, with targets including telecom, energy, military, and government organizations. Researchers tracked changes in tooling, credential access, and detection evasion, including expanded use of advanced techniques against cloud and enterprise environments. ... The report describes zero-day use as commoditized. Exploits move quickly from discovery into active abuse. This compresses defender response windows from weeks into days. Early detection depends on identifying behavior tied to exploitation rather than waiting for vulnerability disclosures or patches. ... Identity became a primary target. Campaigns focused on SaaS access, cloud administration, and single sign-on abuse. Luna Moth evolved from simple callback phishing into multi-channel operations combining voice, email, and infrastructure control. ... One theme that runs through the findings is the presence of defensive gaps at the procedure level. Many organizations track techniques and tools, while execution details that signal intent receive less attention. The research connects observed procedures directly to detection and prevention controls, showing where coverage holds and where it breaks down.


Widely Used Malicious Extensions Steal ChatGPT, DeepSeek Conversations

Stolen browser history data includes not only the complete URLs from all Chrome tabs, but also search queries containing sensitive keywords and research topics, URL parameters that could contain session tokens, user IDs, and authentication data, and internal corporate URLs revealing organizational structure and tools. ... Extensions are used to improve and customize users’ browsing experience. More people are using browsers, which can expand the attack surface of the individual and the companies they work for, according to security experts. “Browser extensions aren’t niche tools anymore; they’re deeply embedded in how people work,” Grip Security researchers Ben Robertson and Guy Katzir wrote earlier this year. “But that convenience comes with risk, especially when security teams don’t have visibility into what’s installed, what it can access, or how it behaves after login. The attack surface has shifted. And while endpoint agents and network controls still matter, they can’t see what’s happening inside the browser. That’s where threats like token hijacking and data leakage quietly take shape.” ... In the most recent case, the hackers created malicious extensions that impersonated a legitimate browser created by a company called AITOPIA. The extension puts a sidebar onto any website to give users the ability to chat with popular AI LLMs, OX Security’s Siman and Bustan wrote. 


2026: The year we stop trusting any single cloud

The real story is not that cloud platforms failed; it’s that enterprises quietly allowed those platforms to become single points of failure for entire business models. In 2025, many organizations discovered that their digital transformation had traded physical single points of failure for logical ones in the form of a single region, a single provider, or even a single managed database. When a hyperscaler region had trouble, companies learned the hard way that “highly available within a region” is not the same as “business resilient.” What caught even seasoned teams off guard was the hidden dependency chain. ... Expect to see targeted workload shifts that move critical customer-facing systems from single-region to multi-region or cross-cloud setups, re-architecting data platforms with replicated storage and active-active databases (meaning that we have two running, with one backing up the other). Also, relocating some systems to private or colocation environments based on risk. ... In 2026, smart enterprises will start asking their vendors the hard questions. Which regions and providers do you use? Do you have a tested failover strategy across regions or providers? What happens to my data and SLAs if your primary cloud has a regional incident? Many will diversify not just across hyperscalers, but across SaaS and managed services, deliberately avoiding over-concentration on any provider that cannot demonstrate meaningful redundancy.


AI Is Forcing Businesses To Rethink Their Data Strategies

One of the biggest misconceptions about cloud repatriation is that it’s a simple reversal of a cloud migration. In reality, AI workloads frequently exceed the capabilities of existing on-prem infrastructure. “Servers that were procured three years ago may not be able to handle what these applications require,” Brodsky says. As a result, repatriation decisions often trigger broader modernization efforts, including new hardware, increased power and cooling capacity, and redesigned architectures. Before making those investments, organizations need a clear understanding of their current environment and future requirements. ... “You have to evaluate whether your on-prem environment can actually ingest and protect what you’re bringing down from the cloud,” he says. Timelines and approaches vary. Some organizations opt for high-level assessments to guide strategy, while others pursue deeper technical workshops or phased transitions based on business priorities and service-level agreements. Despite the renewed interest in on-prem infrastructure, cloud repatriation doesn’t signal a retreat from cloud computing. Instead, it reflects a more mature understanding of hybrid IT. “Five years ago, we had daily conversations with customers who wanted to be 100% cloud,” Brodsky says. “Very few actually got there.” Today, most organizations operate hybrid environments by necessity, balancing cloud flexibility with on-prem performance, cost predictability and governance. 


AI-Driven CLM: The New Standard for Enterprise Contracts

Most enterprises still rely on fragmented approaches to contract management. Agreements live in email threads, local folders, and legacy systems that do not communicate with each other. Legal teams spend hours searching for documents that should be accessible in seconds. This disorganization creates real business consequences. Contracts expire without renewal. Compliance obligations go untracked. Revenue recognition gets delayed because finance cannot locate the signed agreement. ... AI-driven contract lifecycle management takes a fundamentally different approach. Instead of treating contracts as paperwork to be stored, modern CLM platforms treat them as data to be analyzed, monitored, and optimized. The shift starts with intelligent data extraction. When a contract enters the system, AI automatically identifies and extracts key terms, dates, obligations, and clauses. No more manual data entry. No more inconsistent tagging. The system understands what it is reading and organizes information accordingly. ... Every contract carries risk. Hidden indemnification clauses, unfavorable liability terms, and non-standard language can expose organizations to significant liability. Catching these issues manually requires experienced legal reviewers and substantial time. AI changes this equation. Modern CLM platforms scan agreements against predefined playbooks and flag deviations instantly. 


How to Do Enterprise Autonomy Right

Autonomous enterprise agents are architected differently. They integrate language understanding, taking calls, planning and orchestration into a closed loop. This allows the agent to assess goals, interpret inputs, break them down into tasks and execute across multiple systems. It can adapt when conditions change and learn from feedback over time. The shift from automation to autonomy requires moving from flow-based design to intent-based execution. For enterprises, this means embedding capabilities that allow agents to sense, decide and act in real time. ... It's non-negotiable for agents to function only within clearly defined domains, with visibility restricted to authorized data and systems. Second, their decision-making logic should be transparent and traceable, ensuring that every outcome can be audited and explained. Third, controls must exist to intervene in real time, whether to pause, override or shut down the agent entirely. Lastly, it is crucial for agents to be built to fail safely. If context shifts beyond their training, the agent must escalate or defer. This is not a fallback but rather is a core design principle that reinforces responsible AI posture. ... The line between productive autonomy and dangerous overgeneralization is best drawn where explainability ends. If a system's actions can no longer be explained in business terms, it is no longer serving the enterprise. Control is central to it and autonomy should expand only when safeguards, governance and organizational readiness evolve alongside it.

Daily Tech Digest - December 27, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



Leading In The Age Of AI: Five Human Competencies Every Modern Leader Needs

Leaders are surrounded by data, metrics and algorithmic recommendations, but decision quality depends on interpretation rather than volume. Insight is the ability to turn information and diverse perspectives into clarity. It requires curiosity, patience and the humility to question assumptions. Leaders who demonstrate this capability articulate complex issues clearly, invite dissent before deciding and translate analysis into meaningful direction. ... Integration is the capability to design environments where human creativity and machine intelligence reinforce one another. Leaders strong in this capability align technology with purpose and culture, encourage experimentation and ensure that tools enhance human capability rather than replacing reflection and judgment. The aim is capability at scale, not efficiency at any cost. ... Inspiration is the ability to energize people by helping them see what is possible and how their work contributes to a larger purpose. It is grounded optimism rather than polished enthusiasm. Leaders who inspire use story, clarity and authenticity to create shared commitment rather than simple compliance. When purpose becomes personal, contribution follows. ... It is not only about speed or quarterly numbers. It is about sustainable value for people, organizations and society. Leaders strong in this capability balance performance with well-being and growth, adapt strategy based on real feedback and design systems that strengthen capacity over time instead of exhausting it.


Big shifts that will reshape work in 2026

We’re moving into a new chapter where real skills and what people can actually do matter more than degrees or job titles. In 2026, this shift will become the standard across organisations in APAC. Instead of just looking for certificates, employers are now keen to find people who can show adaptability, pick up new things quickly, and prove their expertise through action. ... as helpful as AI can be, there’s a catch. Technology can make things faster and smarter, but it’s not a substitute for the human touch—creativity, empathy, and making the right call when it matters. The real test for leaders will be making sure AI helps people do their best work, not strip away what makes us human. That means setting clear rules for how AI is used, helping employees build digital skills, and keeping trust at the centre of it all. Organisations that succeed will strike a balance: leveraging AI’s analytical power to unlock efficiencies, while empowering people to focus on the relational, imaginative, and moral dimensions of work. ... Employee wellbeing is set to become the foundation of the future of work. No longer a peripheral benefit or a box to check, wellbeing will be woven into organisational culture, shaping every aspect of the employee experience. ... Purpose is emerging as the new currency of talent attraction and retention, particularly for Gen Z and millennials, who are steadfast in their desire to work for organisations that reflect their personal values. 


How AI could close the education inequality gap - or widen it

On one side are those who say that AI tools will never be able to replace the teaching offered by humans. On the other side are those who insist that access to AI-powered tutoring is better than no access to tutoring at all. The one thing that can be agreed on across the board is that students can benefit from tutoring, and fair access remains a major challenge -- one that AI may be able to smooth over. "The best human tutors will remain ahead of AI for a long time yet to come, but do most people have access to tutors outside of class?" said Mollick. To evaluate educational tools, Mollick uses what he calls the "BAH" test, which measures whether a tool is better than the best available human a student can realistically access. ... AI tools that function like a tutor could also help students who don't have the resources to access a human tutor. A recent Brookings Institution report found that the largest barrier to scaling effective tutoring programs is cost, estimating a requirement $1,000 to $3,000 per student annually for high-impact models. Because private tutoring often requires financial investment, it can drive disparities in educational achievement. Aly Murray experienced those disparities firsthand. Raised by a single mother who immigrated to the US from Cuba, Murray grew up as a low-income student and later recognized how transformative access to a human tutor could have been. 


Shift-Left Strategies for Cloud-Native and Serverless Architectures

The whole architectural framework of shift-left security depends on moving critical security practices earlier in the development lifecycle. Incorporating security in the development lifecycle should not be an afterthought. Within this context, teams are empowered to identify and eliminate risks at design time, build time, and during CI/CD — not after. These modern workloads are highly dynamic and interconnected, and a single mishap can trickle down across the entire environment. ... Serverless Functions can introduce issues if they run with excessive privileges. This can be addressed by simply embedding permissions checks early in the development lifecycle. A baseline of minimum required identity and access management (IAM) privileges should be enforced to keep development tight. Wildcards or broad permissions should be leveraged in this context. Also, it makes sense to use runtime permission boundary generation — otherwise, functions can be compromised without appropriate safeguards. ... In modern-day cloud environments, it is crucial that observability is considered a major priority. Shifting left within the context of observability means logs, metrics, traces, and alerts are integrated directly into the application from day one. AWS CloudWatch or DataDog metrics can be integrated into the application code so that developers can keep an eye on the critical behaviors of the application. 


Agentic AI and Autonomous Agents: The Dawn of Smarter Machines

At their core, agentic AI and autonomous agents rely on a few powerhouse components: planning, reasoning, acting, and tool integration. Planning is the blueprint phase the AI breaks a goal into subtasks, like mapping out a road trip with stops for gas and sights. Reasoning kicks in next, where it evaluates options using logic, past data, or even ethical guidelines (more on that later). Acting is the execution: interfacing with the real world via APIs, databases, or even physical robots. And tool integration?  ... Diving deeper, it’s worth comparing agentic AI to other paradigms to see why it’s a game-changer. Standalone LLMs, like basic GPT models, are fantastic for generating text but falter on execution — they can’t “do” things without external help. Agentic systems bridge that by embedding action loops. Multi-agent setups take it further: Imagine a team of specialized agents collaborating, one for research, another for analysis, like a virtual task force. ... Looking ahead, the future of agentic AI feels electric yet cautious. By 2030, I predict multi-agent collaborations becoming standard, with advancements in human-in-the-loop designs to mitigate ethics pitfalls — like ensuring transparency in decision-making or preventing job displacement. OpenAI’s push for standardized frameworks addresses this, but we must grapple with questions: Who owns the data agents learn from? How do we audit autonomous actions? 


Operationalizing Data Strategy with OKRs: From Vision to Execution

For any business, some of the most critical data-driven initiatives and priorities include risk mitigation, revenue growth, and customer experience. To drive more effectiveness and accuracy in such business functions, finding ways to blend the technical output and performance data with tangible business outcomes is important. You must also proactively assess the shortcomings and errors in your data strategy to identify and correct any misaligned priorities. ... OKRs can empower data teams to leverage analytics and data sources to deliver highly actionable, timely insights. Set measurable and time-bound objectives to ensure focus and drive tangible progress toward your goals by leveraging an OKR platform, creating visually appealing dashboards, and assigning accountability to employees. ... If your high-level vision is “to become a data-driven organization,” the most effective way to work toward it is to break it into specific and measurable objectives. More importantly, consider segmenting your core strategy into multiple use cases, like operations optimization, customer analytics, and regulatory compliance. With these easily trackable segments, improve your focus and enable your teams to deliver incremental value. ... By tying OKRs with processes like governance and quality, you can ensure that they become measurable and visible priorities, causing fewer incidents and building confidence in analytics-based projects and processes.


This tiny chip could change the future of quantum computing

At the heart of the technology are microwave-frequency vibrations that oscillate billions of times per second. These vibrations allow the chip to manipulate laser light with remarkable precision. By directly controlling the phase of a laser beam, the device can generate new laser frequencies that are both stable and efficient. This level of control is a key requirement not only for quantum computing, but also for emerging fields such as quantum sensing and quantum networking. ... The new device generates laser frequency shifts through efficient phase modulation while using about 80 times less microwave power than many existing commercial modulators. Lower power consumption means less heat, which allows more channels to be packed closely together, even onto a single chip. Taken together, these advantages transform the chip into a scalable system capable of coordinating the precise interactions atoms need to perform quantum calculations. ... The researchers are now working on fully integrated photonic circuits that combine frequency generation, filtering, and pulse shaping on a single chip. This effort moves the field closer to a complete, operational quantum photonic platform. Next, the team plans to partner with quantum computing companies to test these chips inside advanced trapped-ion and trapped-neutral-atom quantum computers.


The 5-Step Framework to Ensure AI Actually Frees Your Time Instead of Creating More Work

Success with AI isn’t measured by the number of automations you have deployed. True AI leverage is measured by the number of high-value tasks that can be executed without oversight from the business owner. ... Map what matters most — It’s critical to focus your energy on where it matters the most. Look through your processes to identify bottlenecks and repetitive decisions or tasks that don’t need your input. ... Design roles before rules — Figure out where you need human ownership in your processes. These will be activities that require traits like empathy, creative thinking and high-level strategy. Once the roles are established, you can build automation that supports those roles. ... Document before you delegate — Both humans and machines need clear direction. Be sure to document any processes, procedures, and SOPs before delegating or automating them. ... Automate boring and elevate brilliant — Your primary goal with automation is to free up your time for creating, strategy and building relationships. Of course, the reality is that not everything should be automated. ... Measure output, not inputs — Too many entrepreneurs spend their time focused on what their team and AI agents are doing and not what they are achieving. Intentional automation requires placing your focus on outputs to ensure the processes you have in place are working effectively, or where they can be improved. 


The next big IT security battle is all about privileged access

As the space matures, privileged access workflows will increasingly depend on adaptive authentication policies that validate identity and device posture in real time. Vendors that offer flexible passwordless frameworks and integrations with existing IAM and PAM systems will see increased market traction. This will mark a shift in the promised end of passwords, eliminating one of the most exploited attack vectors in privilege abuse and account takeovers. ... Instead of relying solely on human auditors or predefined rules, IAM/PAM solutions will use generative AI to summarize risky session activities, detect lateral movement indicators, and suggest remediations in real time. AI-assisted security will make privileged access oversight continuous and contextual, helping enterprises detect insider threats and compromised accounts faster than ever before. This will also move the industry toward autonomous access governance. ... Compromised privileged credentials will remain the single most direct path to data loss, and a sharp rise in targeted breaches, ransomware campaigns, and supply-chain intrusions involving administrative accounts will elevate IAM/PAM to a board-level concern in 2026. Enterprises will accelerate investments in vendor privileged access tools to mitigate risk from contractors, managed service providers, and external support staff.


Mentorship and Diversity: Shaping the Next Generation of Cyber Experts

For those considering a career in cybersecurity, Voight's advice is both practical and inspiring: follow your passion and embrace the industry's constant evolution. Whether you're starting in security operations or exploring niche areas like architecture and engineering, the key is to stay curious and committed to learning. As artificial intelligence and automation reshape the field, Voight remains optimistic, assuring that human expertise will always be essential, encouraging aspiring professionals to dive into a field brimming with opportunity, innovation, and the chance to make a meaningful impact. ... Cybersecurity is fascinating and offers many paths of entry. You don't necessarily need a specific academic program to get involved. The biggest piece is having a passion for it. The more you love learning about this industry, the better it will be for you in the long run. It's something you do because you love it. ... Sometimes, it's the people and teams you work with that make the job exciting. You want to be doing something new and exciting, something you can embrace and contribute to. Keep an open mind to all the different paths. There isn't one direct path, and not everyone will become a Chief Information Security Officer (CISO). Being a CISO may not be the role everyone imagines it to be when considering the responsibilities involved.

Daily Tech Digest - September 20, 2025


Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer


Five forces shaping the next wave of quantum innovation

Quantum computers are expected to solve problems currently intractable for even the world’s fastest supercomputers. Their core strengths — efficiently finding hidden patterns in complex datasets and navigating vast optimization challenges — will enable the design of novel drugs and materials, the creation of superior financial algorithms and open new frontiers in cryptography and cybersecurity. ... The quantum ecosystem now largely agrees that simply scaling up today’s computers, which suffer from significant noise and errors that prevent fault-tolerant operation, won’t unlock the most valuable commercial applications. The industry’s focus has shifted to quantum error correction as the key to building robust and scalable fault-tolerant machines. ... Most early quantum computing companies tried a full-stack approach. Now that the industry is maturing, a rich ecosystem of middle-of-the-stack players has emerged. This evolution allows companies to focus on what they do best and buy components and capabilities as needed, such as control systems from Quantum Machines and quantum software development from firms ... recent innovations in quantum networking technology have made a scale-out approach a serious contender. 


Post-Modern Ransomware: When Exfiltration Replaces Encryption

Exfiltration-first attacks have re-written the rules, with stolen data providing criminals with a faster, more reliable payday than the complex mechanics of encryption ever could. The threat of leaking data like financial records, intellectual property, and customer and employee details delivers instant leverage. Unlike encryption, if the victim stands firm and refuses to pay up, criminal groups can always sell their digital loot on the dark web or use it to fuel more targeted attacks. ... Phishing emails, once known for being riddled with tell-tale grammar and spelling mistakes, are now polished, personalized and delivered in perfect English. AI-powered deepfake voices and videos are providing convincing impersonations of executives or trusted colleagues that have defrauded companies for millions. At the same time, attackers are deploying custom chatbots to manage ransom negotiations across multiple victims simultaneously, applying pressure with the relentless efficiency of machines. ... Yet resilience is not simply a matter of dashboards and detection thresholds – it is equally about supporting those on the frontlines. Security leaders already working punishing hours under relentless scrutiny cannot be expected to withstand endless fatigue and a culture of blame without consequence. Organizations must also embed support for their teams into their response frameworks, from clear lines of communication and decompression time to wellbeing checks. 


The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time

The uncertainty is driving concern. “There's been a lot more talk around, ‘Should we be managing sovereign cloud, should we be using on-premises more, should we be relying on our non-North American public contractors?” said Tracy Woo, a principal analyst with researcher and advisory firm Forrester. Ditching a major public cloud provider over sovereignty concerns, however, is not a practical option. These providers often underpin expansive global workloads, so migrating to a new architecture would be time-consuming, costly, and complex. There also isn’t a simple direct switch that companies can make if they’re looking to avoid public cloud; sourcing alternatives must be done thoughtfully, not just in reaction to one challenge. ... “There's a nervousness around deployment of AI, and I think that nervousness comes from -- definitely in conversations with other CIOs -- not knowing the data,” said Bell. Although decoupling from the major cloud providers is impractical on many fronts, issues of sovereignty as well as cost could still push CIOs to embrace a more localized approach, Woo said. “People are realizing that we don't necessarily need all the bells and whistles of the public cloud providers, whether that's for latency or performance reasons, or whether it's for cost or whether that's for sovereignty reasons,” explained Woo. 


Enterprise AI enters the age of agency, but autonomy must be governed

Agentic AI systems don’t just predict or recommend, they act. These intelligent software agents operate with autonomy toward defined business goals, planning, learning, and executing across enterprise workflows. This is not the next version of traditional automation or static bots. It’s a fundamentally different operating paradigm, one that will shape the future of digital enterprises. ... For many enterprises, the last decade of AI investment has focused on surfacing insights: detecting fraud, forecasting demand, and predicting churn. These are valuable outcomes, but they still require humans or rigid automation to respond. Agentic AI closes that gap. These agents combine machine learning, contextual awareness, planning, and decision logic to take goal-directed action. They can process ambiguity, work across systems, resolve exceptions, and adapt over time. ... Agentic AI will not simply automate tasks. It will reshape how work is designed, measured, and managed. As autonomous agents take on operational responsibility, human teams will move toward supervision, exception resolution, and strategic oversight. New KPIs will emerge, not just around cost or cycle time, but around agent quality, business impact, and compliance resilience. This shift will also demand new talent models. Enterprises must upskill teams to manage AI systems, not just processes. 


Cybersecurity in smart cities under scrutiny

The digital transformation of public services involves “an accelerated convergence between IT and OT systems, as well as the massive incorporation of connected IoT devices,” she explains, which gives rise to challenges such as an expanding attack surface or the coexistence of obsolete infrastructure with modern ones, in addition to a lack of visibility and control over devices deployed by multiple providers. ... “According to the European Cyber ​​Security Organisation, 86% of European local governments with IoT deployments have suffered some security breach related to these devices,” she says. Accenture’s Domínguez adds that the challenge is to consider “the fragmentation of responsibilities between administrations, concessionaires, and third parties, which complicates cybersecurity governance and requires advanced coordination models.” De la Cuesta also emphasizes the siloed nature of project development, which significantly hinders the development of an active cybersecurity strategy. ... In the integration of new tools, despite Spain holding a leading position in areas such as 5G, “technology moves much faster than the government’s ability to react,” he says. “It’s not like a private company, which has a certain agility to make investments,” he explains. “Public administration is much slower. Budgets are different. Administrative procedures are extremely long. From the moment a project is first discussed until it is actually executed, many years pass.”


Your SDLC Has an Evil Twin — and AI Built It

Welcome to the shadow SDLC — the one your team built with AI when you weren't looking: It generates code, dependencies, configs, and even tests at machine speed, but without any of your governance, review processes, or security guardrails. ... It’s not just about insecure code sneaking into production, but rather about losing ownership of the very processes you’ve worked to streamline. Your “evil twin” SDLC comes with: Unknown provenance → You can’t always trace where AI-generated code or dependencies came from. Inconsistent reliability → AI may generate tests or configs that look fine but fail in production. Invisible vulnerabilities → Flaws that never hit a backlog because they bypass reviews entirely. ... AI assistants are now pulling in OSS dependencies you didn’t choose — sometimes outdated, sometimes insecure, sometimes flat-out malicious. While your team already uses hygiene tools like Dependabot or Renovate, they’re only table stakes that don’t provide governance. ... The “evil twin” of your SDLC isn’t going away. It’s already here, writing code, pulling dependencies, and shaping workflows. The question is whether you’ll treat it as an uncontrolled shadow pipeline — or bring it under the same governance and accountability as your human-led one. Because in today’s environment, you don’t just own the SDLC you designed. You also own the one AI is building — whether you control it or not.


'ShadowLeak' ChatGPT Attack Allows Hackers to Invisibly Steal Emails

Researchers at Radware realized the issue earlier this spring, when they figured out a way of stealing anything they wanted from Gmail users who integrate ChatGPT. Not only was their trick devilishly simple, but it left no trace on an end user's network — not even an iota of the suspicious Web traffic typical of data exfiltration attacks. As such, the user had no way of detecting the attack, let alone stopping it. ... To perform a ShadowLeak attack, attackers send an outwardly normal-looking email to their target. They surreptitiously embed code in the body of the message, in a format that the recipient will not notice — for example, in extremely tiny text, or white text on a white background. The code should be written in HTML, being standard for email and therefore less suspicious than other, more powerful languages would be. ... The malicious code can instruct the AI to communicate the contents of the victim's emails, or anything else the target has granted ChatGPT access to, to an attacker-controlled server. ... Organizations can try to compensate with their own security controls — for example, by vetting incoming emails with their own tools. However, Geenens points out, "You need something that is smarter than just the regular-expression engines and the state machines that we've built. Those will not work anymore, because there are an infinite number of permutations with which you can write an attack in natural language." 


UK: World’s first quantum computer built using standard silicon chips launched

This is reportedly the first quantum computer to be built using the standard complementary metal-oxide-semiconductor (CMOS) chip fabrication process which is the same transistor technology used in conventional computers. A key part of this approach is building cryoelectronics that connect qubits with control circuits that work at very low temperatures, making it possible to scale up quantum processors greatly. “This is quantum computing’s silicon moment,” James Palles‑Dimmock, Quantum Motion’s CEO, stated. ... In contrast to other quantum computing approaches, the startup used high-volume industrial 300 millimeter chipmaking processes from commercial foundries to produce qubits. The architecture, control stack, and manufacturing approach are all built to scale to host millions of qubits and pave the way for fault-tolerant, utility-scale, and commercially viable quantum computing. “With the delivery of this system, Quantum Motion is on track to bring commercially useful quantum computers to market this decade,” Hugo Saleh, Quantum Motion’s CEO and president, revealed. ... The system’s underlying QPU is built on a tile-based architecture, integrating all compute, readout, and control components into a dense, repeatable array. This design enables future expansion to millions of qubits per chip, with no changes to the system’s physical footprint.


Key strategies to reduce IT complexity

The cloud has multiplied the fragmentation of solutions within companies, expanding the number of environments, vendors, APIs, and integration approaches, which has raised the skill set, necessitated more complex governance, and prompted the emergence of cross-functional roles between IT and business. Cybersecurity also introduces further levels of complexity, introducing new platforms, monitoring tools, regulatory requirements, and risk management approaches that must be overseen by expert personnel. And then there’s shadow IT. With the ease of access to cloud technologies, it’s not uncommon for business units to independently activate services without involving IT, generating further risks. ... “Structured upskilling and reskilling programs are needed to prepare people to manage new technologies,” says Massara. “So is an organizational model capable of managing a growing number of projects, which can no longer be handled in a one-off manner. The approach to project management is changing because the project portfolio has expanded significantly, and a structured PMO is required, with project managers who often no longer reside solely in IT, but directly within the business.” ... While it’s true that an IT system with disparate systems leads to greater complexity, companies are still very cost-conscious and wary about heavily investing in unification right away. But as systems become obsolete, they become more harmonized.


Unshackling IT: Why Third-Party Support Is a Strategic Imperative, Especially for AI

One of the most compelling arguments for independent third-party support is its inherent vendor neutrality. When a company relies solely on a software vendor for support, that vendor naturally has a vested interest in promoting its latest upgrades, cloud migrations, and proprietary solutions. This can create a conflict of interest, potentially pushing customers towards expensive, unnecessary upgrades or discouraging them from exploring alternatives that might be a better fit for their unique needs. ... The recent acquisition of VMware by Broadcom provides a compelling and timely illustration of why third-party support is becoming increasingly critical. Following the merger, many VMware customers have expressed significant dissatisfaction with changes to licensing models, product roadmaps, and, crucially, support. Broadcom has been criticized for restructuring VMware’s offerings and reportedly reducing support for smaller customers, pushing them towards bundled, more expensive solutions. ... The shift towards third-party support isn’t just about cost savings; it’s about regaining control, accessing unbiased expertise, and ensuring business continuity in a rapidly changing technological landscape. For companies making critical decisions about AI integration and managing complex enterprise systems, providers like Spinnaker Support offer a strategic advantage.