
Daily Tech Digest - April 21, 2026


Quote for the day:

“The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself.” -- Mark Caine




Living off the Land attacks pose a pernicious threat for enterprises

"Living off the Land" (LOTL) attacks represent a sophisticated evolution in cybercraft where adversaries eschew traditional malware in favor of weaponizing an enterprise's own legitimate administrative tools. By exploiting native utilities like PowerShell, Windows Management Instrumentation, and various scripting frameworks, attackers can blend seamlessly into routine operational traffic, effectively hiding in plain sight. This stealthy approach allows threat actors—including advanced persistent groups like Salt Typhoon—to move laterally, escalate privileges, and exfiltrate data without triggering conventional signature-based security alerts. The article highlights that critical infrastructure and financial institutions are particularly vulnerable because they cannot simply disable these essential tools without disrupting vital services. To counter this pernicious threat, CIOs must pivot from reactive, perimeter-centric models toward strategies emphasizing behavioral context and intent. Effective defense requires a combination of rigorous tool hardening, such as enforcing signed scripts and least privilege access, alongside continuous monitoring that analyzes the timing and sequence of administrative actions. Furthermore, empowering security operations teams to engage in proactive threat hunting is essential for identifying the subtle patterns indicative of malicious activity. Ultimately, as attackers increasingly use the environment’s own rules against it, resilience depends on understanding normal operational behavior to distinguish legitimate management from stealthy, long-term intrusion.


UK firms are grappling with mismatched AI productivity gains – employees are more efficient

The Accenture "Generating Impact" report, as detailed by IT Pro, highlights a significant "productivity gap" where individual AI adoption is surging while organizational performance remains stagnant. Although nearly 18% of UK employees now utilize generative AI daily to improve their output quality and speed, only 10% of organizations have successfully scaled the technology into their core operations. This disconnect stems from a failure to redesign underlying workflows and systems; most companies are merely applying AI to isolated tasks rather than overhauling entire processes. Furthermore, a strategic mismatch exists between leadership and staff: while executives often prioritize cost reduction and short-term efficiency, workers are leveraging AI to enhance the value and creativity of their work. Looking ahead, the report identifies "agentic AI" as a potential breakthrough capable of augmenting 82% of working hours, yet 58% of executives admit their legacy IT infrastructure is unprepared for such advanced integration. To bridge this gap and unlock significant economic value, Accenture suggests that businesses must move beyond mere experimentation. Success requires a holistic "reinvention" strategy that integrates a robust digital core, comprehensive workforce reskilling, and a shift in focus toward long-term revenue growth rather than simple automation-driven savings.


The backup myth that is putting businesses at risk

The article "The Backup Myth That Is Putting Businesses at Risk" highlights a dangerous misconception: the belief that simply having data backups ensures business safety. While backups are essential for data preservation, they do not prevent the operational paralysis caused by system downtime. This distinction is critical because downtime is incredibly costly, with research from Oxford Economics suggesting it can cost businesses approximately $9,000 per minute. Traditional backup solutions often require hours or even days to fully restore systems, leading to significant financial losses and damaged customer reputations. To mitigate these risks, the article advocates for a comprehensive Business Continuity and Disaster Recovery (BCDR) strategy. Unlike basic backups, BCDR solutions facilitate rapid recovery—often within minutes—by utilizing virtualized environments and hybrid cloud architectures. This proactive approach combines local speed with cloud-based resilience, allowing operations to continue seamlessly while primary systems are repaired in the background. Ultimately, the article encourages organizations and Managed Service Providers (MSPs) to shift their focus from technical specifications to tangible business outcomes. By quantifying the financial impact of potential disruptions and prioritizing continuity over mere data storage, businesses can better protect their revenue, reputation, and long-term stability in an increasingly volatile digital landscape.


DPDP rules vs. employee AI usage: Are Indian companies prepared?

India's Digital Personal Data Protection (DPDP) Act emphasizes organizational accountability, consent, and strict control over personal data, yet many Indian companies face a compliance gap due to the rise of "shadow AI." Employees are organically adopting generative AI tools for productivity, often bypassing formal IT policies and creating invisible data risks. Since the DPDP Act holds organizations responsible for data processing, the use of external AI tools to handle sensitive information—without oversight—poses significant legal and reputational threats. Key challenges include a lack of visibility into data transfers, the absence of AI-specific governance frameworks, and reliance on consumer-grade tools that lack enterprise-level security. To address these vulnerabilities, leadership must shift from restrictive policies to proactive behavioral change. This involves implementing cloud-native architectures that centralize access control, providing sanctioned AI alternatives, and educating staff on purpose limitation. CFOs and CIOs must align to manage financial and operational risks, treating AI governance as essential digital hygiene rather than a future checkbox. Ultimately, true preparedness lies in establishing robust foundations that allow for innovation while ensuring strict adherence to evolving regulatory standards, thereby safeguarding against the potential for high penalties and data misuse in an increasingly AI-driven workplace.


Cloud Complexity: How To Simplify Without Sacrificing Speed

In the modern digital landscape, managing cloud complexity without compromising operational speed is a critical challenge for technology leaders. This Forbes Technology Council article outlines several strategic approaches to streamlining multicloud environments while maintaining agility. Central to these recommendations is the adoption of platform engineering, which emphasizes creating unified, self-service platforms with embedded guardrails and standardized templates. By leveraging automation and machine learning instead of static dashboards, organizations can enforce security and governance at scale, allowing developers to focus on innovation rather than infrastructure bottlenecks. Furthermore, experts suggest starting with simple Infrastructure as Code (IaC) to avoid overengineering and utilizing distributed databases with open APIs to abstract away underlying complexities. Stabilizing critical systems and resisting unnecessary upgrade cycles can also prevent self-inflicted chaos and operational disruption. Additionally, creating shared architectural foundations and clearly separating roles—specifically between explorers, builders, and operators—ensures that experimentation does not undermine stability. Ultimately, by standardizing on a unified platform layer and fostering a culture of machine-enforced discipline, enterprises can overcome the traditional trade-offs between speed and governance. This holistic approach allows teams to scale effectively, ensuring that infrastructure complexity serves as a foundation for innovation rather than a bottleneck to performance.


Compensation vs. Burnout: The New Retention Calculus for Cybersecurity Leaders

The 2026 Cybersecurity Talent Intelligence Report reveals a profession in turmoil, where only 34% of cybersecurity professionals plan to remain in their current roles. This mass turnover is primarily driven by escalating workloads and stagnant budgets, which have pushed job satisfaction to significant lows. While compensation remains a critical lever—with median salaries ranging from $113,000 for analysts to over $256,000 for functional leaders—the article emphasizes that financial rewards alone are no longer sufficient to ensure long-term retention. Organizations with higher revenues and public listings often provide a significant pay premium, yet even modest salary adjustments can notably increase employee loyalty across the board. However, the true "new calculus" for retention involves addressing the severe mental health strain and burnout affecting the industry, particularly for CISOs who shoulder immense emotional burdens. As artificial intelligence begins to reshape technical roles and productivity, business leaders must pivot from viewing burnout as a personal failing to recognizing it as a strategic organizational risk. Sustaining a resilient workforce now requires integrating formal wellness support, such as mandatory downtime and rotation-based on-call models, into core security programs to balance the intense pressures of preventing the unpreventable in a complex digital landscape.


AI-ready skills are not what you think

The Computerworld article "AI-ready skills are not what you think" highlights a fundamental shift in how enterprises approach workforce preparation for the artificial intelligence era. While early training programs prioritized technical maneuvers like prompt engineering and basic chatbot interactions, these tool-specific skills are quickly becoming obsolete as models evolve. Instead, true AI readiness is defined by durable human capabilities such as critical thinking, data literacy, and independent judgment. The core challenge is no longer teaching employees how to interact with AI, but rather how to supervise it. This includes output validation, systems thinking, and the ability to translate machine-generated insights into meaningful business actions. Crucially, as AI moves from experimental environments into high-stakes operational workflows involving regulatory risk or customer trust, human oversight becomes the primary safeguard. Experts emphasize that technical proficiency must be paired with "human edge" skills like problem framing and storytelling to remain effective. Furthermore, organizational success depends on leadership redefining accountability, ensuring that while AI accelerates analysis, humans remain responsible for final decisions and guardrails. Ultimately, the most valuable skills in an automated world are those that allow professionals to question, validate, and integrate AI outputs into complex business processes effectively and ethically.


Event-Driven Patterns for Cloud-Native Banking - What Works, What Hurts?

In this presentation, Sugu Sougoumarane explores the architectural patterns essential for building robust and reliable payment systems, drawing from his extensive experience in infrastructure engineering. The core challenge in payment processing is maintaining absolute data integrity and consistency across distributed systems where failure is inevitable. Sougoumarane emphasizes the critical role of idempotency, explaining how unique keys prevent duplicate transactions and ensure that retrying a failed operation does not result in double charging. He also discusses the importance of using finite state machines to manage the complex lifecycle of a payment, moving away from monolithic logic toward more manageable, discrete transitions. Furthermore, the session delves into the necessity of immutable ledgers for auditability and the "transactional outbox" pattern to ensure atomicity between database updates and external message queuing. By treating every payment as a formal state transition and prioritizing crash recovery over error prevention, developers can build systems that remain consistent even during network partitions or database outages. Ultimately, the presentation provides a blueprint for distributed consistency in financial contexts, advocating for decoupled services that rely on verifiable proofs of state rather than fragile, long-running distributed locks or manual intervention.
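As a rough illustration of the idempotency and state-machine points (not the speaker's implementation), the sketch below stores payments keyed by an idempotency key, makes a retried create a no-op, and only permits legal lifecycle transitions.

```python
# Sketch: idempotency keys make retries safe; a finite state machine constrains the lifecycle.
import sqlite3

ALLOWED = {  # legal payment state transitions
    "created": {"authorized", "failed"},
    "authorized": {"captured", "voided"},
    "captured": set(), "voided": set(), "failed": set(),
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (idempotency_key TEXT PRIMARY KEY, state TEXT, amount_cents INTEGER)")

def create_payment(key, amount_cents):
    # INSERT OR IGNORE turns a retried create into a no-op instead of a double charge
    conn.execute("INSERT OR IGNORE INTO payments VALUES (?, 'created', ?)", (key, amount_cents))
    conn.commit()
    return conn.execute("SELECT state FROM payments WHERE idempotency_key = ?", (key,)).fetchone()[0]

def transition(key, new_state):
    current = conn.execute("SELECT state FROM payments WHERE idempotency_key = ?", (key,)).fetchone()[0]
    if new_state == current:
        return current                      # retrying the same transition is a no-op
    if new_state not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {new_state}")
    conn.execute("UPDATE payments SET state = ? WHERE idempotency_key = ? AND state = ?",
                 (new_state, key, current))
    conn.commit()
    return new_state

create_payment("order-42-attempt-1", 1999)
create_payment("order-42-attempt-1", 1999)   # retry: no duplicate row, no double charge
transition("order-42-attempt-1", "authorized")
```

A transactional outbox would extend the same local transaction to also insert an outgoing message row, so the state change and its notification commit atomically.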


CISOs reshape their roles as business risk strategists

The role of the Chief Information Security Officer (CISO) is undergoing a fundamental transformation from a technical silo to a core business risk management function. Driven largely by the rapid integration of artificial intelligence, which intertwines security directly with operational processes, the modern CISO must now operate as a strategic partner rather than just a technologist. This shift requires moving beyond traditional metrics of application security to a language of enterprise-wide risk, involving financial impact, market growth, and competitive positioning. According to the article, the arrival of generative and agentic AI has made digital and business risks virtually synonymous, forcing security leaders to quantify how mitigation strategies align with overall corporate objectives. Consequently, corporate boards now expect CISOs to provide nuanced advice on whether to accept, transfer, or mitigate specific threats based on the organization’s unique risk tolerance. While many CISOs still struggle with this transition due to their technical engineering backgrounds, the new leadership profile demands proactive engagement with external peers and vendors to inform long-term strategy. Ultimately, the successful "business CISO" is one who moves from a reactive, fear-based compliance mindset to a strategic stance that actively accelerates growth while ensuring robust organizational resilience and stability.


Cloudflare wants to rebuild the network for the age of AI agents

Cloudflare is actively reshaping the global network to accommodate the rise of autonomous AI software through a series of infrastructure updates announced during its "Agents Week" event. Recognizing that traditional networking and security models—designed primarily for human interactive logins—often fail for ephemeral, autonomous processes, the company introduced Cloudflare Mesh. This private networking fabric provides AI agents with a shared private IP space and bidirectional reachability, replacing the manual friction of VPNs and multi-factor authentication with seamless, scoped access to private infrastructure. Beyond connectivity, Cloudflare is empowering agents with essential administrative capabilities, such as the new Registrar API for domain management and an integrated Email Service for outbound and inbound communications. To further support agentic workflows, the company launched "Agent Memory" to preserve conversation context and "Artifacts" for Git-compatible versioned storage. Additionally, a new Agent Readiness Index allows organizations to evaluate how effectively their web presence supports these non-human visitors. By integrating these services into its existing edge network, Cloudflare aims to treat AI agents as first-class citizens, creating a secure and highly scalable control plane that balances the performance needs of automated systems with the stringent security requirements of modern enterprise environments.

Daily Tech Digest - January 14, 2026


Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France



Outsmarting Data Center Outage Risks in 2026

Even the most advanced and well-managed facilities are not immune to disruptions. Recent incidents, such as outages at AWS, Cloudflare, and Microsoft Azure, serve as reminders that no data center can guarantee 100% uptime. This highlights the critical importance of taking proactive steps to mitigate data center outage risks, regardless of how reliable your facility appears to be. ... Overheating events can cause servers to shut down, leading to outages. To prevent an outage, you must detect and address excess heat issues proactively, before they become severe enough to trigger failures. A key consideration in this regard is to monitor data center temperatures granularly – meaning that instead of just deploying sensors that track the overall temperature of the server room, you monitor the temperatures of individual racks and servers. This is important because heat can accumulate in small areas, even if it remains normal across the data center. ... But from the perspective of data center uptime, physical security, which protects against physical attacks, is arguably a more important consideration. Whereas cybersecurity attacks typically target only a handful of servers or workloads, physical attacks can easily disable an entire data center. To this end, it’s critical to invest in multi-layered physical security controls – from the data center perimeter through to locks on individual server cabinets – to protect against intrusion. ... To mitigate outage risks, data center operators must take proactive steps to prevent fires from starting in the first place. 
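A tiny sketch of the granular-monitoring idea, with illustrative thresholds: the room average can look healthy while a single rack crosses its limit.

```python
# Illustrative thresholds only; real limits depend on the facility's design envelope.
ROOM_AVG_LIMIT_C = 27.0
RACK_LIMIT_C = 32.0

def check_temperatures(rack_readings):
    """rack_readings: mapping of rack id -> latest inlet temperature in Celsius."""
    room_avg = sum(rack_readings.values()) / len(rack_readings)
    hot_racks = [rack for rack, temp in rack_readings.items() if temp > RACK_LIMIT_C]
    return {"room_average_ok": room_avg <= ROOM_AVG_LIMIT_C, "hot_racks": hot_racks}

# A room can average about 26 C while one rack quietly overheats:
print(check_temperatures({"rack-01": 24.5, "rack-02": 25.0, "rack-03": 33.8, "rack-04": 23.1}))
```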


Deploying AI agents is not your typical software launch - 7 lessons from the trenches

Across the industry, there is agreement that agents require new considerations beyond what we've become accustomed to in traditional software development. In the process, new lessons are being learned. Industry leaders shared some of their own lessons with ZDNET as they moved forward into an agentic AI future. ... Kale urges AI agent proponents to "grant autonomy in proportion to reversibility, not model confidence. Irreversible actions across multiple domains should always have human oversight, regardless of how confident the system appears." Observability is also key, said Kale. "Being able to see how a decision was reached matters as much as the decision itself." ... "AI works well when it has quality data underneath," said Oleg Danyliuk, CEO at Duanex, a marketing agency that built an agent to automate the validation of leads of visitors to its site. "In our example, in order to understand if the lead is interesting for us, we need to get as much data as we can, and the most complex is to get the social network's data, as it is mostly not accessible to scrape. That's why we had to implement several workarounds and get only the public part of the data." ... "AI agents do not succeed on model capability alone," said Martin Bufi, a principal research director at Info-Tech Research Group. His team designed and developed AI agent systems for enterprise-level functions, including financial analysis, compliance validation, and document processing. What helped these projects succeed was the employment of "AgentOps" (agent operations), which focuses on managing the entire agent lifecycle.
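Kale's rule, autonomy in proportion to reversibility, can be expressed as a simple guardrail. The sketch below is illustrative (the action names and approval hook are invented): irreversible actions are held for human approval regardless of model confidence, and every decision is logged for observability.

```python
# Sketch: gate agent actions on reversibility, not model confidence, and log the decision path.
import logging

logging.basicConfig(level=logging.INFO)
IRREVERSIBLE = {"delete_records", "send_payment", "email_customer"}  # assumed examples

def execute_action(action, params, confidence, approve_fn):
    reversible = action not in IRREVERSIBLE
    logging.info("action=%s confidence=%.2f reversible=%s params=%s",
                 action, confidence, reversible, params)   # observability trail
    if not reversible and not approve_fn(action, params):
        logging.info("action=%s held pending human approval", action)
        return "held_for_review"
    return "executed"

# High confidence does not bypass the human gate for an irreversible action:
print(execute_action("send_payment", {"amount": 120}, confidence=0.99, approve_fn=lambda a, p: False))
print(execute_action("summarize_ticket", {"id": 7}, confidence=0.60, approve_fn=lambda a, p: False))
```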


What enterprises think about quantum computing

Quantum computers’ qubits are incredibly fragile, so even setting or reading qubits has to be incredibly precise or it messes everything up. Environmental conditions can also mess things up, because qubits can get entangled with the things around them. Qubits can even leak away in the middle of something. So, here we have a technology that most people don’t understand and that is incredibly finicky, and we’re supposed to bet the business on it? How many enterprises would? None, according to the 352 who commented on the topic. How many think their companies will use it eventually? All of them—but they don’t know where or when, as an old song goes. And by the way, quantum theory is older than that song, and we still don’t have a handle on it. ... First and foremost, this isn’t the technology for general business applications. The quantum geeks emphasize that good quantum applications are where you have some incredibly complex algorithm, some math problem, that is simply not solvable using digital computers. Some suggest that it’s best to think of a quantum computer as a kind of analog computer. ... Even where quantum computing can augment digital, you’ll have to watch ROI according to the second point. The cost of quantum computing is currently prohibitive for most applications, even the stuff it’s good for, so you need to find applications that have massive benefits, or think of some “quantum as a service” for solving an occasional complex problem.


Beyond the hype: 4 critical misconceptions derailing enterprise AI adoption

Leaders frequently assume AI adoption is purely technological when it represents a fundamental transformation that requires comprehensive change management, governance redesign and cultural evolution. The readiness illusion obscures human and organizational barriers that determine success. ... Leaders frequently assume AI can address every business challenge and guarantee immediate ROI, when empirical evidence demonstrates that AI delivers measurable value only in targeted, well-defined and precise use cases. This expectation reality gap contributes to pilot paralysis, in which companies undertake numerous AI experiments but struggle to scale any to production. ... Executives frequently claim their enterprise data is already clean or assume that collecting more data will ensure AI success — fundamentally misunderstanding that quality, stewardship and relevance matter exponentially more than raw quantity — and misunderstanding that the definition of clean data changes when AI is introduced. ... AI systems are probabilistic and require continuous lifecycle management. MIT research demonstrates manufacturing firms adopting AI frequently experience J-curve trajectories, where initial productivity declines but is then followed by longer-term gains. This is because AI deployment triggers organizational disruption requiring adjustment periods. Companies failing to anticipate this pattern abandon initiatives prematurely. The fallacy manifests in inadequate deployment management, including planning for model monitoring, retraining, governance and adaptation.


Inside the Growing Problem of Identity Sprawl

For years, identity governance relied on a set of assumptions tied closely to human behavior. Employees joined organizations, moved roles and eventually left. Even when access reviews lagged or controls were imperfect, identities persisted long enough to be corrected. That model no longer reflects reality. The difference between human and machine identities isn't just scale. "With human identities, if people are coming into your organizations as employees, you onboard them. They work, and by the time they leave, you can deprovision them," said Haider Iqbal ... "Organizations are using AI today, whether they know it or not, and most organizations don't even know that it's deployed in their environment," said Morey Haber, chief security advisor at BeyondTrust. That lack of awareness is not limited to AI. Many security teams struggle to maintain a reliable inventory of non-human identities, especially when those identities are created dynamically by automation or cloud services. Visibility gaps don't stop access from being granted, but they do prevent teams from confidently enforcing policy. "Without integration … I don't know what it's doing, and then I got to go figure it out. When you unify together, then you have all the AI visibility," Haber said, describing the operational impact of fragmented tooling. ... Modern enterprise environments rely on elevated access for cloud orchestration, application integration and automated workflows. Service accounts and application programming interfaces often require broad permissions to function reliably.


The Timeless Architecture: Enterprise Integration Patterns That Exceed Technology Trends

A strange reality is often encountered by enterprise technology leaders: everything seems to change, yet many things remain the same. New technologies emerge — from COBOL to Java to Python, from mainframes to the cloud — but the fundamental problems persist. Organizations still need to connect incompatible systems, convert data between different formats, maintain reliability when components fail, and scale to meet increasing demand. ... Synchronous request-response communication creates tight coupling and can lead to cascading failures. Asynchronous messaging has appeared across all eras — on mainframes via MQ, in SOA through ESB platforms, in cloud environments via managed messaging services such as SQS and Service Bus, and in modern event-streaming platforms like Kafka. ... A key architectural question is how to coordinate complex processes that span multiple systems. Two primary approaches exist. Orchestration relies on a centralized coordinator to control the workflow, while choreography allows systems to react to events in a decentralized manner. Both approaches existed during the mainframe era and remain relevant in microservices architectures today. Each has advantages: orchestration provides control and visibility, while choreography offers resilience and loose coupling. ... Organizations that treat security as a mere technical afterthought often accumulate significant technical debt. In contrast, enterprises that embed security patterns as foundational architectural elements are better equipped to adapt as technologies evolve.
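The orchestration-versus-choreography distinction is easier to see in miniature. The sketch below is a toy illustration with invented step and event names: the first function is a central coordinator that owns the workflow, while the second wires services together purely through published events.

```python
# Orchestration: one coordinator owns the workflow, giving control and visibility.
def reserve_stock(order): print("inventory: reserved", order["id"])
def charge_card(order):   print("billing: charged", order["id"])
def ship(order):          print("shipping: shipped", order["id"])

def orchestrate_order(order):
    reserve_stock(order)
    charge_card(order)
    ship(order)

# Choreography: each service reacts to events; no central controller, looser coupling.
HANDLERS = {}
def on(event):
    def register(fn):
        HANDLERS.setdefault(event, []).append(fn)
        return fn
    return register

def publish(event, payload):
    for handler in HANDLERS.get(event, []):
        handler(payload)

@on("order_placed")
def _reserve(payload):
    print("inventory: reserved", payload["id"])
    publish("stock_reserved", payload)

@on("stock_reserved")
def _charge(payload):
    print("billing: charged", payload["id"])

orchestrate_order({"id": 1})
publish("order_placed", {"id": 2})   # services react in turn, no coordinator involved
```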


From distributed monolith to composable architecture on AWS: A modern approach to scalable software

A distributed monolith is a system composed of multiple services or components, deployed independently but tightly coupled through synchronous dependencies such as direct API calls or shared databases. Unlike a true microservices architecture, where services are autonomous and loosely coupled, distributed monoliths share many pitfalls of monoliths ... Composable architecture embraces modularity and loose coupling by treating every component as an independent building block. The focus lies in business alignment and agility rather than just code decomposition. ... Start by analyzing the existing application to find natural business or functional boundaries. Use Domain-Driven Design to define bounded contexts that encapsulate specific business capabilities. ... Refactor the code into separate repositories or modules, each representing a bounded context or microservice. This clear separation supports independent deployment pipelines and ownership. ... Replace direct code or database calls with API calls or events. For example: Use REST or GraphQL APIs via API Gateway. Emit business events via EventBridge or SNS for asynchronous processing. Use SQS for message queuing to handle transient workloads. ... Assign each microservice its own DynamoDB table or data store. Avoid cross-service database joins or queries. Adopt a single-table design in DynamoDB to optimize data retrieval patterns within each service boundary. This approach improves scalability and performance at the data layer.
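As a hedged sketch of the "replace direct calls with events" step, the snippet below publishes a business event through boto3's EventBridge client; the bus name, source, and detail type are placeholders, and credentials and region are assumed to come from the environment.

```python
# Sketch: emit a business event instead of making a direct cross-service call.
import json
import boto3

events = boto3.client("events")  # region and credentials assumed from the environment

def publish_order_placed(order_id, customer_id, total_cents):
    events.put_events(Entries=[{
        "EventBusName": "commerce-bus",      # placeholder bus name
        "Source": "orders.service",          # the bounded context emitting the event
        "DetailType": "OrderPlaced",         # a business event, not a technical call
        "Detail": json.dumps({
            "orderId": order_id,
            "customerId": customer_id,
            "totalCents": total_cents,
        }),
    }])
```

Downstream contexts such as billing or shipping would subscribe through EventBridge rules, so the orders service never needs to know who consumes the event.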


Firmware scanning time, cost, and where teams run EMBA

Security teams that deal with connected devices often end up running long firmware scans overnight, checking progress in the morning, and trying to explain to colleagues why a single image consumed a workday of compute time. That routine sets the context for a new research paper that examines how the EMBA firmware analysis tool behaves when it runs in different environments. ... Firmware scans often stretch into many hours, especially for medium and large images. The researchers tracked scan durations down to the second and repeated runs to measure consistency. Repeated executions on the same platform produced nearly identical run times and findings. That behavior matters for teams that depend on repeatable results during testing, validation, or research work. It also supports the use of EMBA in environments where scans need to be rerun with the same settings over time. The data also shows that firmware size alone does not explain scan duration. Internal structure, compression, and embedded components influenced how long individual modules ran. Some smaller images triggered lengthy analysis steps, especially during deep inspection stages. ... Nuray said cloud-based EMBA deployments fit well into large-scale scanning activity. He described cloud execution as a practical option for parallel analysis across many firmware images. Local systems, he added, support detailed investigation where teams need tight control over execution conditions and repeatability.


'Most Severe AI Vulnerability to Date' Hits ServiceNow

Authentication issues in ServiceNow potentially opened the door for arbitrary attackers to gain full control over the entire platform and access to the various systems connected to it. ... Costello's first major discovery was that ServiceNow shipped the same credential to every third-party service that authenticated to the Virtual Agent application programming interface (API). It was a simple, obvious string — "servicenowexternalagent" — and it allowed him to connect to ServiceNow as legitimate third-party chat apps do. To do anything of significance with the Virtual Agent, though, he had to impersonate a specific user. Costello's second discovery, then, was quite convenient. He found that as far as ServiceNow was concerned, all a user needed to prove their identity was their email address — no password, let alone multifactor authentication (MFA), was required. ... An attacker could use this information to create tickets and manage workflows, but the stakes are now higher, because ServiceNow decided to upgrade its virtual agent: it can now also engage the platform's shiny new "Now Assist" agentic AI technology. ... "It's not just a compromise of the platform and what's in the platform — there may be data from other systems being put onto that platform," he notes, adding, "If you're any reasonably-sized organization, you are absolutely going to have ServiceNow hooked up to all kinds of other systems. So with this exploit, you can also then ... pivot around to Salesforce, or jump to Microsoft, or wherever."


Cybercrime Inc.: When hackers are better organized than IT

Cybercrime has transformed from isolated incidents into an organized industry. The large groups operate according to the same principles as international corporations. They have departments, processes, management levels, and KPIs. They develop software, maintain customer databases, and evaluate their success rates. ... Cybercrime now functions like a service chain. Anyone planning an attack today can purchase all the necessary components — from initial access credentials to leak management. Access brokers sell access to corporate networks. Botnet operators provide computing power for attacks. Developers deliver turnkey exploits tailored to known vulnerabilities. Communication specialists handle contact with the victims. ... What makes cybercrime so dangerous today is not just the technology itself, but the efficiency of its use. Attackers are flexible, networked, and eager to experiment. They test, discard, and improve — in cycles that are almost unimaginable in a corporate setting. Recruitment is handled like in startups. Job offers for developers, social engineers, or language specialists circulate in darknet forums. There are performance bonuses, training, and career paths. The work methods are agile, communication is decentralized, and financial motivation is clearly defined. ... Given this development, absolute security is unattainable. The crucial factor is the ability to quickly regain operational capability after an attack. Cyber resilience describes this competence — not only to survive crises but also to learn from them.

Daily Tech Digest - October 22, 2025


Quote for the day:

"Good content isn't about good storytelling. It's about telling a true story well." -- Ann Handley



When yesterday’s code becomes today’s threat

A striking new supply chain attack is sending shockwaves through the developer community: a worm-style campaign dubbed “Shai-Hulud” has compromised at least 187 npm packages, including the tinycolor package that has 2 million hits weekly, and is spreading to other maintainers' packages. The malicious payload modifies package manifests, injects malicious files, repackages, and republishes — thereby infecting downstream projects. This incident underscores a harsh reality: even code released weeks, months, or even years ago can become dangerous once a dependency in its chain has been compromised. ... Sign your code: All packages/releases should use cryptographic signing. This allows users to verify the origin and integrity of what they are installing. Verify signatures before use: When pulling in dependencies, CI/CD pipelines, and even local dev setups, include a step to check that the signature matches a trusted publisher and that the code wasn’t tampered with. SBOMs are your map of exposure: If you have a Software Bill of Materials for your project(s), you can query it for compromised packages. Find which versions/packages have been modified — even retroactively — so you can patch, remove, or isolate them. Continuous monitoring of risk posture: It's not enough to secure when you ship. You need alerts when any dependency or component’s risk changes: new vulnerabilities, suspicious behavior, misuse of credentials, or signs that a trusted package may have been modified after release.
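The SBOM advice lends itself to a small script. The sketch below assumes a CycloneDX-style JSON document with a "components" list of name/version entries; the compromised-package set is a placeholder, not the actual Shai-Hulud indicator list.

```python
# Sketch: query an SBOM for components that match known-compromised package versions.
import json

COMPROMISED = {("tinycolor", "4.1.1"), ("left-pad-ng", "0.0.3")}  # illustrative placeholders only

def find_exposure(sbom_path):
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in COMPROMISED:
            hits.append(component.get("purl", f"{key[0]}@{key[1]}"))
    return hits

# print(find_exposure("bom.json"))  # e.g. ["pkg:npm/tinycolor@4.1.1"]
```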


Cloud Sovereignty: Feature. Bug. Feature. Repeat!

Cloud sovereignty isn’t just a buzzword anymore, argues Kushwaha. “It’s a real concern for businesses across the world. The pattern is clear. The cloud isn’t a one-size-fits-all solution anymore. Companies are starting to realise that sometimes control, cost, and compliance matter more than convenience.” ... Cloud sovereignty is increasingly critical due to the evolving geopolitical scenario, government and industry-specific regulations, and vendor lock-ins with heavy reliance on hyperscalers. The concept has gained momentum and will continue to do so because technology has become pervasive and critical for running a state/country and any misuse by foreign actors can cause major repercussions, the way Bavishi sees it. Prof. Bhatt captures that true digital sovereignty is a distant dream and achieving this requires a robust ecosystem for decades. This isn’t counterintuitive; it’s evolution, as Kushwaha epitomises. “The cloud’s original promise was one of freedom. Today, when it comes to the cloud, freedom means more control. Businesses investing heavily in digital futures can’t afford to ignore the fine print in hyperscaler contracts or the reach of foreign laws. Sovereignty is the foundation for building safely in a fragmented world.” ... Organisations have recognised the risks of digital dependencies and are looking for better options. There is no turning back, Karlitschek underlines.


Securing AI to Benefit from AI

As organizations begin to integrate AI into defensive workflows, identity security becomes the foundation for trust. Every model, script, or autonomous agent operating in a production environment now represents a new identity — one capable of accessing data, issuing commands, and influencing defensive outcomes. If those identities aren't properly governed, the tools meant to strengthen security can quietly become sources of risk. The emergence of Agentic AI systems makes this especially important. These systems don't just analyze; they may act without human intervention. They triage alerts, enrich context, or trigger response playbooks under delegated authority from human operators. ... AI systems are capable of assisting human practitioners like an intern that never sleeps. However, it is critical for security teams to differentiate what to automate from what to augment. Some tasks benefit from full automation, especially those that are repeatable, measurable, and low-risk if an error occurs. ... Threat enrichment, log parsing, and alert deduplication are prime candidates for automation. These are data-heavy, pattern-driven processes where consistency outperforms creativity. By contrast, incident scoping, attribution, and response decisions rely on context that AI cannot fully grasp. Here, AI should assist by surfacing indicators, suggesting next steps, or summarizing findings while practitioners retain decision authority. Finding that balance requires maturity in process design.


The Unkillable Threat: How Attackers Turned Blockchain Into Bulletproof Malware Infrastructure

When EtherHiding emerged in September 2023 as part of the CLEARFAKE campaign, it introduced a chilling reality: attackers no longer need vulnerable servers or hackable domains. They’ve found something far better—a global, decentralized infrastructure that literally cannot be shut down. ... When victims visit the infected page, the loader queries a smart contract on Ethereum or BNB Smart Chain using a read-only function call. ... Forget everything you know about disrupting cybercrime infrastructure. There is no command-and-control server to raid. No hosting provider to subpoena. No DNS to poison. The malicious code exists simultaneously everywhere and nowhere, distributed across thousands of blockchain nodes worldwide. As long as Ethereum or BNB Smart Chain operates—and they’re not going anywhere—the malware persists. Traditional law enforcement tactics, honed over decades of fighting cybercrime, suddenly encounter an immovable object. You cannot arrest a blockchain. You cannot seize a smart contract. You cannot compel a decentralized network to comply. ... The read-only nature of payload retrieval is perhaps the most insidious feature. When the loader queries the smart contract, it uses functions that don’t create transactions or blockchain records. 
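For defenders trying to picture the mechanism, the sketch below shows what such a read-only retrieval looks like with web3.py: an eth_call executes against a node without creating a transaction, which is why it leaves no on-chain record. The RPC endpoint, contract address, ABI, and function name here are hypothetical placeholders.

```python
# Sketch: a read-only contract call produces no transaction and therefore no on-chain trace.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid/"))  # placeholder RPC endpoint
contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",     # placeholder address
    abi=[{"name": "get", "type": "function", "stateMutability": "view",
          "inputs": [], "outputs": [{"name": "", "type": "string"}]}],
)
payload = contract.functions.get().call()  # eth_call: no gas spent, no transaction recorded
```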


New 'Markovian Thinking' technique unlocks a path to million-token AI reasoning

Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called Markovian Thinking, the approach allows LLMs to engage in lengthy reasoning without incurring the prohibitive computational costs that currently limit such tasks. The team’s implementation, an environment named Delethink, structures the reasoning chain into fixed-size chunks, breaking the scaling problem that plagues very long LLM responses. Initial estimates show that for a 1.5B parameter model, this method can cut the costs of training by more than two-thirds compared to standard approaches. ... The researchers compared this to models trained with the standard LongCoT-RL method. Their findings indicate that the model trained with Delethink could reason up to 24,000 tokens, and matched or surpassed a LongCoT model trained with the same 24,000-token budget on math benchmarks. On other tasks like coding and PhD-level questions, Delethink also matched or slightly beat its LongCoT counterpart. “Overall, these results indicate that Delethink uses its thinking tokens as effectively as LongCoT-RL with reduced compute,” the researchers write. The benefits become even more pronounced when scaling beyond the training budget. 
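A much-simplified sketch of the chunked-reasoning idea (not Mila's Delethink code): reasoning proceeds in fixed-size chunks, and at each boundary the context is reset to the original query plus a short carryover, so per-step compute stays bounded instead of growing with the full trace. The `generate` callable and the FINAL ANSWER convention are stand-ins.

```python
# Sketch: Markovian-style chunked reasoning with a bounded context per step.
def markovian_reasoning(generate, query, chunk_tokens=8192, carryover_chars=2048, max_chunks=16):
    carryover = ""
    for _ in range(max_chunks):
        prompt = query if not carryover else f"{query}\n\n[previous reasoning tail]\n{carryover}"
        chunk = generate(prompt, max_new_tokens=chunk_tokens)  # context never exceeds query + carryover
        if "FINAL ANSWER:" in chunk:
            return chunk.split("FINAL ANSWER:", 1)[1].strip()
        carryover = chunk[-carryover_chars:]   # crude character-based proxy for a token budget
    return None
```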


The dazzling appeal of the neoclouds

While their purpose-built design gives them an advantage for AI workloads, neoclouds also bring complexities and trade-offs. Enterprises need to understand where these platforms excel and plan how to integrate them most effectively into broader cloud strategies. Let’s explore why this buzzword demands your attention and how to stay ahead in this new era of cloud computing. ... Neoclouds, unburdened by the need to support everything, are outpacing hyperscalers in areas like agility, pricing, and speed of deployment for AI workloads. A shortage of GPUs and data center capacity also benefits neocloud providers, which are smaller and nimbler, allowing them to scale quickly and meet growing demand more effectively. This agility has made them increasingly attractive to AI researchers, startups, and enterprises transitioning to AI-powered technologies. ... Neoclouds are transforming cloud computing by offering purpose-built, cost-effective infrastructure for AI workloads. Their price advantages will challenge traditional cloud providers’ market share, reshape the industry, and change enterprise perceptions, fueled by their expected rapid growth. As enterprises find themselves at the crossroads of innovation and infrastructure, they must carefully assess how neoclouds can fit into their broader architectural strategies. 


Wi-Fi 8 is coming — and it’s going to make AI a lot faster

Unlike previous generations of Wi-Fi that competed on peak throughput numbers, Wi-Fi 8 prioritizes consistent performance under challenging conditions. The specification introduces coordinated multi-access point features, dynamic spectrum management, and hardware-accelerated telemetry designed for AI workloads at the network edge. ... A core part of the Wi-Fi 8 architecture is an approach known as Ultra High Reliability (UHR). This architectural philosophy targets the 99th percentile user experience rather than best-case scenarios. The innovation addresses AI application requirements that demand symmetric bandwidth, consistent sub-5-millisecond latency and reliable uplink performance. ... Wi-Fi 8 introduces Extended Long Range (ELR) mode specifically for IoT devices. This feature uses lower data rates with more robust coding to extend coverage. The tradeoff accepts reduced throughput for dramatically improved range. ELR operates by increasing symbol duration and using lower-order modulation. This improves the link budget for battery-powered sensors, smart home devices and outdoor IoT deployments. ... Wi-Fi 8 enhances roaming to maintain sub-millisecond handoff latency. The specification includes improved Fast Initial Link Setup (FILS) and introduces coordinated roaming decisions across the infrastructure. Access points share client context information before handoff. 


Life, death, and online identity: What happens to your online accounts after death?

Today, we lack the tools (protocols) and the regulations to enable digital estate management at scale. Law and regulation can force a change in behavior by large providers. However, lacking effective protocols to establish a mechanism to identify the decedent’s chosen individuals who will manage their digital estate, every service will have to design their own path. This creates an exceptional burden on individuals planning their digital estate, and on individuals who manage the digital estates of the deceased. ... When we set out to write this paper, we wanted to influence the large technology and social media platforms, politicians, regulators, estate planners, and others who can help change the status quo. Further, we hoped to influence standards development organizations, such as the OpenID Foundation and the Internet Engineering Task Force (IETF), and their members. As standards developers in the realm of identity, we have an obligation to the people we serve to consider identity from birth to death and beyond, to ensure every human receives the respect they deserve in life and in death. Additionally, we wrote the planning guide to help individuals plan for their own digital estate. By giving people the tools to help describe, document, and manage their digital estates proactively, we can raise more awareness and provide tools to help protect individuals at one of the most vulnerable moments of their lives.


5 steps to help CIOs land a board seat

Serving on a board isn’t an extension of an operational role. One issue CIOs face is not understanding the difference between executive management and governance, Stadolnik says. “They’re there to advise, not audit or lead the current company’s CIO,” he adds. In the boardroom, the mandate is to provide strategy, governance, and oversight, not execution. That shift, Stadolnik says, can be jarring for tech leaders who’ve spent their careers driving operational results. ... “There were some broad risk areas where having strong technical leadership was valuable, but it was hard for boards to carve out a full seat just for that, which is why having CIO-plus roles was very beneficial,” says Cullivan. The issue of access is another uphill battle for CIOs. As Payne found, the network effect can play a huge role in seeking a board role. But not every IT leader has the right kind of network that can open the door to these opportunities. ... Boards expect directors to bring scope across business disciplines and issues, not just depth in one functional area. Stadolnik encourages CIOs to utilize their strategic orientation, results focus, and collaborative and influence skills to set themselves up for additional responsibilities like procurement, supply chain, shared services, and others. “It’s those executive leadership capabilities that will unlock broader roles,” he says. Experience in those broader roles bolsters a CIO’s board résumé and credibility.


Microservices Without Meltdown: 7 Pragmatic Patterns That Stick

A good sniff test: can we describe the service’s job in one short sentence, and does a single team wake up if it misbehaves? If not, we’ve drawn mural art, not an interface. Start with a small handful of services you can name plainly—orders, payments, catalog—then pressure-test them with real flows. When a request spans three services just to answer a simple question, that’s a hint we’ve sliced too thin or coupled too often. ... Microservices live and die by their contracts. We like contracts that are explicit, versioned, and backwards-friendly. “Backwards-friendly” means old clients keep working for a while when we add fields or new behaviors. For HTTP APIs, OpenAPI plus consistent error formats makes a huge difference. ... We need timeouts and retries that fit our service behavior, or we’ll turn small hiccups into big outages. For east-west traffic, a service mesh or smart gateway helps us nudge traffic safely and set per-route policies. We’re fans of explicit settings instead of magical defaults. ... Each service owns its tables; cross-service read needs go through APIs or asynchronous replication. When a write spans multiple services, aim for a sequence of local commits with compensating actions instead of distributed locks. Yes, we’re describing sagas without the capes: do the smallest thing, record it durably, then trigger the next hop. 
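The "sagas without the capes" idea reduces to a small pattern: a sequence of local steps, each paired with a compensating action that runs if a later step fails. The sketch below uses invented step names.

```python
# Sketch: a saga as local commits plus compensations, instead of a distributed lock.
def run_saga(steps):
    """steps: list of (do, undo) callables. On failure, undo completed steps in reverse order."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):
            undo()   # compensate; nothing global to roll back
        raise

run_saga([
    (lambda: print("reserve inventory"), lambda: print("release inventory")),
    (lambda: print("charge card"),       lambda: print("refund card")),
    (lambda: print("create shipment"),   lambda: print("cancel shipment")),
])
```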

Daily Tech Digest - October 21, 2025


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone


The teacher is the new engineer: Inside the rise of AI enablement and PromptOps

Enterprises should onboard AI agents as deliberately as they onboard people — with job descriptions, training curricula, feedback loops and performance reviews. This is a cross-functional effort across data science, security, compliance, design, HR and the end users who will work with the system daily. ... Don’t let your AI’s first “training” be with real customers. Build high-fidelity sandboxes and stress-test tone, reasoning and edge cases — then evaluate with human graders. ... As onboarding matures, expect to see AI enablement managers and PromptOps specialists in more org charts, curating prompts, managing retrieval sources, running eval suites and coordinating cross-functional updates. Microsoft’s internal Copilot rollout points to this operational discipline: Centers of excellence, governance templates and executive-ready deployment playbooks. These practitioners are the “teachers” who keep AI aligned with fast-moving business goals. ... In a future where every employee has an AI teammate, the organizations that take onboarding seriously will move faster, safer and with greater purpose. Gen AI doesn’t just need data or compute; it needs guidance, goals, and growth plans. Treating AI systems as teachable, improvable and accountable team members turns hype into habitual value.
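A minimal sketch of the sandbox-plus-eval-suite idea: run the assistant against canned cases and block the rollout if the graded pass rate drops below a threshold. The assistant, grader, and cases here are trivial stand-ins for human or automated review.

```python
# Sketch: a tiny eval harness gating a rollout on graded quality.
def run_eval_suite(assistant, cases, grader, pass_threshold=0.9):
    """cases: list of {'prompt': ..., 'expectation': ...}; grader returns a score from 0 to 1."""
    scores = []
    for case in cases:
        reply = assistant(case["prompt"])
        scores.append(grader(reply, case["expectation"]))
    pass_rate = sum(scores) / len(scores)
    return {"pass_rate": pass_rate, "ship": pass_rate >= pass_threshold, "scores": scores}

result = run_eval_suite(
    assistant=lambda p: "You can request a refund within 30 days.",
    cases=[{"prompt": "What is the refund window?", "expectation": "30 days"}],
    grader=lambda reply, expectation: 1.0 if expectation in reply else 0.0,
)
print(result)
```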


How CIOs Can Unlock Business Agility with Modular Cloud Architectures

A modular cloud architecture is one that makes a variety of discrete cloud services available on demand. The services are hosted across multiple cloud platforms, and different units within the business can pick and choose among specific services to meet their needs. ... At a high level, the main challenge stemming from a modular cloud architecture is that it adds complexity to an organization's cloud strategy. The more cloud services the CIO makes available, the harder it becomes to ensure that everyone is using them in a secure, efficient, cost-effective way. This is why a pivot toward a modular cloud strategy must be accompanied by governance and management practices that keep these challenges in check. ... As they work to ensure that the business can consume a wide selection of cloud services efficiently and securely, IT leaders may take inspiration from a practice known as platform engineering, which has grown in popularity in recent years. Platform engineering is the establishment of approved IT solutions that a business's internal users can access on a self-service basis, usually via a type of portal known as an internal developer platform. Historically, organizations have used platform engineering primarily to provide software developers with access to development tools and environments, not to manage cloud services. But the same sort of approach could help to streamline access to modular, composable cloud solutions.


8 platform engineering anti-patterns

Establishing a product mindset also helps drive improvement of the platform over time. “Start with a minimum viable platform to iterate and adapt based on feedback while also considering the need to measure the platform’s impact,” says Platform Engineering’s Galante. ... Top-down mandates for new technologies can easily turn off developers, especially when they alter existing workflows. Without the ability to contribute and iterate, the platform drifts from developer needs, prompting workarounds. ... “The feeling of being heard and understood is very important,” says Zohar Einy, CEO at Port, provider of a developer portal. “Users are more receptive to the portal once they know it’s been built after someone asked about their problems.” By performing user research and conducting developer surveys up front, platform engineers can discover the needs of all stakeholders and create platforms that mesh better with existing workflows and benefit productivity. ... Although platform engineering case studies from large companies, like Spotify, Expedia, or American Airlines, look impressive on paper, it doesn’t mean their strategies will transfer well to other organizations, especially those with mid-size or small-scale environments. ... Platform engineering requires more energy beyond a simple rebrand. “I’ve seen teams simply being renamed from operations or infrastructure teams to platform engineering teams, with very little change or benefit to the organization,” says Paula Kennedy.


How Ransomware’s Data Theft Evolution is Rewriting Cyber Insurance Risk Models

Traditional cyber insurance risk models assume ransomware means encrypted files and brief business interruptions. The shift toward data theft creates complex claim scenarios that span multiple coverage lines and expose gaps in traditional policy structures. When attackers steal data rather than just encrypting it, the resulting claims can simultaneously trigger business interruption coverage, professional liability protection, regulatory defense coverage and crisis management. Each coverage line may have different limits, deductibles and exclusions, creating complicated interactions that claims adjusters struggle to parse. Modern business relationships are interconnected, which amplifies complications. A data breach at one organization can trigger liability claims from business partners, regulatory investigations across multiple jurisdictions, and contractual disputes with vendors and customers. Dependencies on third-party services create cascading exposures that traditional risk models fail to capture. ... The insurance implications are profound. Manual risk assessment processes cannot keep pace with the volume and sophistication of AI-enhanced attacks. Carriers still relying on traditional underwriting approaches face a fundamental mismatch of human-speed risk evaluation against machine-speed threat deployment.


Network security devices endanger orgs with ’90s era flaws

“Attackers are not trying to do the newest and greatest thing every single day,” watchTowr’s Harris explains. “They will do what works at scale. And we’ve now just seen that phishing has become objectively too expensive or too unsuccessful at scale to justify the time investment in deploying mailing infrastructure, getting domains and sender protocols in place, finding ways to bypass EDR, AV, sandboxes, mail filters, etc. It is now easier to find a 1990s-tier vulnerability in a border device where EDR typically isn’t deployed, exploit that, and then pivot from there.” ... “Identifying a command injection that is looking for a command string being passed to a system in some C or C++ code is not a terribly difficult thing to find,” Gross says. “But I think the trouble is understanding a really complicated appliance like these security network appliances. It’s not just like a single web application and that’s it.” This can also make it difficult for product developers themselves to understand the risks of a feature they add on one component if they don’t have a full understanding of the entire product architecture. ... Another problem? These appliances have a lot of legacy code, some of it 10 years old or older. Plus, products and code bases inherited through acquisitions often mean the developers who originally wrote the code might be long gone.


When everything’s connected, everything’s at risk

Treat OT changes as business changes (because they are). Involve plant managers, safety managers, and maintenance leadership in risk decisions. Be sure to test all changes in a development environment that adequately models the production environment where possible. Schedule changes during planned downtime with rollbacks ready. Build visibility passively with read-only collectors and protocol-aware monitoring to create asset and traffic maps without requiring PLC access. ... No one can predict the future. However, if the past is an indicator of the future, adversaries will continue to increasingly bypass devices and hijack cloud consoles, API tokens and remote management platforms to impact businesses on an industrial scale. Another area of risk is the firmware supply chain. Tiny devices often carry third-party code that we can’t easily patch. We’ll face more “patch by replacement” realities, where the only fix is swapping hardware. Additionally, machine identities at the edge, such as certificates and tokens, will outnumber humans by orders of magnitude. The lifecycle and privileges of those identities are the new perimeter. From a threat perspective, we will see an increasing number of ransomware attacks targeting physical disruption to increase leverage for the threat actors, as well as private 5G/smart facilities that, if misconfigured, propagate risk faster than any LAN ever has.


Software engineering foundations for the AI-native era

As developers begin composing software instead of coding line by line, they will need API-enabled composable components and services to stitch together. Software engineering leaders should begin by defining a goal to achieve a composable architecture that is based on modern multiexperience composable applications, APIs and loosely coupled API-first services. ... Software engineering leaders should support AI-ready data by organizing enterprise data assets for AI use. Generative AI is most useful when the LLM is paired with context-specific data. Platform engineering and internal developer portals provide the vehicles by which this data can be packaged, found and integrated by developers. The urgent demand for AI-ready data to support AI requires evolutionary changes to data management and upgrades to architecture, platforms, skills and processes. Critically, Model Context Protocol (MCP) needs to be considered. ... Software engineers can become risk-averse unless they are given the freedom, psychological safety and environment for risk taking and experimentation. Leaders must establish a culture of innovation where their teams are eager to experiment with AI technologies. This also applies in software product ownership, where experiments and innovation lead to greater optimization of the value delivered to customers.


What Does a 'Sovereign Cloud' Really Mean?

First, a sovereign cloud could be approached as a matter of procurement: Canada could shift its contracts from the US tech companies that currently dominate the approved list to non-American alternatives. At present, eight cloud service providers (CSPs) are approved for use by the Canadian government, seven of which are American. Accordingly, there is a clear opportunity to diversify procurement, particularly towards European CSPs, as suggested by the government’s ongoing discussions with France’s OVH Cloud. ... Second, a sovereign cloud could be defined as cloud infrastructure that is not only located in Canada and insulated from foreign legal access, but also owned by Canadian entities. Practically speaking, this would mean procuring services from domestic companies, a step the government has already taken with ThinkOn, the only non-American CSP on the government’s approved list. ... Third, perhaps true cloud sovereignty might require more direct state intervention and a publicly built and maintained cloud. The Canadian government could develop in-house capacities for cloud computing and exercise the highest possible degree of control over government data. A dedicated Crown corporation could be established to serve the government’s cloud computing needs. ... No matter how we approach it, cloud sovereignty will be costly.


Big Tech’s trust crisis: Why there is now the need for regulatory alignment

When companies deploy AI features primarily to establish market position rather than solve user problems, they create what might be termed ‘trust debt’ – a technical and social liability that compounds over time. This manifests in several ways, including degraded user experience, increased attack surfaces, and regulatory friction that ultimately impacts system performance and scalability. ... The emerging landscape of AI governance frameworks, from the EU AI Act to ISO 42001, shows an attempt to codify engineering best practices for managing algorithmic systems at scale. These standards address several technical realities, including bias in training data, security vulnerabilities in model inference, and intellectual property risks in data processing pipelines. Organisations implementing robust AI governance frameworks achieve regulatory compliance while adopting proven system design patterns that reduce operational risk. ... The technical implementation of trust requires embedding privacy and security considerations throughout the development lifecycle – what security engineers call ‘shifting left’ on governance. This approach treats regulatory compliance as architectural requirements that shape system design from inception. Companies that successfully integrate governance into their technical architecture find that compliance becomes a byproduct of good engineering practices which, over time, creates a series of sustainable competitive advantages.
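One hedged reading of "shifting left" on governance is to express compliance requirements as automated checks that run in CI next to unit tests. The manifest format and field names below are invented for illustration and are not drawn from the EU AI Act or ISO 42001.

```python
import json

# Hypothetical governance gate: every model shipped must carry a manifest that
# documents provenance, intended use, ownership, and a completed bias review.
REQUIRED_FIELDS = {"data_sources", "intended_use", "bias_evaluation", "owner"}

def check_model_manifest(path: str) -> list[str]:
    with open(path) as handle:
        manifest = json.load(handle)
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - manifest.keys())]
    if manifest.get("bias_evaluation") == "pending":
        problems.append("bias evaluation has not been completed")
    return problems

if __name__ == "__main__":
    issues = check_model_manifest("model_manifest.json")  # hypothetical artifact
    if issues:
        raise SystemExit("governance check failed:\n" + "\n".join(issues))
```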


The most sustainable data center is the one that’s already built: The business case for a ‘retrofit first’ mandate

From a sustainability standpoint, reusing and retrofitting legacy infrastructure is the single most impactful step our industry can take. Every megawatt of IT load that’s migrated into an existing site avoids the manufacturing, transport, and installation of new chillers, pumps, generators, piping, conduit, and switchgear and prevents the waste disposal associated with demolition. Sectors like healthcare, airports, and manufacturing have long proven that, with proper maintenance, mechanical and electrical systems can operate reliably for 30–50 years, and distribution piping can last a century. The data center industry – known for redundancy and resilience – can and should follow suit. The good news is that most data centers were built to last. ... When executed strategically, retrofits can reduce capital costs by 30–50 percent compared to greenfield construction, while accelerating time to market by months or even years. They also strengthen ESG reporting credibility, proving that sustainability and profitability can coexist. ... At the end of the day, I agree with Ms. Kass – the cleanest data center is the one that does not need to be built. For those that are already built, reusing and revitalizing the infrastructure we already have is not just a responsible environmental choice, it’s a sound business strategy that conserves capital, accelerates deployment, and aligns our industry’s growth with society’s expectations.

Daily Tech Digest - November 14, 2024

Where IT Consultancies Expect to Focus in 2025

“Much of what’s driving conversations around AI today is not just the technology itself, but the need for businesses to rethink how they use data to unlock new opportunities,” says Chaplin. “AI is part of this equation, but data remains the foundation that everything else builds upon.” West Monroe also sees a shift toward platform-enabled environments where software, data, and platforms converge. “Rather than creating everything from scratch, companies are focusing on selecting, configuring, and integrating the right platforms to drive value. The key challenge now is helping clients leverage the platforms they already have and making sure they can get the most out of them,” says Chaplin. “As a result, IT teams need to develop cross-functional skills that blend software development, platform integration and data management. This convergence of skills is where we see impact -- helping clients navigate the complexities of platform integration and optimization in a fast-evolving landscape.” ... “This isn’t just about implementing new technologies, it’s about preparing the workforce and the organization to operate in a world where AI plays a significant role. ...”


How Is AI Shaping the Future of the Data Pipeline?

AI’s role in the data pipeline begins with automation, especially in handling and processing raw data – a traditionally labor-intensive task. AI can automate workflows and allow data pipelines to adapt to new data formats with minimal human intervention. With this in mind, Harrisburg University is actively exploring AI-driven tools for data integration that leverage LLMs and machine learning models to enhance and optimize ETL processes, including web scraping, data cleaning, augmentation, code generation, mapping, and error handling. These adaptive pipelines, which automatically adjust to new data structures, allow companies to manage large and evolving datasets without the need for extensive manual coding. ... Beyond immediate operational improvements, AI is shaping the future of scalable and sustainable data pipelines. As industries collect data at an accelerating rate, traditional pipelines often struggle to keep pace. AI’s ability to scale data handling across various formats and volumes makes it ideal for supporting industries with massive data needs, such as retail, logistics, and telecommunications. In logistics, for example, AI-driven pipelines streamline inventory management and optimize route planning based on real-time traffic data. 
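As a small, hypothetical sketch of the "adaptive pipeline" idea, the loader below tolerates schema drift by recording new fields instead of failing; in a real system the drift event might feed an LLM-assisted mapping or migration step rather than the simple schema widening shown here.

```python
from typing import Iterable

# Illustrative target schema for incoming order records.
known_schema = {"order_id", "customer_id", "amount"}

def load(records: Iterable[dict]) -> list[dict]:
    cleaned = []
    for record in records:
        new_fields = set(record) - known_schema
        if new_fields:
            # Adaptive step: widen the schema instead of rejecting the batch.
            print(f"schema drift detected, adding: {sorted(new_fields)}")
            known_schema.update(new_fields)
        cleaned.append({key: record.get(key) for key in known_schema})
    return cleaned

rows = load([
    {"order_id": 1, "customer_id": "c9", "amount": 12.5},
    {"order_id": 2, "customer_id": "c4", "amount": 7.0, "currency": "EUR"},
])
```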


Innovating with Data Mesh and Data Governance

Companies choose a data mesh to overcome the limitations of “centralized and monolithic” data platforms, as noted by Zhamak Dehghani, the director of emerging technologies at Thoughtworks. Technologies like data lakes and warehouses try to consolidate all data in one place, but enterprises can find that the data gets stuck there. A company might have only one centralized data repository – typically managed by a single team such as IT – that serves the data up to everyone else in the company. This slows down data access because of bottlenecks. For example, having already taken days to get HR privacy approval, the finance department’s data access requests might then sit in the inbox of one or two people in IT for additional days. Instead, a data mesh puts data control in the hands of each domain that serves that data. Subject matter experts (SMEs) in the domain control how this data is organized, managed, and delivered. ... Data mesh with federated Data Governance balances expertise, flexibility, and speed with data product interoperability among different domains. With a data mesh, the people with the most knowledge about their subject matter take charge of their data. In the future, organizations will continue to face challenges in providing good, federated Data Governance to access data through a data mesh.
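Below is a minimal sketch of what a domain-owned data product with a federated governance rule might look like in code; the contract fields and the HR example are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DataProductContract:
    # The owning domain publishes its data behind a contract that federated
    # governance can enforce uniformly across every domain's products.
    name: str
    owner_domain: str
    schema: dict[str, type]
    contains_pii: bool

def validate(contract: DataProductContract, record: dict) -> bool:
    # A governance rule applied the same way to every domain's data product.
    return set(record) == set(contract.schema) and all(
        isinstance(record[field], kind) for field, kind in contract.schema.items()
    )

headcount = DataProductContract(
    name="monthly_headcount",
    owner_domain="hr",
    schema={"month": str, "department": str, "headcount": int},
    contains_pii=False,
)
assert validate(headcount, {"month": "2024-10", "department": "finance", "headcount": 42})
```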


The Agile Manifesto was ahead of its time

A fundamental idea of the agile methodology is to alleviate this and allow for flexibility and changing requirements. The software development process should ebb and flow as features are developed and requirements change. The software should adapt quickly to these changes. That is the heart and soul of the whole Agile Manifesto. However, when the Agile Manifesto was conceived, the state of software development and software delivery technology was not flexible enough to fulfill what the manifesto was espousing. But this has changed with the advent of the SaaS (software as a service) model. It’s all well and good to want to maximize flexibility, but for many years, software had to be delivered all at once. Multiple features had to be coordinated to be ready for a single release date. Time had to be allocated for bug fixing. The limits of the technology forced software development teams to be disciplined, rigid, and inflexible. Delivery dates had to be met, after all. And once the software was delivered, changing it meant delivering all over again. Updates were often a cumbersome and arduous process. A Windows program of any complexity could be difficult to install and configure. Delivering or upgrading software at a site with 200 computers running Windows could be a major challenge.


Improving the Developer Experience by Deploying CI/CD in Databases

Characteristically less mature than CI/CD for application code, CI/CD for databases enables developers to manage schema updates such as changes to table structures and relationships. This management ability means developers can execute software updates to applications quickly and continuously without disrupting database users. It also helps improve quality and governance, creating a pipeline everyone follows. The CI stage typically involves developers working on code simultaneously, helping to fix bugs and address integration issues in the initial testing process. With the help of automation, businesses can move faster, with fewer dependencies and errors and greater accuracy — especially when backed up by automated testing and validation of database changes. Human intervention is not needed, resulting in fewer hours spent on change management. ... Deploying CI/CD for databases empowers developers to focus on what they do best: Building better applications. Businesses today should decide when, not if, they plan to implement these practices. For development leaders looking to start deploying CI/CD in databases, standardization — such as how certain things are named and organized — is a solid first step and can set the stage for automation in the future. 
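The mechanics underneath database CI/CD usually reduce to versioned, apply-once migrations recorded in the database itself. The sketch below shows that idea using an in-memory SQLite database purely for illustration; it is not modeled on any particular migration tool.

```python
import sqlite3

# Each schema change is a numbered script, applied exactly once and recorded,
# so every environment converges on the same schema state.
MIGRATIONS = {
    1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customers ADD COLUMN email TEXT",
}

def migrate(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version in sorted(v for v in MIGRATIONS if v > current):
        conn.execute(MIGRATIONS[version])
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies migrations 1 and 2
migrate(conn)  # idempotent: nothing new to apply
```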


To Dare or not to Dare: the MVA Dilemma

Business stakeholders must understand the benefits of technology experiments in terms they are familiar with, regarding how the technology will better satisfy customer needs. Operations stakeholders need to be satisfied that the technology is stable and supportable, or at least that stability and supportability are part of the criteria that will be used to evaluate the technology. Wholly avoiding technology experiments is usually a bad thing because it may miss opportunities to solve business problems in a better way, which can lead to solutions that are less effective than they would be otherwise. Over time, this can increase technical debt. ... These trade-offs are constrained by two simple truths: the development team doesn’t have much time to acquire and master new technologies, and they cannot put the business goals of the release at risk by adopting unproven or unsustainable technology. This often leads the team to stick with tried-and-true technologies, but this strategy also has risks, most notably those of the hammer-nail kind in which old technologies that are unsuited to novel problems are used anyway, as in the case where relational databases are used to store graph-like data structures.


2025 API Trend Reports: Avoid the Antipattern

Modern APIs aren’t all durable, full-featured products, and don’t need to be. If you’re taking multiple cross-functional agile sprints to design an API you’ll use for less than a year, you’re wasting resources building a system that will probably be overspecified and bloated. The alternative is to use tools and processes centered around an API developer’s unit of work, which is a single endpoint. No matter the scope or lifespan of an API, it will consist of endpoints, and each of those has to be written by a developer, one at a time. It’s another way that turning back to the fundamentals can help you adapt to new trends. ... Technology will keep evolving, and the way we employ AI might look quite different in a few years. Serverless architecture is the hot trend now, but something else will eventually overtake it. No doubt, cybercriminals will keep surprising us with new attacks. Trends evolve, but underlying fundamentals — like efficiency, the need for collaboration, the value of consistency and the need to adapt — will always be what drives business decisions. For the API industry, the key to keeping up with trends without sacrificing fundamentals is to take a developer-centric approach. Developers will always create the core value of your APIs. 
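To ground the "single endpoint as the unit of work" point, here is a deliberately small, standard-library-only sketch of one endpoint written and shipped on its own; the path and payload are illustrative.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthEndpoint(BaseHTTPRequestHandler):
    # One endpoint, one handler: the smallest unit an API developer ships.
    def do_GET(self):
        if self.path != "/v1/health":
            self.send_error(404)
            return
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), HealthEndpoint).serve_forever()
```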


The targeted approach to cloud & data - CIOs' need for ROI gains

AI and DaaS are part of the pool of technologies that Pacetti also draws on, and the company also uses AI provided by Microsoft, both with ChatGPT and Copilot. Plus, AI has been integrated into the e-commerce site to support product research and recommendations. But there’s an even more essential area for Pacetti. “With the end of third-party cookies, AI is now essential to exploit the little data we can capture from internet users who accept tracking while browsing,” he says. “We use Google’s GA4 to compensate for missing analytics data, for example, by exploiting data from technical cookies.” ... CIOs discuss sales targets with CEOs and the board, cementing the IT and business bond. But another even more innovative aspect is to not only make IT a driver of revenues, but also to measure IT with business indicators. This is a form of advanced convergence achieved by following specific methodologies. Sondrio People’s Bank (BPS), for example, adopted business relationship management, which deals with translating requests from operational functions to IT and, vice versa, bringing IT into operational functions. BPS also adopts proactive thinking, a risk-based framework for strategic alignment and compliance with business objectives.


Hidden Threats Lurk in Outdated Java

How important are security updates? After all, Java is now nearly 30 years old; haven’t we eliminated all the vulnerabilities by now? Sadly not, and realistically, that will never happen. OpenJDK contains 7.5 million lines of code and relies on many external libraries, all of which can be subject to undiscovered vulnerabilities. ... Since Oracle changed its distributions and licensing, there have been 22 updates. Of these, six PSUs (Patch Set Updates) required a modification and new release to address a regression that had been introduced. The time to create the new update has varied from just under two weeks to over five weeks. At no time have any of the CPUs (Critical Patch Updates) been affected like this. Access to a CPU is essential to maintain the maximum level of security for your applications. Since all free binary distributions of OpenJDK only provide the PSU version, some users may consider a couple of weeks before being able to deploy as an acceptable risk. ... When an update to the JDK is released, all vulnerabilities addressed are disclosed in the release notes. Bad actors now have information enabling them to try to find ways to exploit unpatched applications.


How to defend Microsoft networks from adversary-in-the-middle attacks

Depending on the impact of the attack, start the cleanup process. Begin by forcing a password change on the user account, ensuring that you have revoked all tokens to block the attacker’s fake credentials. If the consequences of the attack were severe, consider disabling the user’s primary account and setting up a new temporary account as you investigate the extent of the intrusion. You may even consider quarantining the user’s devices and potentially taking forensic-level backups of workstations if you are unsure of the original source of the intrusion so you can best investigate. Next, review all app registrations, changes to service principals, enterprise apps, and anything else the user may have changed or impacted since the time the intrusion was noted. You’ll want to do a deep investigation into the mailbox’s access and permissions. Mandiant has a PowerShell-based script that can assist you in investigating the impact of the intrusion. “This repository contains a PowerShell module for detecting artifacts that may be indicators of UNC2452 and other threat actor activity,” Mandiant notes. “Some indicators are ‘high-fidelity’ indicators of compromise, while other artifacts are so-called ‘dual-use’ artifacts.”
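As one illustration of the token-revocation step, here is a sketch using the Microsoft Graph revokeSignInSessions action. Acquiring a suitably privileged access token, and confirming this fits your own incident-response runbook, is assumed and out of scope here.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def revoke_sessions(user_id: str, access_token: str) -> None:
    # Invalidates the user's refresh tokens and session cookies so that any
    # tokens stolen in the adversary-in-the-middle attack stop working.
    response = requests.post(
        f"{GRAPH}/users/{user_id}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
```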



Quote for the day:

"To think creatively, we must be able to look afresh to at what we normally take for granted." -- George Kneller