Showing posts with label financial services. Show all posts
Showing posts with label financial services. Show all posts

Daily Tech Digest - March 30, 2026


Quote for the day:

"Leaders who won't own failures become failures." -- Orrin Woodward


🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 14 mins • Perfect for listening on the go.


A practical guide to controlling AI agent costs before they spiral

Managing the financial implications of AI agents is becoming a critical priority for IT leaders as these autonomous tools integrate into enterprise workflows. While software licensing fees are generally predictable, costs related to tokens, infrastructure, and management are often volatile due to the non-deterministic nature of AI. To prevent spending from exceeding the generated value, organizations must adopt a strategic framework that balances agent autonomy with fiscal oversight. Key recommendations include selecting flexible platforms that support various models and hosting environments, utilizing lower-cost LLMs for less complex tasks, and implementing automated cost-prediction tools. Furthermore, businesses should actively track real-time expenditures, optimize or repeat cost-effective workflows, and employ data caching to reduce redundant token consumption. Establishing hard token quotas can act as a safety net against runaway agents, while periodic reviews help curb agent sprawl similar to SaaS management practices. Ultimately, the goal is to leverage the transformative potential of agentic AI without allowing unpredictable operational expenses to spiral out of control. By prioritizing flexible architectures and robust monitoring early in the adoption phase, CIOs can ensure that their AI investments deliver measurable productivity gains rather than becoming a financial burden.


Teaching Programmers A Survival Mindset

The article "Teaching Programmers a 'Survival' Mindset," published by ACM, argues that the traditional educational focus on pure logic and "happy path" coding is no longer sufficient for the modern digital landscape. As software systems grow increasingly complex and interconnected, the author advocates for a pedagogical shift toward a "survival" or "adversarial" mindset. This approach prioritizes resilience, security, and the anticipation of failure over simple feature delivery. Instead of assuming a controlled environment where inputs are valid and dependencies are stable, programmers must learn to view their code through the lens of potential exploitation and systemic breakdown. The piece emphasizes that a survival mindset involves rigorous defensive programming, a deep understanding of the software supply chain, and the ability to navigate legacy environments where documentation may be scarce. By integrating these "survivalist" principles into computer science curricula and professional development, the industry can move away from fragile, high-maintenance builds toward robust systems capable of withstanding real-world pressures. Ultimately, the goal is to produce engineers who treat security and stability not as afterthoughts or separate departments, but as foundational elements of the craft, ensuring long-term viability in an increasingly volatile technological ecosystem.


For Financial Services, a Wake-Up Call for Reclaiming IAM Control

Part five of the "Repatriating IAM" series focuses on the strategic necessity of reclaiming Identity and Access Management (IAM) control within the financial services sector. The article argues that while SaaS-based identity solutions offer convenience, they often introduce unacceptable risks regarding operational resilience, regulatory compliance, and concentrated third-party dependencies. For financial institutions, identity is not merely an IT function but a core component of the financial control fabric, essential for enforcing segregation of duties and preventing fraud. By repatriating critical IAM functions—such as authorization decisioning, token services, and machine identity governance—closer to the actual workloads, organizations can achieve deterministic performance and forensic-grade auditability. The author highlights that "waiting out" a cloud provider’s outage is not a viable strategy when market hours and settlement windows are at stake. Instead, moving these high-risk workflows into controlled, hardened environments allows for superior telemetry and real-time responsiveness. Ultimately, the post positions IAM repatriation as a logical evolution for firms needing to balance AI-scale identity demands with the rigorous security and evidentiary standards required by global regulators, ensuring that no single external failure can paralyze essential banking operations or compromise sensitive customer data.


Practical Problem-Solving Approaches in Modern Software Testing

Modern software testing has evolved from a final development checkpoint into a continuous discipline characterized by proactive problem-solving and shared quality ownership. As software architectures grow increasingly complex, traditional testing models often prove inefficient, resulting in high defect costs and sluggish release cycles. To address these challenges, the article highlights four core approaches that prioritize speed, visibility, and accuracy. Shift-left testing embeds quality checks into the earliest design phases, significantly reducing production defect rates by catching requirements issues before they are ever coded. This proactive strategy is complemented by exploratory testing, which utilizes human intuition and AI-driven insights to uncover nuanced edge cases that automated scripts frequently overlook. Furthermore, risk-based testing allows teams to strategically allocate limited resources to high-impact system areas, while continuous testing within CI/CD pipelines provides near-instant feedback on every code change. By moving away from rigid, script-driven protocols toward these integrated methods, organizations can achieve faster feedback loops and lower overall maintenance costs. Ultimately, modern testing requires making failures visible and actionable in real time, transforming quality assurance from a siloed task into a collaborative foundation for reliable software delivery. This holistic strategy ensures that testing keeps pace with rapid development while meeting rising user expectations.


Data centers are war infrastructure now

The article "Data centers are war infrastructure now" explores the paradigm shift of digital hubs from silent commercial utilities to central pillars of national security and modern combat. As warfare becomes increasingly software-defined and data-driven, the facilities housing the world's processing power have transitioned into high-value strategic targets, comparable to energy grids and maritime ports. This evolution is driven by the "infrastructural entanglement" between sovereign states and private hyperscalers, where military operations, intelligence gathering, and essential government services are hosted on the same servers as civilian data. The physical vulnerability of this infrastructure is underscored by rising tensions in critical transit zones like the Red Sea, where undersea cables and landing stations have become active frontlines. Consequently, data centers are no longer viewed as mere business assets but as integral components of a nation's defense posture. This shift necessitates a new approach to physical security, cybersecurity, and international regulation, as the boundary between corporate interests and national sovereignty continues to blur. Ultimately, the piece highlights that in an era where information dominance determines victory, the data center has emerged as the most critical—and vulnerable—ammunition depot of the twenty-first century.


Why delivery drift shows up too late, and what I watch instead

In his article for CIO, James Grafton explores why critical project delivery issues often remain hidden until they escalate into full-blown crises. He argues that traditional governance and status reporting are structurally flawed because they prioritize "smoothed" expectations over the messy reality of execution. To move beyond deceptive "green" status reports, Grafton suggests monitoring three early-warning signals that reflect actual system behavior under load. First, he identifies "waiting work," where queues and stretching lead times signal that demand has outpaced capacity at key boundaries. Second, he highlights "rework," which indicates that implicit assumptions or communication gaps are forcing teams to backtrack. Finally, he points to "borrowed capacity," where temporary heroics and reprioritization quietly consume future resilience to protect current metrics. By shifting the governance conversation from performance justifications to identifying system strain, leaders can detect both "erosion"—visible, loud failures—and "ossification"—the quiet drift hidden behind outdated processes. This proactive approach allows organizations to bridge the gap between intent and delivery reality, preserving strategic options before failure becomes inevitable. By observing these behavioral trends rather than focusing on absolute values, CIOs can foster a safer environment for surfacing risks early and making deliberate, rather than reactive, interventions to ensure long-term stability.


Goodbye Software as a Service, Hello AI as a Service

The digital landscape is undergoing a profound transformation as Software as a Service (SaaS) begins to give way to AI as a Service (AIaaS), driven primarily by the emergence of Agentic AI. Unlike traditional SaaS models that rely on manual user navigation through dashboards and interfaces, AIaaS utilizes autonomous agents that execute workflows by directly calling systems and services. This shift transitions software from a primary workspace to an underlying capability, where the focus moves from user-driven inputs to autonomous orchestration. A critical development in this evolution is the rise of agent collaboration, facilitated by frameworks like the Model Context Protocol, which allow multiple agents to pass tasks and data across various platforms seamlessly. Consequently, the role of developers is evolving from building static integrations to designing and supervising agent behaviors within sophisticated governance frameworks. However, this increased autonomy introduces significant operational risks, including data exposure and complexity. Organizations must therefore prioritize robust infrastructure and clear guardrails to ensure accountability and traceability. Ultimately, while AI agents may replace human-driven manual processes, human oversight remains essential to manage decision-making and ensure that these autonomous systems operate within defined ethical and operational boundaries to drive long-term business value.


Scaling industrial AI is more a human than a technical challenge

Industrial AI has transitioned from experimental pilots to practical implementation, yet achieving mature, large-scale adoption remains an elusive goal for most organizations. While technical hurdles such as infrastructure gaps and cybersecurity risks are prevalent, the primary obstacle to scaling is inherently human rather than technological. The core challenge lies in bridging the historical divide between information technology (IT) and operational technology (OT) departments. These two disciplines must operate as a cohesive team to succeed, but many organizations still suffer from siloed structures where nearly half report minimal cooperation. True progress requires a shift from individual convergence to organizational collaboration, where IT experts and OT specialists align their distinct competencies toward shared goals like safety, uptime, and resilience. By fostering trust and establishing clear lines of accountability, leaders can navigate the complexities of AI-driven operations more effectively. Organizations that successfully dismantle these departmental barriers report higher confidence, stronger security postures, and a more ready workforce. Ultimately, the future of industrial AI depends on the ability to forge connected teams that blend digital agility with operational rigor, transforming isolated technological promises into sustained, everyday impact across manufacturing, transportation, and utility sectors.
 

Building Consumer Trust with IoT

The Internet of Things (IoT) is revolutionizing modern life, with projections suggesting a global value of up to $12.5 trillion by 2030 through innovations like smart cities and environmental monitoring. However, this digital transformation faces a critical hurdle: establishing and maintaining consumer trust. Central to this challenge are ethical concerns surrounding data privacy and security vulnerabilities, as devices often collect sensitive personal information susceptible to cyber threats like DDoS attacks. To foster confidence, organizations must implement transparent data usage policies and proactive security measures, such as real-time traffic monitoring, while adhering to regulatory standards like GDPR. Beyond digital security, the article emphasizes the environmental toll of IoT, noting that energy consumption and electronic waste necessitate a "green IoT" approach characterized by sustainable product design. Achieving a trustworthy ecosystem requires a collective commitment to global best practices, including the adoption of IPv6 for scalable connectivity and engagement with open technical communities like RIPE. By integrating ethical considerations throughout a project's lifecycle, developers can ensure that IoT serves the broader well-being of society and the planet. This holistic approach, combining robust security with environmental responsibility and regulatory compliance, is essential for unlocking the full potential of an interconnected world.


Why risk alone doesn’t get you to yes

The article by Chuck Randolph emphasizes that the greatest challenge for security leaders isn't identifying threats, but securing executive buy-in to act upon them. While technical briefs may clearly outline risks, they often fail to compel action because they are not translated into the language of business accountability, such as revenue flow and operational stability. To bridge this gap, security professionals must pivot from presenting dense technical metrics to highlighting tangible business consequences, like manufacturing shutdowns or lost contracts. Randolph notes that effective leaders address objections upfront, align security initiatives with shared strategic outcomes rather than departmental needs, and replace vague warnings with precise, actionable requests. By connecting technical vulnerabilities to "business math"—associating risk with specific financial liabilities—security experts can engage stakeholders like CFOs and COOs more effectively. Ultimately, the piece argues that security leadership is defined by the ability to influence organizational movement through better translation rather than just more data. Influence transforms information into action, ensuring that identified risks are not merely acknowledged but actively mitigated. This strategic shift in communication is essential for protecting the enterprise and achieving a "yes" from decision-makers who prioritize long-term value.

Daily Tech Digest - February 04, 2026


Quote for the day:

"The struggle you're in today is developing the strength you need for tomorrow." -- Elizabeth McCormick



A deep technical dive into going fully passwordless in hybrid enterprise environments

Before we can talk about passwordless authentication, we need to address what I call the “prerequisite triangle”: cloud Kerberos trust, device registration and Conditional Access policies. Skip any one of these, and your migration will stall before it gains momentum. ... Once your prerequisites are in place, you face critical architectural decisions that will shape your deployment for years to come. The primary decision point is whether to use Windows Hello for Business, FIDO2 security keys or phone sign-in as your primary authentication mechanism. ... The architectural decision also includes determining how you handle legacy applications that still require passwords. Your options are limited: implement a passwordless-compatible application gateway, deprecate the application entirely or use Entra ID’s smart lockout and password protection features to reduce risk while you transition. ... Start with a pilot group — I recommend between 50 and 200 users who are willing to accept some friction in exchange for security improvements. This group should include IT staff and security-conscious users who can provide meaningful feedback without becoming frustrated with early-stage issues. ... Recovery mechanisms deserve special attention. What happens when a user’s device is stolen? What if the TPM fails? What if they forget their PIN and can’t reach your self-service portal? Document these scenarios and test them with your help desk before full rollout. 


When Cloud Outages Ripple Across the Internet

For consumers, these outages are often experienced as an inconvenience, such as being unable to order food, stream content, or access online services. For businesses, however, the impact is far more severe. When an airline’s booking system goes offline, lost availability translates directly into lost revenue, reputational damage, and operational disruption. These incidents highlight that cloud outages affect far more than compute or networking. One of the most critical and impactful areas is identity. When authentication and authorization are disrupted, the result is not just downtime; it is a core operational and security incident. ... Cloud providers are not identity systems. But modern identity architectures are deeply dependent on cloud-hosted infrastructure and shared services. Even when an authentication service itself remains functional, failures elsewhere in the dependency chain can render identity flows unusable. ... High availability is widely implemented and absolutely necessary, but it is often insufficient for identity systems. Most high-availability designs focus on regional failover: a primary deployment in one region with a secondary in another. If one region fails, traffic shifts to the backup. This approach breaks down when failures affect shared or global services. If identity systems in multiple regions depend on the same cloud control plane, DNS provider, or managed database service, regional failover provides little protection. In these scenarios, the backup system fails for the same reasons as the primary.


The Art of Lean Governance: Elevating Reconciliation to Primary Control for Data Risk

In today's environment comprising of continuous data ecosystems, governance based on periodic inspection is misaligned with how data risk emerges. The central question for boards, regulators, auditors, and risk committees has shifted: Can the institution demonstrate at the moment data is used that it is accurate, complete, and controlled? Lean governance answers this question by elevating data reconciliation from a back-office cleanup activity to the primary control mechanism for data risk reduction. ... Data profiling can tell you that a value looks unusual within one system. It cannot tell you whether that value aligns with upstream sources, downstream consumers, or parallel representations elsewhere in the enterprise.  ... Lean governance reframes governance as a continual process-control discipline rather than a documentation exercise. It borrows from established control theory: Quality is achieved by controlling the process, not by inspecting outputs after failures. Three principles define this approach: Data risk emerges continuously, not periodically; Controls must operate at the same cadence as data movement; and Reconciliation is the control that proves process integrity. ... Data profiling is inherently inward-looking. It evaluates distributions, ranges, patterns, and anomalies within a single dataset. This is useful for hygiene, but insufficient for assessing risk. Reconciliation is inherently relational. It validates consistency between systems, across transformations, and through the lifecycle of data.


Working with Code Assistants: The Skeleton Architecture

Critical non-functional requirements- such as security, scalability, performance, and authentication- are system-wide invariants that cannot be fragmented. If every vertical slice is tasked with implementing its own authorization stack or caching strategy, the result is "Governance Drift": inconsistent security postures and massive code redundancy. This necessitates a new unifying concept: The Skeleton and The Tissue. ... The Stable Skeleton represents the rigid, immutable structures (Abstract Base Classes, Interfaces, Security Contexts) defined by the human although possibly built by the AI. The Vertical Tissue consists of the isolated, implementation-heavy features (Concrete Classes, Business Logic) generated by the AI. This architecture draws on two classical approaches: actor models and object-oriented inversion of control. It is no surprise that some of the world’s most reliable software is written in Erlang, which utilizes actor models to maintain system stability. Similarly, in inversion of control structures, the interaction between slices is managed by abstract base classes, ensuring that concrete implementation classes depend on stable abstractions rather than the other way around. ... Prompts are soft; architecture is hard. Consequently, the developer must monitor the agent with extreme vigilance. ... To make the "Director" role scalable, we must establish "Hard Guardrails"- constraints baked into the system that are physically difficult for the AI to bypass. These act as the immutable laws of the application.


8-Minute Access: AI Accelerates Breach of AWS Environment

A threat actor gained initial access to the environment via credentials discovered in public Simple Storage Service (S3) buckets and then quickly escalated privileges during the attack, which moved laterally across 19 unique AWS principals, the Sysdig Threat Research Team (TRT) revealed in a report published Tuesday. ... While the speed and apparent use of AI were among the most notable aspects of the attack, the researchers also called out the way that the attacker accessed exposed credentials as a cautionary tale for organizations with cloud environments. Indeed, stolen credentials are often an attacker's initial access point to attack a cloud environment. "Leaving access keys in public buckets is a huge mistake," the researchers wrote. "Organizations should prefer IAM roles instead, which use temporary credentials. If they really want to leverage IAM users with long-term credentials, they should secure them and implement a periodic rotation." Moreover, the affected S3 buckets were named using common AI tool naming conventions, they noted. The attackers actively searched for these conventions during reconnaissance, enabling them to find the credentials quite easily, they said. ... During this privilege-escalation part of the attack — which took a mere eight minutes — the actor wrote code in Serbian, suggesting their origin. Moreover, the use of comments, comprehensive exception handling, and the speed at which the script was written "strongly suggests LLM generation," the researchers wrote.


Ask the Experts: The cloud cost reckoning

According to the 2025 Azul CIO Cloud Trends Survey & Report, 83% of the 300 CIOs surveyed are spending an average of 30% more than what they had anticipated for cloud infrastructure and applications; 43% said their CEOs or boards of directors had concerns about cloud spend. Moreover, 13% of surveyed CIOs said their infrastructure and application costs increased with their cloud deployments, and 7% said they saw no savings at all. Other surveys show CIOs are rethinking their cloud strategies, with "repatriation" -- moving workloads from the cloud back to on-premises -- emerging as a viable option due to mounting costs. ... "At Laserfiche we still have a hybrid environment. So we still have a colocation facility, where we house a lot of our compute equipment. And of course, because of that, we need a DR site because you never want to put all your eggs in that one colo. We also have a lot of SaaS services. We're in a hyperscaler environment for Laserfiche cloud. "But the reason why we do both is because it actually costs us less money to run our own compute in a data center colo environment than it does to be all in on cloud." ,,, "The primary reason why the [cloud] costs have been increasing is because our use of cloud services has become much more sophisticated and much more integrated. "But another reason cloud consumption has increased is we're not as diligent in managing our cloud resources in provisioning and maintaining."


NIST develops playbook for online use cases of digital credentials in financial services

The objective is to develop what a panel description calls a “playbook of standards and best practices that all parties can use to set a high bar for privacy and security.” “We really wanted to be able to understand, what does it actually take for an organization to implement this stuff? How does it fit into workflows? And then start to think as well about what are the benefits to these organizations and to individuals.” “The question became, what was the best online use case?” Galuzzo says. “At which point our colleagues in Treasury kind of said, hey, our online banking customer identification program, how do we make that both more usable and more secure at the same time? And it seemed like a really nice fit. So that brought us to both the kind of scope of what we’re focused on, those online components, and the specific use case of financial services as well.” ... The model, he says, “should allow you to engage remotely, to not have to worry about showing up in person to your closest branch, should allow for a reduction in human error from our side and should allow for reduction in fraud and concern over forged documents.” It should also serve to fulfil the bank’s KYC and related compliance requirements. Beyond the bank, the major objective with mDLs remains getting people to use them. The AAMVA’s Maru points to his agency’s digital trust service, and to its efforts in outreach and education – which are as important in driving adoption as anything on the technical side. 


Designing for the unknown: How flexibility is reshaping data center design

Rapid advances in compute architectures – particularly GPUs and AI-oriented systems – are compressing technology cycles faster than many design and delivery processes can respond. In response, flexibility has shifted from a desirable feature to the core principle of successful data center design. This evolution is reshaping how we think about structure, power distribution, equipment procurement, spatial layout, and long-term operability. ... From a design perspective, this means planning for change across several layers: Structural systems that can accommodate higher equipment loads without reinforcement; Spatial layouts that allow reconfiguration of white space and service zones; and Distribution pathways that support future modifications without disrupting live operations. The objective is not to overbuild for every possible scenario, but to provide a framework that can absorb change efficiently and economically. ... Another emerging challenge is equipment lead time. While delivery periods vary by system, generators can now carry lead times approaching 12 months, particularly for higher capacities, while other major infrastructure components – including transformers, UPS modules, and switchgear – typically fall within the 30- to 40-week range. Delays in securing these items can introduce significant risk when procurement decisions are deferred until late in the design cycle.


Onboarding new AI hires calls for context engineering - here's your 3-step action plan

In the AI world, the institutional knowledge is called context. AI agents are the new rockstar employees. You can onboard them in minutes, not months. And the more context that you can provide them with, the better they can perform. Now, when you hear reports that AI agents perform better when they have accurate data, think more broadly than customer data. The data that AI needs to do the job effectively also includes the data that describes the institutional knowledge: context. ... Your employees are good at interpreting it and filling in the gaps using their judgment and applying institutional knowledge. AI agents can now parse unstructured data, but are not as good at applying judgment when there are conflicts, nuances, ambiguity, or omissions. This is why we get hallucinations. ... The process maps provide visibility into manual activities between applications or within applications. The accuracy and completeness of the documented process diagrams vary wildly. Front-office processes are generally very poor. Back-office processes in regulated industries are typically very good. And to exploit the power of AI agents, organizations need to streamline them and optimize their business processes. This has sparked a process reengineering revolution that mirrors the one in the 1990s. This time around, the level of detail required by AI agents is higher than for humans.


Q&A: How Can Trust be Built in Open Source Security?

The security industry has already seen examples in 2025 of bad actors deploying AI in cyberattacks – I’m concerned that 2026 could bring a Heartbleed- or Log4Shell-style incident involving AI. The pace at which these tools operate may outstrip the ability of defenders to keep up in real time. Another focus for the year ahead: how the Cyber Resilience Act (CRA) will begin to reshape global compliance expectations. Starting in September 2026, manufacturers and open source maintainers must report exploited vulnerabilities and breaches to the EU. This is another step closer to CRA enforcement and other countries like Japan, India and Korea are exploring similar legislation. ... The human side of security should really be addressed just as urgently as the technical side. The way forward involves education, tooling and cultural change. Resilient human defences start with education. Courses from the Linux Foundation like Developing Secure Software and Secure AI/ML‑Driven Software Development equip users with the mindset and skills to make better decisions in an AI‑enhanced world. Beyond formal training, reinforcing awareness creating a vigilant community is critical. The goal is to embed security into culture and processes so that it’s not easily overlooked when new technology or tools roll around. ... Maintainers and the community projects they lead are struggling without support from those that use their software.

Daily Tech Digest - November 05, 2025


Quote for the day:

"Effective leaders know that resources are never the problem; it's always a matter of resourfulness." -- Tony Robbins



AI web browsers are cool, helpful, and utterly untrustworthy

AI browsers can and do interact with everything on a web page: summarizing content, reading emails, composing posts, looking at images, etc., etc. Every element on the page, whether you can see it or not, can hide an attack. A hacker can embed clipboard manipulations or other hacks that traditional browsers would never, not ever, execute automatically. ... AI browser agents can be tricked by hidden instructions embedded in websites via invisible text, images, scripts, or, believe it or not, bad grammar. Your eyes might glaze over at a long run-on sentence, but your AI web browser will read it all, including instructions for an attack hidden in plain sight within it. Such malicious commands are read and executed by the AI. This can lead to exposure of sensitive data, such as emails, authentication tokens, and login details, or triggering unwanted actions, including sending emails, posting to social media, or giving your computer a bad case of malware. ... Privacy is pretty much lost these days anyway, but with AI web browsers, we’ll have all the privacy of a goldfish in a bowl. Since AI browsers monitor our every last move, they process much more granular personal information than conventional browsers. Worrying about cookies and privacy is so 1990s. AI browsers track everything. This is then used to create highly detailed behavioral profiles. What? You didn’t know that AI browsers have built-in memory functions that retain your interactions, browser history, and content from other apps? How do you think they do what they do? Intuition? ESP?


AI can flag the risk, but only humans can close the loop

Companies embedding AI into vendor risk processes need governance structures that ensure transparency, accountability, and compliance. This includes maintaining an approved sources catalogue and requiring either the system or an analyst to validate findings and document the rationale behind them. Data minimization should be built into the design by defining what information is always in scope, such as sanctions or embargo lists, and what is contextually relevant, while excluding protected or sensitive attributes under GDPR and configuring AI to ignore them. Risk assessments should be tiered, calibrating the depth of checks to supplier criticality and geography to avoid unnecessary data collection for low-risk relationships while expanding scope for high-risk scenarios. Human accountability remains essential, with a named individual owning due diligence decisions while AI provides recommendations without replacing human judgment ... Regulators are likely to allow AI use if firms establish strong controls and demonstrate effective oversight, as required by frameworks like the EU AI Act. Responsibility remains with individuals or organizations; liability does not transfer to AI itself. While regulators may struggle to specify detailed technical rules, one clear shift is that “the data volume was too large to review” will no longer be an acceptable defense.


10 top devops practices no one is talking about

“A key, yet overlooked, devops practice is building true shared ownership, which means more than just putting teams in the same chat room,” says Chris Hendrich, associate CTO of AppMod at SADA. “It requires making production reliability and performance a primary success indicator for development, not solely an operational concern. This shared accountability is what builds the organizational competency of creating better, more resilient products.” ... “Baking an integrated code quality and code security approach into your devops workflow isn’t just good practice, it’s essential and a game-changer,” says Donald Fischer, VP at Sonar. “Tackling security alongside quality from day one isn’t merely about early bug detection; it’s about building fundamentally stronger, more trustworthy, and resilient software that is secure by design.” ... “Open source is a no-brainer for developers, but as the ecosystem grows, so do the risks of malware, unsafe AI models, license issues, outdated packages, poor performance, and missing features,” says Mitchell Johnson, CPDO of Sonatype. “Modern devops teams need visibility into what’s getting pulled in, not just to stay secure and compliant, but to make sure they’re building with high-quality components.” ... “Version-controlling database schemas and configurations across development, QA, and production is a quietly powerful devops practice,” says McMillan. 


Cloud Identity Exposure Is 'a Critical Point of Failure'

Attackers keep targeting cloud-based identities to help them bypass endpoint and network defenses, says an August report from cybersecurity firm CrowdStrike. That report counts a 136% increase in cloud intrusions over the preceding 12 months, plus a 40% year-on-year increase in cloud intrusions tied to threat actors likely working for the Chinese government. "The cloud is a priority target for both criminals and nation-state threat actors," said Adam Meyers, head of counter adversary operations at CrowdStrike ... One challenge is that enough cloud identities justify elevated permissions, putting organizations at elevated risk when their credentials are exposed. Take security operations centers and incident response teams. In general, while "the principle of least privilege and minimal manual access" is a best practice, first responders often need immediate and "necessary access," says an August report from Darktrace. "Security teams need access to logs, snapshots and configuration data to understand how an attack unfolded, but giving blanket access opens the door to insider threats, misconfigurations and lateral movement." Rather than always allowing such access, experts recommend using tools that only provide it when needed, for example, through Amazon Web Services' Security Token Service. "Leveraging temporary credentials, such as AWS STS tokens, allows for just-in-time access during an investigation" that can be automatically revoked after, which "reduces the window of opportunity for potential attackers to exploit elevated permissions," Darktrace said.


How Software Development Teams Can Securely and Ethically Deploy AI Tools

Clearly, there is a danger that teams will trust AI too much, as these tools lack a command of the often nuanced context to recognize complex vulnerabilities. They may not fully grasp an application’s authentication or authorization framework, potentially leading to the omission of critical checks. If developers reach a state of complacency in their vigilance, the potential for such risks will only increase. ... Beyond security, team leaders and members must focus more on ethical and even legal considerations: Nearly one-half of software engineers are facing legal, compliance and ethical challenges in deploying AI, according the The AI Impact Report 2025 from LeadDev. The ethical/legal scenarios can take on a highly perplexing nature: A human engineer can read, learn from and write original code from an open-source library. But if an LLM does the same thing, it can be accused of engaging in derivative practices. What’s more, the current legal picture is a murky work in progress. Given the still-evolving judicial conclusions and guidelines, those using third-party AI tools need to ensure they are properly indemnified from potential copyright infringement liability, according to Ropes & Gray, a global law firm that advises clients on intellectual property and data matters. “Risk allocation in contracts concerning or contemplating AI models should be approached very carefully,” according to the firm.


How AI is Revolutionising RegTech and Compliance

Traditional approaches are failing, overwhelmed by increasing regulatory complexity and cross-border requirements. Enter RegTech: a technological revolution transforming how institutions manage regulatory obligations. Advanced artificial intelligence systems now predict compliance breaches weeks before they occur, while blockchain platforms create tamper-proof audit trails that streamline regulatory examinations. ... Natural language processing interprets complex regulatory documents automatically, updating compliance procedures within minutes of regulatory changes. Smart contracts execute compliance actions without human intervention, ensuring consistent adherence to evolving requirements. Leading institutions are achieving remarkable results. Barclays reduced regulatory document processing time from days to minutes using AI-powered analysis. JPMorgan's blockchain settlement system maintains compliance across multiple jurisdictions simultaneously. ... Regulatory-as-a-Service models are democratising access to sophisticated compliance capabilities. Smaller institutions can now access enterprise-grade RegTech through subscription services, reducing compliance costs by up to 50% whilst improving regulatory coverage. Challenges remain significant. Data privacy concerns intensify as compliance systems process vast quantities of sensitive information. Regulatory fragmentation across jurisdictions complicates platform development. 


CEOs Go All-In on AI, But Talent Isn't Ready

Despite the enthusiasm for AI, workforce readiness is still a critical concern. Approximately 74% of Indian CEOs see AI talent readiness as a determinant of their company's future success, yet 34% admit to a widening skills gap. This talent gap is multifaceted; it's not only technical proficiency that's in short supply, but also expertise in blending data science with ethics, regulatory understanding and business acumen. About 26% struggle to find candidates who balance technical skill with collaboration capabilities. ... Regulatory uncertainty still weighs heavily on CEOs' minds, with nearly half of Indian CEOs awaiting clearer regulatory guidance before pushing bold innovation initiatives, compared to only 39% globally. This cautious stance underlines a pragmatic approach to integrating AI amid evolving governance landscapes. About 76% of Indian CEOs worry that slow AI regulation progress could hinder organizational success. Ethical concerns also loom large: 62% of Indian CEOs cite them as significant barriers, slightly higher than the 59% global average, underscoring the importance of embedding trust and governance frameworks alongside technological investments. "This is why culture and leadership are very important. The board of directors must have a degree of AI literacy. There must be psychological safety in the organization. Employees must feel safe and if there's clear governance, it means there is a proactive suggestion to use sanctioned AI that meets security requirements," John Barker


Powering financial services innovation: The critical role of colocation

As AI continues to evolve, its impact on financial services is becoming both broader and deeper – moving beyond high-level innovation into the operational core of the enterprise. Today’s financial institutions face a dual mandate: to accelerate AI adoption in pursuit of competitive advantage, and to do so within the constraints of an increasingly complex digital and regulatory environment. From risk modelling and fraud prevention to real-time analytics and customer personalization, AI is being embedded into mission-critical functions. Realising its full potential, however, isn't solely a matter of algorithms – it hinges on having a data-first strategy, with the right infrastructure and governance in place. ... With exponential data growth presenting challenges, customers gain access to a secure, compliant, resilient, and performant foundation. This foundation enables the implementation of new technologies and seamless orchestration of data flows. Our goal is to simplify data management complexity and serve as the single, trusted, global data center partner for our customers. As organizations optimize their AI strategies, many are exploring cloud repatriation – the process of moving certain workloads from the cloud back to on-premises or colocation environments. This strategic move can be crucial for AI success, as it allows for better control over sensitive data, reduced latency, and improved performance for demanding AI workloads.


Measuring, Reporting, and Improving: Making Resilience Tangible and Accountable

A continuity plan sitting on a shelf provides little assurance of resilience. What matters is whether organizations can demonstrate their strategies work, they are tested, and corrective actions are tracked. Measurement transforms resilience from an abstract concept into quantifiable performance. ... Metrics ensure resilience is not left to chance or anecdote. They provide boards and regulators with evidence of progress, reinforcing accountability at the executive and governance levels. A resilience strategy that cannot be measured cannot be trusted. ... The first step in strengthening measurement is to define resilience key performance indicators (KPIs) and key risk indicators (KRIs). These metrics should evaluate outcomes rather than simply tracking activities, ensuring performance reflects actual readiness. ... Measurement alone is not enough without transparency. Organizations must establish reporting practices that make resilience performance visible to boards, regulators, and, when appropriate, customers. Sharing outcomes openly not only demonstrates accountability but also builds trust and credibility. ... One challenge organizations often encounter when measuring resilience is metric overload. In the effort to capture every detail, leaders may track too many indicators, creating complexity that dilutes focus and makes it difficult to interpret results. 


Bridging the Gap: Why DevOps Teams Are Quietly Becoming the Front Line of Security

For experienced DevOps practitioners, the idea of shifting security left isn't new. Static analysis in CI/CD pipelines, dependency scanning, and Infrastructure as Code (IaC) validation have become the norm. What's changed more recently is the pressure to respond to security events operationally, in addition to preventing them during builds. DevOps teams are adjusting in very real ways. Many are building security context into their logging practices, ensuring that logs are structured for debugging, and also for investigation and audit. Others are automating triage for security alerts using the same mindset they've applied to performance monitoring and deployment pipelines. Perhaps most importantly, DevOps teams are often the first to respond when something unusual shows up in system logs or access patterns. ... Security can be a shared responsibility across teams as long as boundaries and expectations are set. DevOps teams are defining their role in security more clearly by, for example, determining what gets logged, what counts as an anomaly, and who owns the investigation. They're also setting expectations around incident escalation, CVE response timeframes, and compliance requirements. When these lines are clear, security becomes an integrated part of the workflow instead of an extra burden. ... For many DevOps teams, security is part of the daily reality. It comes as a series of small, increasingly frequent interruptions.

Daily Tech Digest - October 19, 2025


; Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How CIOs Can Close the IT Workforce Skills Gap for an AI-First Organization

Deliberately building AI skills among existing talent, rather than searching outside the organization for new hires or leaving skills development to chance, can help develop the desired institutional knowledge and build an IT-resilient workforce. AI-first is a strategic approach that guides the use of AI technology within an enterprise or a unit within it, with the intention of maximizing the benefits from AI. IT organizations must maintain ongoing skills development to be successful as an AI-first organization. ... In developing the future-state competency map, CIOs must include AI-specific skills and competencies, ensuring each role has measurable expectations aligned with the company’s strategic objectives related to AI. CIO must also partner with HR to design and establish AI literacy programs. While HR leaders are experts in scaling learning initiatives and standardizing tools, CIOs have more insight into foundational AI skills, training, and technical support required in the enterprise. CIOs should regularly review whether their teams’ AI capabilities contribute to faster product launches or improved customer insights. ... Addressing employees’ key concerns is a critical step for any AI change management initiative to be successful. AI is fundamentally changing traditional workplace operating models by democratizing access to technology, generating insights, and changing the relationship between people and technology.


20 Strategies To Strengthen Your Crisis Management Playbook

The regular review and refinement of protocols ensures alignment when a scenario arises. At our company, we centralize contacts, prepare for a range of scenarios and set outreach guidelines. This enables rapid response, timely updates and meaningful support, which safeguards trust and strengthens relationships with employees, stakeholders and clients. ... Unintended consequences often arise when stakeholder expectations are left out of crisis planning. Leaders should bake audience insights into their playbooks early—not after headlines hit. Anticipating concerns builds trust and gives you the clarity and credibility to lead through the tough moments. ... Know when to do nothing. Sometimes the instinct to respond immediately leads to increased confusion and puts your brand even further under the microscope. The best crisis managers know when to stop, see how things play out and respond accordingly (if at all), all while preparing for a variety of scenarios behind the scenes. ... Act like a board of directors. A crisis is not an event; it's a stress test of brand, enterprise and reputation infrastructure and resilience. Crisis plans must align with business continuity, incident response and disaster recovery plans. Marketing and communications must co-lead with the exec team, legal, ops and regulatory to guide action before commercial, brand equity and reputation risk escalates.


Abstract or die: Why AI enterprises can't afford rigid vector stacks

Without portability, organizations stagnate. They have technical debt from recursive code paths, are hesitant to adopt new technology and cannot move prototypes to production at pace. In effect, the database is a bottleneck rather than an accelerator. Portability, or the ability to move underlying infrastructure without re-encoding the application, is ever more a strategic requirement for enterprises rolling out AI at scale. ... Instead of having application code directly bound to some specific vector backend, companies can compile against an abstraction layer that normalizes operations like inserts, queries and filtering. This doesn't necessarily eliminate the need to choose a backend; it makes that choice less rigid. Development teams can start with DuckDB or SQLite in the lab, then scale up to Postgres or MySQL for production and ultimately adopt a special-purpose cloud vector DB without having to re-architect the application. ... What's happening in the vector space is one example of a bigger trend: Open-source abstractions as critical infrastructure; In data formats: Apache Arrow; In ML models: ONNX; In orchestration: Kubernetes; In AI APIs: Any-LLM and other such frameworks. These projects succeed, not by adding new capability, but by removing friction. They enable enterprises to move more quickly, hedge bets and evolve along with the ecosystem. Vector DB adapters continue this legacy, transforming a high-speed, fragmented space into infrastructure that enterprises can truly depend on. ...


AWS's New Security VP: A Turning Point for AI Cybersecurity Leadership?

"As we move forward into 2026, the breadth and depth of AI opportunities, products, and threats globally present a paradigm shift in cyber defense," Lohrmann said. He added that he was encouraged by AWS's recognition of the need for additional focus and attention on these cyberthreats. ... "Agentic AI attackers can now operate with a 'reflection loop' so they are effectively self-learning from failed attacks and modifying their attack approach automatically," said Simon Ratcliffe, fractional CIO at Freeman Clarke. "This means the attacks are faster and there are more of them … putting overwhelming pressure on CISOs to respond." ... "I think the CISO's role will evolve to meet the broader governance ecosystem, bringing together AI security specialists, data scientists, compliance officers, and ethics leads," she said, adding cybersecurity's mantra that AI security is everyone's business. "But it demands dedicated expertise," she said. "Going forward, I hope that organizations treat AI governance and assurance as integral parts of cybersecurity, not siloed add-ons." ... In Liebig's opinion, the future of cybersecurity leadership looks less hierarchical than it does now. "As for who owns that risk, I believe the CISO remains accountable, but new roles are emerging to operationalize AI integrity -- model risk officers, AI security architects, and governance engineers," he explained. "The CISO's role should expand horizontally, ensuring AI aligns to enterprise trust frameworks, not stand apart from them."


The Top 5 Technology Trends For 2026

In recent years, we've seen industry, governments, education and everyday folk scrambling to adapt to the disruptive impact of AI. But by 2026, we're starting to get answers to some of the big questions around its effect on jobs, business and day-to-day life. Now, the focus shifts from simply reacting to reinventing and reshaping in order to find our place in this brave, different and sometimes frightening new world.  ... Rather than simply answering questions and generating content, agents take action on our behalf, and in 2026, this will become an increasingly frequent and normal occurrence in everyday life. From automating business decision-making to managing and coordinating hectic family schedules, AI agents will handle the “busy work” involved in planning and problem-solving, freeing us up to focus on the big picture or simply slowing down and enjoying life. ... Quantum computing harnesses the strange and seemingly counterintuitive behavior of particles at the sub-atomic level to accomplish many complex computing tasks millions of times faster than "classic" computers. For the last decade, there's been excitement and hype over their performance in labs and research environments, but in 2026, we are likely to see further adoption in the real world. While this trend might not appear to noticeably affect us in our day-to-day lives, the impact on business, industry and science will begin to take shape in noticeable ways.


How Successful CTOs Orchestrate Business Results at Every Stage

As companies mature, their technical needs shift from building for the present to a long-term vision, strategic partnerships, and leveraging technology to drive business goals. The Strategist CTO combines deep technical acumen with business acumen and a deep understanding of the customer journey. This leader collaborates with other executives on strategic planning, but always through the lens of where customers are heading, not strictly where technology is going.  ... For large enterprises with complex ecosystems and large customer bases, stability, security, and operational efficiency are paramount. This is where the Guardian CTO safeguards the customer experience through technical excellence.This leader oversees all aspects of technical infrastructure, ensuring the reliability, security, and availability of core technology assets with a clear understanding that every decision directly impacts customer trust. ... While these operational models often align with company growth stages, they aren't rigid. A company's needs can shift rapidly due to market conditions, competitive pressures, or unexpected challenges, and customer expectations can evolve just as quickly. ... The most successful companies create environments where technical leadership evolves in response to changing business needs, empowering technical leaders to pivot their focus from building to strategizing, or from innovating to safeguarding, as circumstances demand.


Financial services seek balance of trust, inclusion through face biometrics advances

Advances in the flexibility of face biometric liveness, deepfake detection and cross-sectoral collaboration represent the latest measures against fraud in remote financial services. A digital bank in the Philippines is integrating iProov’s face biometrics and liveness detection, OneConnect and a partner are entering a sandbox to work on protecting against deepfakes, and an event held by Facephi in Mexico explored the challenges of financial services trying to maintain digital trust while advancing inclusion. ... The Philippine digital bank will deploy advanced liveness detection tools as part of a new risk-based authentication strategy. “Our mission is to uplift the lives of all Filipinos through a secure, trusted, and accessible digital bank for all Filipinos, and that requires deploying resilient infrastructure capable of addressing sophisticated fraud,” said Russell Hernandez, chief information security officer at UnionDigital Bank. “As we shift toward risk-based authentication, we need a flexible and future-ready solution. iProov’s internationally proven ability to deliver ease of use, speed, and high security assurance – backed by reliable vendor support – ensures we can evolve our fraud defenses while sustaining customer trust and confidence.” ... The Mexican government has launched several initiatives to standardize digital identity infrastructure, including Llave MX — a single sign-on platform for public services — and the forthcoming National Digital Identity Document, designed to harmonize verification across sectors.


Why context, not just data, will define the future of AI in finance

Raw intelligence in AI and its ability to crunch numbers and process data is only one part of the equation. What it fundamentally lacks is wisdom, which comes from context. In areas like personal finance, building powerful models with deep domain knowledge is critical. The challenges range from misinterpretation of data to regulatory oversights that directly affect value for customers. That’s why at Intuit, we put “context at the core of AI.” This means moving beyond generic datasets to build specialised Financial Large Language Models (LLMs) trained on decades of anonymised financial expertise. It’s about understanding the interconnected journey of our customers across our ecosystem—from the freelancer managing invoices in QuickBooks to that same individual filing taxes with TurboTax, to them monitoring their financial health on Credit Karma. ... In the age of GenAI, craftsmanship in engineering is being redefined. It’s no longer just about writing every line of code or building models from scratch, but about architecting robust, extensible systems that empower others to innovate. The very soul of engineering is transcending code to become the art of architecture. The measure of excellence is no longer found in the meticulous construction of every model, but in the visionary design of systems that empower domain experts to innovate. With tools like GenStudio and GenUX abstracting complexity, the engineer’s role isn’t diminished but elevated. They evolve from builders of applications to architects of innovation ecosystems. 


The modernization mirage: CIOs must see through it to play the long game

Enterprise architecture, in too many organizations, has been reduced to frameworks: TOGAF, Zachman, FEAF. These models provide structure but rarely move capital or inspire investor trust. Boards don’t want frameworks. They want influence. That’s why I developed the Architecture Influence Flywheel — a practical model I use in board and transformation discussions. It rests on three pivots - Outcomes: Every architectural choice must tie directly to board-level priorities — growth, resilience, efficiency. ... Relationships: CIOs must serve as business-technology translators. Express progress not in technical jargon, but in investor language — return on capital, return on innovation, margin expansion and risk mitigation. ... Visible wins: Influence grows through undeniable demonstrations. A system that cuts onboarding time by 40%, an AI model that reduces fraud losses or an audit process that clears in half the time — these visible wins build momentum. ... Technologies rise and fall. Frameworks evolve. Titles shift. But one principle endures: What leaders tolerate defines their legacy. Playing the long game requires CIOs to ask uncomfortable questions:Will we tolerate AI models we cannot explain to regulators? Will we tolerate unchecked cloud sprawl without financial discipline? Will we tolerate compliance as a box-ticking exercise rather than a growth enabler? 


What Is Cybersecurity Platformization?

Cybersecurity platformization is a strategic response to this complexity. It’s the move from a collection of disparate point solutions to a single, unified platform that integrates multiple security functions. Dickson describes it as the “canned integration of security tools so that they work together holistically to make the installation, maintenance and operation easier for the end customer across various tools in the security stack.” ... The most significant hidden cost of a fragmented, multitool security strategy is labor. Managing disconnected tools is a resource strain on an organization, as it requires individuals with specialized skills for each tool. This includes the labor-intensive task of managing API integrations and manually coding “shims,” or integrations to translate data between different tools, which often have separate protocols and proprietary interfaces, Dukes says. Beyond the cost of personnel, there’s the operational complexity.  ... One of the most immediate benefits of adopting a platform approach is cost reduction. This includes not only the reduction in licensing fees but also a reduction in the operational complexity and the number of specialized employees needed. ... Another key benefit is the well-worn concept of a “single pane of glass,” a single dashboard that enables IT security teams to have easier management and reporting. Instead of multiple tools with different interfaces and data formats, a unified platform streamlines everything into a single, cohesive view.

Daily Tech Digest - September 17, 2025


Quote for the day:

“We are all failures - at least the best of us are.” -- J.M. Barrie


AI Governance Reaches an Inflection Point

AI adoption has made privacy, compliance, and risk management dramatically more complex. Unlike traditional software, AI models are probabilistic, adaptive, and capable of generating outcomes that are harder to predict or explain. As Blake Brannon, OneTrust’s chief innovation officer, summarized: “The speed of AI innovation has exposed a fundamental mismatch. While AI projects move at unprecedented speed, traditional governance processes are operating at yesterday’s pace.” ... These dynamics explain why, several years ago, Dresner Advisory Services shifted its research lens from data governance to data and analytics (D&A) governance. AI adoption makes clear that organizations must treat governance not as a siloed discipline, but as an integrated framework spanning data, analytics, and intelligent systems. D&A governance is broader in scope than traditional data governance. It encompasses policies, standards, decision rights, procedures, and technologies that govern both data and analytic content across the organization. ... The modernization is not just about oversight — it is about rethinking priorities. Survey respondents identify data quality and controlled access as the most critical enablers of AI success. Security, privacy, and the governance of data models follow closely behind. Collectively, these priorities reflect an emerging consensus: The real foundation of successful AI is not model architecture, but disciplined, transparent, and enforceable governance of data and analytics.


Shai-Hulud Supply Chain Attack: Worm Used to Steal Secrets, 180+ NPM Packages Hit

The packages were injected with a post-install script designed to fetch the TruffleHog secret scanning tool to identify and steal secrets, and to harvest environment variables and IMDS-exposed cloud keys. The script also validates the collected credentials and, if GitHub tokens are identified, it uses them to create a public repository and dump the secrets into it. Additionally, it pushes a GitHub Actions workflow that exfiltrates secrets from each repository to a hardcoded webhook, and migrates private repositories to public ones labeled ‘Shai-Hulud Migration’. ... What makes the attack different is malicious code that uses any identified NPM token to enumerate and update the packages that a compromised maintainer controls, to inject them with the malicious post-install script. “This attack is a self-propagating worm. When a compromised package encounters additional NPM tokens in a victim environment, it will automatically publish malicious versions of any packages it can access,” Wiz notes. ... The security firm warns that the self-spreading potential of the malicious code will likely keep the campaign alive for a few more days. To avoid being infected, users should be wary of any packages that have new versions on NPM but not on GitHub, and are advised to pin dependencies to avoid unexpected package updates.


Scattered Spider Tied to Fresh Attacks on Financial Services

The financial services sector appears to remain at high risk of attack by the group. Over the past two months, elements of Scattered Spider registered "a coordinated set of ticket-themed phishing domains and Salesforce credential harvesting pages" designed to target the financial services sector as well as providers of technology services, suggesting a continuing focus on those sectors, ReliaQuest said. Registering lookalike domain names is a repeat tactic used by many attackers, from Chinese nation-state groups to Scattered Spider. Such URLs are designed to trick victims into thinking a link that they visit is legitimate. ... Members of Scattered Spider and ShinyHunters excel at social engineering, including voice phishing, aka vishing. This often involves tricking a help desk into believing the attacker is a legitimate employee, leading to passwords being reset and single sign-on tokens intercepted. In some cases, experts say, the attackers trick a victim into visiting lookalike support panels they've created which are part of a phishing attack. Since the middle of the year, members of Scattered Spider have breached British retailers Marks & Spencer, followed by American retailers such as Adidas and Victoria's Secret. The group has been targeting American insurers such as Aflac and Allianz Life, global airlines including Air France, KLM and Qantas, and technology giants Cisco and Google.


Tech’s Tarnished Image Spurring Rise of Chief Trust Officers

In today’s highly competitive world, organizations need every advantage they can get, which can include trust. “Part of selecting vendors, whether it is an official part of the process or not, is evaluating the trust you have in that vendor,” explained Erich Kron ... “By signifying someone in a high level of leadership as the person responsible and accountable for culminating and maintaining that level of trust, the organization may gain significant competitive advantages through loyalty and through competitive means,” he told TechNewsWorld. “The chief trust officer role is a visible, external and internal sign of an organization’s commitment to trust,” added Jim Alkove. ... “It’s an explicit statement of intent to your employees, to your customers, to your partners, to governments that your company cares so much about trust and that you’ve announced that there’s a leader responsible for it,” Alkove, a former CTrO at Salesforce, told TechNewsWorld. ... Forrester noted that trust has become a revenue problem for B2B software companies, and CTrOs provide a means to resolve issues that could stall deals and impact revenue. “When procurement and third-party risk management teams identified issues with a business partner’s cybersecurity posture, contracts stalled,” the report explained. “These issues reflected on the competence, consistency, and dependability of the potential partner. Chief trust officers and their teams step in to remove those obstacles and move deals along.”


AI ROI Isn't About Cost Savings Anymore

The traditional metrics of ROI, including cost savings, headcount reduction and revenue uplift, are no longer sufficient. Let's start with the obvious challenge: ROI today is often measured vertically, at the use-case or project level, tracking model accuracy or incremental sales. Although necessary, this vertical lens misses the broader picture. What's needed is a horizontal perspective on ROI - metrics that capture how investments in cloud infrastructure, data engineering and cross-silo integration accelerate every subsequent AI initiative. ... When data is cleaned and standardized for one use case, the next model development becomes faster and more reliable. Yet these productivity gains rarely appear in ROI calculations. The same applies to interoperability across functions. For example, predictive models developed for finance may inform HR or marketing strategies, multiplying AI's value in ways traditional KPIs overlook. ... Emerging models, such as Gartner's multidimensional AI measurement frameworks, and India's evolving AI governance standards offer early guidance. But turning them into practice requires rigor - from assessing how data improvements accelerate downstream use cases to quantifying cross-team synergies, and even recognizing softer outcomes like trust and employee well-being. "AI is neither hype nor savior - it is a tool," Gupta said.


How a fake ICS network can reveal real cyberattacks

Most ICS honeypots today are low interaction, using software to simulate devices like programmable logic controllers (PLCs). These setups are useful for detecting basic threats but are easy for skilled attackers to identify. Once attackers realize they are interacting with a decoy, they stop revealing their tactics. ... ICSLure takes a different approach. It combines actual PLC hardware with realistic simulations of physical processes, such as the movement of machinery on a factory floor. This creates what the researchers call a very high interaction environment. For attackers, ICSLure feels like a live industrial network. For defenders, it provides more accurate data about how adversaries move inside an ICS environment and the techniques they use to disrupt operations. Angelo Furfaro, one of the researchers behind ICSLure, told Help Net Security that deploying this type of environment safely requires careful planning. “The honeypot infrastructure must be completely segregated from any production network through dedicated VLANs, firewalls, and demilitarized zones, ensuring that malicious activity cannot spill over into critical operations,” he said. “PLCs should only interact with simulated plants or digital twins, eliminating the possibility of executing harmful commands on physical processes.”


The Biggest Barriers Blocking Agentic AI Adoption

To achieve the critical mass of adoption needed to fuel mainstream adoption of AI agents, we have to be able to trust them. This is true on several levels; we have to trust them with the sensitive and personal data they need to make decisions on our behalf, and we have to trust that the technology works, our efforts aren’t hampered by specific AI flaws like hallucinations. And if we are trusting it to make serious decisions, such as buying decisions, we have to trust that it will make the right ones and not waste our money. ... Another problem is that agentic AI relies on the ability of agents to interact and operate with third-party systems, and many third-party systems aren’t set up to work with this yet. Computer-using agents (such as OpenAI Operator and Manus AI) circumvent this by using computer vision to understand what’s on a screen. This means they can use many websites and apps just like we can, whether or not they’re programmed to work with them. ... Finally, there are wider cultural concerns that go beyond technology. Some people are uncomfortable with the idea of letting AI make decisions for them, regardless of how routine or mundane those decisions may be. Others are nervous about the impact that AI will have on jobs, society or the planet. These are all totally valid and understandable concerns and can’t be dismissed as barriers to be overcome simply through top-down education and messaging.


The Legal Perils of Dark Patterns in India: Intersection between Data Privacy and Consumer Protection

Dark patterns are any deceptive design pattern using UI or UX that misleads or tricks users by subverting their autonomy and manipulating them into taking actions which otherwise they would not have taken. Coined by UX designer Harry Brignull, who registered a website called darkpatterns.org, which he intended to be designed like a library wherein all types of such UX/UI designs are showcased in public interest, hence the name “dark pattern” came into being. ... The CCPA can order for recall goods, withdraw services or even stop such services in instance it finds that an entity is engaging in dark pattern as per Section 20 of the CP Act, in instance of breach of guidelines. ... By their very design, some patterns harm the user in two ways: first, by manipulating them into choices they would not have otherwise made; and second, by compelling the collection or processing of personal data in ways that breach data protection requirements. In such cases, the entity is not only exploiting the individual but is also failing to meet its legal duties under the DPDPA thereby creating exposure under both the CP Act and the DPDPA. ... Under the DPDPA, the stakes are now significantly higher. The Data Protection Board of India has the authority to impose financial penalties of up to Rs 50 crores for not obtaining purposeful consent or for disregarding technical and organisational measures.


In Order to Scale AI with Confidence, Enterprise CTOs Must Unlock the Value of Unstructured Data

Over the past two years, we’ve witnessed rapid advancements in Large Language Models (LLMs). As these models become increasingly powerful–and more commoditized–the true competitive edge for enterprises will lie in how effectively they harness their internal data. Unstructured content forms the foundation of modern AI systems, making it essential for organizations to build strong unstructured data infrastructure to succeed in the AI-driven era. This is what we mean by an unstructured data foundation: the ability for companies to rapidly identify what unstructured data exists across the organization, assess its quality, sensitivity, and safety, enrich and contextualize it to improve AI performance, and ultimately create a governed system for generating and maintaining high-quality data products at scale. In 2025, unstructured data is as much about quality as it is about quantity. “Quality” in the context of unstructured data remains largely uncharted territory. Companies need clear frameworks to assess dimensions like relevance, freshness, and duplication. Over the past six years, the volume and variety of unstructured data–and the number of AI applications that generate or depend on it–have exploded. Many have called it the largest and most valuable source of data within an organization, and I’d agree–especially as AI becomes increasingly central to how enterprises operate. Here’s why.


Scaling Databases for Large Multi-Tenant Applications

Building and maintaining multi-tenant database applications is one of the more challenging aspects of being a developer, administrator or analyst. Until the debut of AI systems, with their power-hungry GPUs, database workloads represented the most expensive workloads because of their demands on memory, CPU and storage performance to work effectively. ... Sharding is a data management technique that effectively partitions data across multiple databases. At its center, you need something that I like to call a command and control database. Still, I've also seen it called a shard-map manager or a router database. This database contains the metadata around the shards and your environment, and routes application calls to the appropriate shard or database. ... If you are working on the Microsoft stack, I'm going to give a shout out to elastic database tools . This .NET library gives you all the tools like shard-map management, the ability to do data-dependent routing, and doing multi-shard queries as needed. Additionally, consider the ability to add and remove shards to match shifting demands. ... Some other tooling you need to think about in planning, are how to execute schema changes across your partitions. Database DevOps is a mature practice, but rolling out changes across is fleet of databases requires careful forethought and operations.