Daily Tech Digest - September 17, 2025


Quote for the day:

“We are all failures - at least the best of us are.” -- J.M. Barrie


AI Governance Reaches an Inflection Point

AI adoption has made privacy, compliance, and risk management dramatically more complex. Unlike traditional software, AI models are probabilistic, adaptive, and capable of generating outcomes that are harder to predict or explain. As Blake Brannon, OneTrust’s chief innovation officer, summarized: “The speed of AI innovation has exposed a fundamental mismatch. While AI projects move at unprecedented speed, traditional governance processes are operating at yesterday’s pace.” ... These dynamics explain why, several years ago, Dresner Advisory Services shifted its research lens from data governance to data and analytics (D&A) governance. AI adoption makes clear that organizations must treat governance not as a siloed discipline, but as an integrated framework spanning data, analytics, and intelligent systems. D&A governance is broader in scope than traditional data governance. It encompasses policies, standards, decision rights, procedures, and technologies that govern both data and analytic content across the organization. ... The modernization is not just about oversight — it is about rethinking priorities. Survey respondents identify data quality and controlled access as the most critical enablers of AI success. Security, privacy, and the governance of data models follow closely behind. Collectively, these priorities reflect an emerging consensus: The real foundation of successful AI is not model architecture, but disciplined, transparent, and enforceable governance of data and analytics.


Shai-Hulud Supply Chain Attack: Worm Used to Steal Secrets, 180+ NPM Packages Hit

The packages were injected with a post-install script designed to fetch the TruffleHog secret scanning tool to identify and steal secrets, and to harvest environment variables and IMDS-exposed cloud keys. The script also validates the collected credentials and, if GitHub tokens are identified, it uses them to create a public repository and dump the secrets into it. Additionally, it pushes a GitHub Actions workflow that exfiltrates secrets from each repository to a hardcoded webhook, and migrates private repositories to public ones labeled ‘Shai-Hulud Migration’. ... What makes the attack different is malicious code that uses any identified NPM token to enumerate and update the packages that a compromised maintainer controls, to inject them with the malicious post-install script. “This attack is a self-propagating worm. When a compromised package encounters additional NPM tokens in a victim environment, it will automatically publish malicious versions of any packages it can access,” Wiz notes. ... The security firm warns that the self-spreading potential of the malicious code will likely keep the campaign alive for a few more days. To avoid being infected, users should be wary of any packages that have new versions on NPM but not on GitHub, and are advised to pin dependencies to avoid unexpected package updates.
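The article's advice is to watch for packages that run code at install time and to pin dependencies. As a minimal sketch (not from the article — package names and manifests below are made up for illustration), this is what checking manifests for lifecycle install hooks looks like:

```python
# Flag npm package manifests that declare lifecycle install scripts --
# the hook this worm abused via a malicious post-install script.
# The manifests below are hypothetical examples, not real packages.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def has_install_script(manifest: dict) -> bool:
    """True if the manifest will run code automatically at install time."""
    return bool(INSTALL_HOOKS & set(manifest.get("scripts", {})))

def flag_suspicious(manifests: list[dict]) -> list[str]:
    """Return names of packages that execute scripts on 'npm install'."""
    return [m["name"] for m in manifests if has_install_script(m)]

benign = {"name": "left-pad-clone", "scripts": {"test": "node test.js"}}
risky = {"name": "some-utility", "scripts": {"postinstall": "node bundle.js"}}

print(flag_suspicious([benign, risky]))  # -> ['some-utility']
```

In practice, npm's built-in `--ignore-scripts` flag disables these hooks at install time, and pinning exact versions in a lockfile prevents a compromised maintainer account from silently pushing a new malicious release into your build.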


Scattered Spider Tied to Fresh Attacks on Financial Services

The financial services sector appears to remain at high risk of attack by the group. Over the past two months, elements of Scattered Spider registered "a coordinated set of ticket-themed phishing domains and Salesforce credential harvesting pages" designed to target the financial services sector as well as providers of technology services, suggesting a continuing focus on those sectors, ReliaQuest said. Registering lookalike domain names is a repeat tactic used by many attackers, from Chinese nation-state groups to Scattered Spider. Such URLs are designed to trick victims into thinking a link that they visit is legitimate. ... Members of Scattered Spider and ShinyHunters excel at social engineering, including voice phishing, aka vishing. This often involves tricking a help desk into believing the attacker is a legitimate employee, leading to passwords being reset and single sign-on tokens intercepted. In some cases, experts say, the attackers trick a victim into visiting lookalike support panels they've created which are part of a phishing attack. Since the middle of the year, members of Scattered Spider have breached British retailer Marks & Spencer, followed by other retailers such as Adidas and Victoria's Secret. The group has been targeting American insurers such as Aflac and Allianz Life, global airlines including Air France, KLM and Qantas, and technology giants Cisco and Google.


Tech’s Tarnished Image Spurring Rise of Chief Trust Officers

In today’s highly competitive world, organizations need every advantage they can get, which can include trust. “Part of selecting vendors, whether it is an official part of the process or not, is evaluating the trust you have in that vendor,” explained Erich Kron ... “By signifying someone in a high level of leadership as the person responsible and accountable for cultivating and maintaining that level of trust, the organization may gain significant competitive advantages through loyalty and through competitive means,” he told TechNewsWorld. “The chief trust officer role is a visible, external and internal sign of an organization’s commitment to trust,” added Jim Alkove. ... “It’s an explicit statement of intent to your employees, to your customers, to your partners, to governments that your company cares so much about trust and that you’ve announced that there’s a leader responsible for it,” Alkove, a former CTrO at Salesforce, told TechNewsWorld. ... Forrester noted that trust has become a revenue problem for B2B software companies, and CTrOs provide a means to resolve issues that could stall deals and impact revenue. “When procurement and third-party risk management teams identified issues with a business partner’s cybersecurity posture, contracts stalled,” the report explained. “These issues reflected on the competence, consistency, and dependability of the potential partner. Chief trust officers and their teams step in to remove those obstacles and move deals along.”


AI ROI Isn't About Cost Savings Anymore

The traditional metrics of ROI, including cost savings, headcount reduction and revenue uplift, are no longer sufficient. Let's start with the obvious challenge: ROI today is often measured vertically, at the use-case or project level, tracking model accuracy or incremental sales. Although necessary, this vertical lens misses the broader picture. What's needed is a horizontal perspective on ROI - metrics that capture how investments in cloud infrastructure, data engineering and cross-silo integration accelerate every subsequent AI initiative. ... When data is cleaned and standardized for one use case, the next model development becomes faster and more reliable. Yet these productivity gains rarely appear in ROI calculations. The same applies to interoperability across functions. For example, predictive models developed for finance may inform HR or marketing strategies, multiplying AI's value in ways traditional KPIs overlook. ... Emerging models, such as Gartner's multidimensional AI measurement frameworks, and India's evolving AI governance standards offer early guidance. But turning them into practice requires rigor - from assessing how data improvements accelerate downstream use cases to quantifying cross-team synergies, and even recognizing softer outcomes like trust and employee well-being. "AI is neither hype nor savior - it is a tool," Gupta said.


How a fake ICS network can reveal real cyberattacks

Most ICS honeypots today are low-interaction, using software to simulate devices like programmable logic controllers (PLCs). These setups are useful for detecting basic threats but are easy for skilled attackers to identify. Once attackers realize they are interacting with a decoy, they stop revealing their tactics. ... ICSLure takes a different approach. It combines actual PLC hardware with realistic simulations of physical processes, such as the movement of machinery on a factory floor. This creates what the researchers call a very high interaction environment. For attackers, ICSLure feels like a live industrial network. For defenders, it provides more accurate data about how adversaries move inside an ICS environment and the techniques they use to disrupt operations. Angelo Furfaro, one of the researchers behind ICSLure, told Help Net Security that deploying this type of environment safely requires careful planning. “The honeypot infrastructure must be completely segregated from any production network through dedicated VLANs, firewalls, and demilitarized zones, ensuring that malicious activity cannot spill over into critical operations,” he said. “PLCs should only interact with simulated plants or digital twins, eliminating the possibility of executing harmful commands on physical processes.”


The Biggest Barriers Blocking Agentic AI Adoption

To achieve the critical mass needed to fuel mainstream adoption of AI agents, we have to be able to trust them. This is true on several levels: we have to trust them with the sensitive and personal data they need to make decisions on our behalf, and we have to trust that the technology works and that our efforts aren’t hampered by specific AI flaws like hallucinations. And if we are trusting it to make serious decisions, such as buying decisions, we have to trust that it will make the right ones and not waste our money. ... Another problem is that agentic AI relies on the ability of agents to interact and operate with third-party systems, and many third-party systems aren’t set up to work with this yet. Computer-using agents (such as OpenAI Operator and Manus AI) circumvent this by using computer vision to understand what’s on a screen. This means they can use many websites and apps just like we can, whether or not they’re programmed to work with them. ... Finally, there are wider cultural concerns that go beyond technology. Some people are uncomfortable with the idea of letting AI make decisions for them, regardless of how routine or mundane those decisions may be. Others are nervous about the impact that AI will have on jobs, society or the planet. These are all totally valid and understandable concerns and can’t be dismissed as barriers to be overcome simply through top-down education and messaging.


The Legal Perils of Dark Patterns in India: Intersection between Data Privacy and Consumer Protection

Dark patterns are deceptive design patterns in UI or UX that mislead or trick users by subverting their autonomy and manipulating them into taking actions they would otherwise not have taken. The term was coined by UX designer Harry Brignull, who registered the website darkpatterns.org, intending it to serve as a public-interest library showcasing all types of such UX/UI designs; hence the name “dark pattern” came into being. ... Under Section 20 of the CP Act, the CCPA can order the recall of goods, the withdrawal of services, or even the discontinuation of such services if it finds that an entity is engaging in dark patterns in breach of the guidelines. ... By their very design, some patterns harm the user in two ways: first, by manipulating them into choices they would not have otherwise made; and second, by compelling the collection or processing of personal data in ways that breach data protection requirements. In such cases, the entity is not only exploiting the individual but is also failing to meet its legal duties under the DPDPA, thereby creating exposure under both the CP Act and the DPDPA. ... Under the DPDPA, the stakes are now significantly higher. The Data Protection Board of India has the authority to impose financial penalties of up to Rs 50 crore for not obtaining purposeful consent or for disregarding technical and organisational measures.


In Order to Scale AI with Confidence, Enterprise CTOs Must Unlock the Value of Unstructured Data

Over the past two years, we’ve witnessed rapid advancements in Large Language Models (LLMs). As these models become increasingly powerful–and more commoditized–the true competitive edge for enterprises will lie in how effectively they harness their internal data. Unstructured content forms the foundation of modern AI systems, making it essential for organizations to build strong unstructured data infrastructure to succeed in the AI-driven era. This is what we mean by an unstructured data foundation: the ability for companies to rapidly identify what unstructured data exists across the organization, assess its quality, sensitivity, and safety, enrich and contextualize it to improve AI performance, and ultimately create a governed system for generating and maintaining high-quality data products at scale. In 2025, unstructured data is as much about quality as it is about quantity. “Quality” in the context of unstructured data remains largely uncharted territory. Companies need clear frameworks to assess dimensions like relevance, freshness, and duplication. Over the past six years, the volume and variety of unstructured data–and the number of AI applications that generate or depend on it–have exploded. Many have called it the largest and most valuable source of data within an organization, and I’d agree–especially as AI becomes increasingly central to how enterprises operate. Here’s why.
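The quality dimensions named above — relevance, freshness, and duplication — can be made concrete with simple corpus checks. A minimal sketch, with field names, dates, and thresholds assumed for illustration (none come from the article), for two of those dimensions:

```python
# Minimal checks for two unstructured-data quality dimensions:
# duplication (via content fingerprinting) and freshness (via an age cutoff).
# Document fields and the 365-day threshold are illustrative assumptions.
import hashlib
from datetime import datetime, timedelta

def content_fingerprint(text: str) -> str:
    """Hash whitespace/case-normalized text so near-identical copies collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def find_duplicates(docs: list[dict]) -> list[str]:
    """Return IDs of documents whose content repeats an earlier document."""
    seen, dupes = set(), []
    for doc in docs:
        fp = content_fingerprint(doc["text"])
        if fp in seen:
            dupes.append(doc["id"])
        seen.add(fp)
    return dupes

def stale_docs(docs: list[dict], now: datetime, max_age_days: int = 365) -> list[str]:
    """Return IDs of documents older than the freshness threshold."""
    cutoff = now - timedelta(days=max_age_days)
    return [d["id"] for d in docs if d["updated"] < cutoff]

docs = [
    {"id": "a", "text": "Quarterly results summary.", "updated": datetime(2025, 6, 1)},
    {"id": "b", "text": "quarterly  results summary.", "updated": datetime(2023, 1, 1)},
]
print(find_duplicates(docs))                        # -> ['b']
print(stale_docs(docs, now=datetime(2025, 9, 17)))  # -> ['b']
```

Relevance is harder to score mechanically and typically needs embedding similarity against the target use case, which is why the article calls quality frameworks for unstructured data largely uncharted territory.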


Scaling Databases for Large Multi-Tenant Applications

Building and maintaining multi-tenant database applications is one of the more challenging aspects of being a developer, administrator or analyst. Until the debut of AI systems, with their power-hungry GPUs, database workloads represented the most expensive workloads because of their demands on memory, CPU and storage performance. ... Sharding is a data management technique that partitions data across multiple databases. At its center, you need something I like to call a command and control database, though I've also seen it called a shard-map manager or a router database. This database contains the metadata about the shards and your environment, and routes application calls to the appropriate shard or database. ... If you are working on the Microsoft stack, I'm going to give a shout out to the elastic database tools. This .NET library gives you tools like shard-map management, data-dependent routing, and multi-shard queries as needed. Additionally, consider the ability to add and remove shards to match shifting demands. ... Something else you need to think about in planning is how to execute schema changes across your partitions. Database DevOps is a mature practice, but rolling out changes across a fleet of databases requires careful forethought and operations.
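The shard-map manager described above can be sketched in a few lines: a lookup table maps each tenant to the shard that holds its data, and multi-shard queries fan out over the full shard list. Tenant IDs and connection-string names below are hypothetical, and a real router (such as the .NET elastic database tools mentioned above) would also handle caching and shard splits.

```python
# Sketch of the "command and control" / shard-map pattern: tenant-to-shard
# metadata plus data-dependent routing. All names are illustrative.
class ShardMapManager:
    def __init__(self):
        self._map: dict[str, str] = {}   # tenant_id -> shard connection string
        self._shards: set[str] = set()

    def add_shard(self, conn: str) -> None:
        """Register a shard so tenants can be assigned to it."""
        self._shards.add(conn)

    def assign(self, tenant_id: str, conn: str) -> None:
        if conn not in self._shards:
            raise ValueError(f"unknown shard: {conn}")
        self._map[tenant_id] = conn

    def route(self, tenant_id: str) -> str:
        """Data-dependent routing: resolve a tenant to its shard."""
        return self._map[tenant_id]

    def all_shards(self) -> set[str]:
        """Fan-out target list for multi-shard queries."""
        return set(self._shards)

mgr = ShardMapManager()
mgr.add_shard("db-shard-01")
mgr.add_shard("db-shard-02")
mgr.assign("tenant-acme", "db-shard-01")
mgr.assign("tenant-globex", "db-shard-02")
print(mgr.route("tenant-acme"))  # -> db-shard-01
```

Keeping this metadata in its own database, rather than hardcoding it in the application, is what makes it possible to add and remove shards, or rebalance tenants, without redeploying every client.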
