Daily Tech Digest - September 21, 2025


Quote for the day:

"The world's most deadly disease is hardening of the attitudes." -- Zig Ziglar



AI sharpens threat detection — but could it dull human analyst skills?

While AI offers clear advantages, there are real risks when used without caution. Blind trust in AI-generated recommendations can lead to missed threats or incorrect actions, especially when professionals rely too heavily on prebuilt threat scores or automated responses. A lack of curiosity to validate findings weakens analysis and limits learning opportunities from edge cases or anomalies. This mirrors patterns seen in internet search behavior, where users often skim for quick answers rather than dig deeper. It bypasses critical thinking that strengthens neural connections and sparks new ideas. In cybersecurity — where stakes are high and threats evolve fast — human validation and healthy skepticism remain essential. ... AI literacy is becoming a must-have skill for cybersecurity teams, especially as more organizations adopt automation to handle growing threat volumes. Incorporating AI education into security training and tabletop exercises helps professionals stay sharp and confident when working alongside intelligent tools. When teams can spot AI bias or recognize hallucinated outputs, they’re less likely to take automated insights at face value. This kind of awareness supports better judgment and more effective responses. It also pays off, as organizations that use security AI and automation extensively save an average of $2.22 million in prevention costs. 


Repatriation games: the mid-market reevaluates its public cloud consumption

Many IT decision-makers were quick to blame public cloud service providers. But it’s more likely that the applications and workloads were never intended for public cloud environments. Or that cloud-enabled applications and workloads were incorrectly configured. Either way, poor application and workload performance meant that the expected efficiency gains and cost savings from public cloud adoption did not materialize. This led to budgeting and resourcing problems, as well as friction between IT management, senior leadership teams, and other stakeholders. ... Concerns over data sovereignty and compliance have also influenced decisions to repatriate public cloud workloads and adopt a hybrid cloud model, particularly due to worries about DORA, GDPR and the US Cloud Act compliance. DORA and GDPR both place greater emphasis on data sovereignty, so organizations need to have greater control over where their data resides. This makes a strong case for repatriation of specific workloads to maintain compliance with both sets of regulations – especially within highly regulated industries or for sensitive information such as HR or financial data. ... Nearly a third of respondents say cybersecurity specialists are the most difficult roles to hire or retain. Some mid-market organizations may lack the in-house skills to configure and manage cybersecurity in public cloud environments or even understand their default settings. 


A guide to de-risking enterprise-wide financial transformation

Distilling the lessons from these large-scale initiatives, a clear blueprint emerges for leaders embarking on their own transformation journeys:Define a data-driven vision: A successful transformation begins with a clear vision for how data will function as a strategic asset. The goal should be to create a single source of truth that is granular, accessible and enables a shift from reactive reporting to proactive analysis. Lead with process, not technology: Technology is an enabler, not the solution itself. Invest heavily in understanding and harmonizing end-to-end business processes before a single line of code is written. This effort is the foundation for a sustainable, low-customization system. De-risk with a phased, modular approach: Avoid the “big bang.” Break the program into logical phases, delivering tangible business value at each step. This builds momentum, facilitates organizational learning and significantly reduces the risk of catastrophic failure. Prioritize the user experience: Even the most powerful system will fail if it is not adopted. Engage end users throughout the design and implementation process. Build intuitive tools, like the FIRST microsite, and invest in robust training and change management to drive adoption and proficiency. ... Such forums are critical for breaking down silos and ensuring the end-to-end process is optimized.  ... Transforming the financial core of a global technology leader is not merely a technical undertaking, it is a strategic imperative for enabling scale, agility and insight.


5 things IT managers get wrong about upskilling tech teams

One of the most pervasive issues in IT upskilling is what Patrice Williams-Lindo, CEO at career coaching service Career Nomad, called the “training-and-forgetting” approach. “Many managers send teams to training without any plan for application,” she said. “Employees return to overloaded sprints” with no guidance on how to incorporate what they’ve learned. Without application in their work, “new skills atrophy fast.” This problem is rooted in basic learning science.  ... Another major pitfall is the overemphasis on certifications as proof of capability. Managers often assume that a certification is going to solve a problem without considering whether it fits the day-to-day job, said Tim Beerman, CTO at managed service provider Ensono. What’s more, certification alone doesn’t equal real-world capability and doesn’t necessarily indicate that a person is competent, according to CGS’ Stephen. While a certification shows that someone has the capability to obtain learned knowledge, he said, it doesn’t guarantee practical application skills. ... Many IT managers fall into the trap of pursuing trendy technologies without connecting them to actual business needs. Williams-Lindo warned that focusing on hype skills without business alignment backfires. While AI, cloud, and blockchain sound strategic, she said, if they aren’t tied to current or near-future business objectives, teams will spend time learning irrelevant tools while core needs are ignored.


Gen AI risks are getting clearer. How much would you pay for digital trust?

“As AI becomes more pervasive and kind of invades various dimensions of our lives and our work, how we interact with it and how safe and trustworthy it is, has become paramount,” said Dan Hays ... What do trust and safety issues look like, when it comes to AI agents in customer interactions? Hays gave several examples: Should AI agents remember everything that a particular customer says to them, or should it “forget” interactions, particularly as years or decades pass? The memory capabilities of bots also relate to the question of, what parameters should be placed on how AI agents are allowed to interact with customers? ... “As organizations across nearly all industries dive head-first into AI and digital transformations, they’re running into new risks that could undermine the trust they’ve built with consumers. Right now, many don’t have the guardrails or experience to handle these evolving threats — and the ripple effects are being felt across entire companies and industries,” the PwC report said. However, it seems that people who can, are willing to pay for digital environments and services that they can trust — much like subscribers to paywalled content sites can generally trust what they are getting, while those looking for free news might end up reading information that is garbled or deliberately twisted with the help of AI.


Object Storage: The Last Line of Defense Against Ransomware

Object storage provides intrinsic advantages in immutability, as it does not provide “edit in place” functionality as with file systems which are designed to allow direct file modifications. Unlike traditional file or block storage, object storage interacts through “get and put” access and write APIs, which means malware and ransomware actors have to attempt to write (or overwrite modified objects) via the API to the object store. ... As ransomware continues to evolve, organizations must design storage strategies that protect at every level. Cyber resilience in the storage layer involves a layered defense that spans architecture, APIs, and operational practices. ... A successful data center attack not only disrupts service but also undermines the partner’s reputation for reliability. Technology partners must demonstrate their infrastructure can isolate tenants, withstand attacks, and deliver continuous availability even in adverse conditions. In both cases, cyber-resilient storage is no longer optional. ... Business continuity leaders should prioritize S3-compatible object storage with ransomware-proof capabilities such as object locking, versioning, and multi-layered access controls. Just as importantly, they should evaluate whether their current storage platforms deliver end-to-end cyber resilience that spans both technology and process.


Time to Embrace Offensive Security for True Resilience

Offensive engagements utilize an attacker mindset to focus on truly exploitable weaknesses, weeding out the noise of unprioritized lists of vulnerabilities. Through remediation of high-impact findings, organizations prevent spreading resources over low-impact issues. Additionally, offloading sophisticated simulations to specialized teams or utilizing automated penetration testing speeds testing cycles and maximizes security investments. Essentially, each dollar invested in offensive testing can pre-empt multiples of breach response, legal penalties, lost productivity, and reputational loss. Successful security testing takes more than shallow scans; it needs fully immersed, real-world simulations that mimic the methods employed by actual threat actors to test your systems. Below is an overview of the most effective methods: ... Red teaming exercises goes beyond standard testing by simulating skilled threat actors with secretive, multi-step attack scenarios. These exercises check not just technical weaknesses but also the organization’s ability to notice, respond to, and recover from real security breaches. Red teams often use methods like social engineering, lateral movement, and privilege escalation to test incident response teams. This uncovers flaws in technology and human procedures during realistic attack simulations.


7 Enterprise Architecture Best Practices for 2025

The foundational principle of effective enterprise architecture is its direct and unbreakable link to business strategy. This alignment ensures that every technological decision, architectural blueprint, and IT investment serves a clear business purpose. It transforms the EA function from a cost center focused on technical standards into a strategic partner that drives business value, innovation, and competitive advantage. ... Adopting a framework establishes a shared understanding among stakeholders, from IT teams to business leaders. It provides a standardized set of tools, templates, and terminologies, which reduces ambiguity and improves communication. This structured approach is fundamental to creating a holistic and integrated view of the enterprise, allowing architects to manage complexity, mitigate risks, and align technology initiatives with strategic goals in a systematic way. ... While a strong strategy provides the direction for enterprise architecture, robust governance provides the necessary guardrails and decision-making framework to keep it on track. EA governance establishes the processes, standards, and controls that ensure architectural decisions align with business objectives and are implemented consistently across the organization. It transforms architecture from a set of recommendations into an enforceable, value-driven discipline. 


Why Cloud Repatriation is Critical Post-VMware Exit

What began as a tactical necessity evolved into an expensive operational habit, with monthly bills that continue climbing without corresponding business value. The rush to cloud often bypassed careful workload assessment, resulting in applications running in expensive public cloud environments that would be more cost-effective on-premises. ... Equally important, the technology landscape has evolved since the initial cloud migration wave. We now have universal infrastructure-wide operating platforms that deliver cloud-like experiences on-premises, eliminating the operational gaps that initially drove workloads to public cloud. Combined with universal migration capabilities that can move workloads seamlessly from any source—whether VMware, other hypervisors, or major cloud providers—organizations finally have the tools needed to make cloud repatriation both technically feasible and economically compelling. ... The forced VMware migration creates the perfect opportunity to reassess the entire infrastructure portfolio holistically rather than making isolated platform decisions. ... This infrastructure reset enables IT teams to ask fundamental questions that operational inertia prevents: Which workloads benefit from cloud deployment? What applications could run more affordably on modern on-premises infrastructure? How can we optimize our total infrastructure spend across both on-premises and cloud environments?


4 Ways AI Revolutionizes Modern Cybersecurity Strategy

AI's true value doesn't lie in marketing promises, but in concrete results(link is external), such as reducing false positives, cutting detection time, and reducing operational costs. These are documented results from organizations that have implemented AI-human collaboration models balancing automation with expert judgment. This capability significantly exceeds the efficiency of human security teams, fundamentally transforming threat detection and response. Imagine a zero-day exploit detected and contained within minutes, not days, drastically reducing the window of vulnerability. ... Accelerating the transformation of legacy code represents one of the most impactful ways organizations are using AI to mitigate vulnerabilities. Legacy code accounts for a staggering 70% of identified vulnerabilities(link is external), but manually overhauling monolithic code bases is rarely feasible. Security teams know these vulnerabilities exist, but often lack the resources to address them. ... Manual SBOM creation cannot scale, not even for a 10-person startup. DevSecOps teams already stretched thin can't reasonably be expected to monitor the thousands of components in modern software stacks. Any sustainable approach to SBOM management for software-producing organizations must necessarily include automation. ... Compliance remains one of security's greatest frictions. 

Daily Tech Digest - September 20, 2025


Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer


Five forces shaping the next wave of quantum innovation

Quantum computers are expected to solve problems currently intractable for even the world’s fastest supercomputers. Their core strengths — efficiently finding hidden patterns in complex datasets and navigating vast optimization challenges — will enable the design of novel drugs and materials, the creation of superior financial algorithms and open new frontiers in cryptography and cybersecurity. ... The quantum ecosystem now largely agrees that simply scaling up today’s computers, which suffer from significant noise and errors that prevent fault-tolerant operation, won’t unlock the most valuable commercial applications. The industry’s focus has shifted to quantum error correction as the key to building robust and scalable fault-tolerant machines. ... Most early quantum computing companies tried a full-stack approach. Now that the industry is maturing, a rich ecosystem of middle-of-the-stack players has emerged. This evolution allows companies to focus on what they do best and buy components and capabilities as needed, such as control systems from Quantum Machines and quantum software development from firms ... recent innovations in quantum networking technology have made a scale-out approach a serious contender. 


Post-Modern Ransomware: When Exfiltration Replaces Encryption

Exfiltration-first attacks have re-written the rules, with stolen data providing criminals with a faster, more reliable payday than the complex mechanics of encryption ever could. The threat of leaking data like financial records, intellectual property, and customer and employee details delivers instant leverage. Unlike encryption, if the victim stands firm and refuses to pay up, criminal groups can always sell their digital loot on the dark web or use it to fuel more targeted attacks. ... Phishing emails, once known for being riddled with tell-tale grammar and spelling mistakes, are now polished, personalized and delivered in perfect English. AI-powered deepfake voices and videos are providing convincing impersonations of executives or trusted colleagues that have defrauded companies for millions. At the same time, attackers are deploying custom chatbots to manage ransom negotiations across multiple victims simultaneously, applying pressure with the relentless efficiency of machines. ... Yet resilience is not simply a matter of dashboards and detection thresholds – it is equally about supporting those on the frontlines. Security leaders already working punishing hours under relentless scrutiny cannot be expected to withstand endless fatigue and a culture of blame without consequence. Organizations must also embed support for their teams into their response frameworks, from clear lines of communication and decompression time to wellbeing checks. 


The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time

The uncertainty is driving concern. “There's been a lot more talk around, ‘Should we be managing sovereign cloud, should we be using on-premises more, should we be relying on our non-North American public contractors?” said Tracy Woo, a principal analyst with researcher and advisory firm Forrester. Ditching a major public cloud provider over sovereignty concerns, however, is not a practical option. These providers often underpin expansive global workloads, so migrating to a new architecture would be time-consuming, costly, and complex. There also isn’t a simple direct switch that companies can make if they’re looking to avoid public cloud; sourcing alternatives must be done thoughtfully, not just in reaction to one challenge. ... “There's a nervousness around deployment of AI, and I think that nervousness comes from -- definitely in conversations with other CIOs -- not knowing the data,” said Bell. Although decoupling from the major cloud providers is impractical on many fronts, issues of sovereignty as well as cost could still push CIOs to embrace a more localized approach, Woo said. “People are realizing that we don't necessarily need all the bells and whistles of the public cloud providers, whether that's for latency or performance reasons, or whether it's for cost or whether that's for sovereignty reasons,” explained Woo. 


Enterprise AI enters the age of agency, but autonomy must be governed

Agentic AI systems don’t just predict or recommend, they act. These intelligent software agents operate with autonomy toward defined business goals, planning, learning, and executing across enterprise workflows. This is not the next version of traditional automation or static bots. It’s a fundamentally different operating paradigm, one that will shape the future of digital enterprises. ... For many enterprises, the last decade of AI investment has focused on surfacing insights: detecting fraud, forecasting demand, and predicting churn. These are valuable outcomes, but they still require humans or rigid automation to respond. Agentic AI closes that gap. These agents combine machine learning, contextual awareness, planning, and decision logic to take goal-directed action. They can process ambiguity, work across systems, resolve exceptions, and adapt over time. ... Agentic AI will not simply automate tasks. It will reshape how work is designed, measured, and managed. As autonomous agents take on operational responsibility, human teams will move toward supervision, exception resolution, and strategic oversight. New KPIs will emerge, not just around cost or cycle time, but around agent quality, business impact, and compliance resilience. This shift will also demand new talent models. Enterprises must upskill teams to manage AI systems, not just processes. 


Cybersecurity in smart cities under scrutiny

The digital transformation of public services involves “an accelerated convergence between IT and OT systems, as well as the massive incorporation of connected IoT devices,” she explains, which gives rise to challenges such as an expanding attack surface or the coexistence of obsolete infrastructure with modern ones, in addition to a lack of visibility and control over devices deployed by multiple providers. ... “According to the European Cyber ​​Security Organisation, 86% of European local governments with IoT deployments have suffered some security breach related to these devices,” she says. Accenture’s Domínguez adds that the challenge is to consider “the fragmentation of responsibilities between administrations, concessionaires, and third parties, which complicates cybersecurity governance and requires advanced coordination models.” De la Cuesta also emphasizes the siloed nature of project development, which significantly hinders the development of an active cybersecurity strategy. ... In the integration of new tools, despite Spain holding a leading position in areas such as 5G, “technology moves much faster than the government’s ability to react,” he says. “It’s not like a private company, which has a certain agility to make investments,” he explains. “Public administration is much slower. Budgets are different. Administrative procedures are extremely long. From the moment a project is first discussed until it is actually executed, many years pass.”


Your SDLC Has an Evil Twin — and AI Built It

Welcome to the shadow SDLC — the one your team built with AI when you weren't looking: It generates code, dependencies, configs, and even tests at machine speed, but without any of your governance, review processes, or security guardrails. ... It’s not just about insecure code sneaking into production, but rather about losing ownership of the very processes you’ve worked to streamline. Your “evil twin” SDLC comes with: Unknown provenance → You can’t always trace where AI-generated code or dependencies came from. Inconsistent reliability → AI may generate tests or configs that look fine but fail in production. Invisible vulnerabilities → Flaws that never hit a backlog because they bypass reviews entirely. ... AI assistants are now pulling in OSS dependencies you didn’t choose — sometimes outdated, sometimes insecure, sometimes flat-out malicious. While your team already uses hygiene tools like Dependabot or Renovate, they’re only table stakes that don’t provide governance. ... The “evil twin” of your SDLC isn’t going away. It’s already here, writing code, pulling dependencies, and shaping workflows. The question is whether you’ll treat it as an uncontrolled shadow pipeline — or bring it under the same governance and accountability as your human-led one. Because in today’s environment, you don’t just own the SDLC you designed. You also own the one AI is building — whether you control it or not.


'ShadowLeak' ChatGPT Attack Allows Hackers to Invisibly Steal Emails

Researchers at Radware realized the issue earlier this spring, when they figured out a way of stealing anything they wanted from Gmail users who integrate ChatGPT. Not only was their trick devilishly simple, but it left no trace on an end user's network — not even an iota of the suspicious Web traffic typical of data exfiltration attacks. As such, the user had no way of detecting the attack, let alone stopping it. ... To perform a ShadowLeak attack, attackers send an outwardly normal-looking email to their target. They surreptitiously embed code in the body of the message, in a format that the recipient will not notice — for example, in extremely tiny text, or white text on a white background. The code should be written in HTML, being standard for email and therefore less suspicious than other, more powerful languages would be. ... The malicious code can instruct the AI to communicate the contents of the victim's emails, or anything else the target has granted ChatGPT access to, to an attacker-controlled server. ... Organizations can try to compensate with their own security controls — for example, by vetting incoming emails with their own tools. However, Geenens points out, "You need something that is smarter than just the regular-expression engines and the state machines that we've built. Those will not work anymore, because there are an infinite number of permutations with which you can write an attack in natural language." 


UK: World’s first quantum computer built using standard silicon chips launched

This is reportedly the first quantum computer to be built using the standard complementary metal-oxide-semiconductor (CMOS) chip fabrication process which is the same transistor technology used in conventional computers. A key part of this approach is building cryoelectronics that connect qubits with control circuits that work at very low temperatures, making it possible to scale up quantum processors greatly. “This is quantum computing’s silicon moment,” James Palles‑Dimmock, Quantum Motion’s CEO, stated. ... In contrast to other quantum computing approaches, the startup used high-volume industrial 300 millimeter chipmaking processes from commercial foundries to produce qubits. The architecture, control stack, and manufacturing approach are all built to scale to host millions of qubits and pave the way for fault-tolerant, utility-scale, and commercially viable quantum computing. “With the delivery of this system, Quantum Motion is on track to bring commercially useful quantum computers to market this decade,” Hugo Saleh, Quantum Motion’s CEO and president, revealed. ... The system’s underlying QPU is built on a tile-based architecture, integrating all compute, readout, and control components into a dense, repeatable array. This design enables future expansion to millions of qubits per chip, with no changes to the system’s physical footprint.


Key strategies to reduce IT complexity

The cloud has multiplied the fragmentation of solutions within companies, expanding the number of environments, vendors, APIs, and integration approaches, which has raised the skill set, necessitated more complex governance, and prompted the emergence of cross-functional roles between IT and business. Cybersecurity also introduces further levels of complexity, introducing new platforms, monitoring tools, regulatory requirements, and risk management approaches that must be overseen by expert personnel. And then there’s shadow IT. With the ease of access to cloud technologies, it’s not uncommon for business units to independently activate services without involving IT, generating further risks. ... “Structured upskilling and reskilling programs are needed to prepare people to manage new technologies,” says Massara. “So is an organizational model capable of managing a growing number of projects, which can no longer be handled in a one-off manner. The approach to project management is changing because the project portfolio has expanded significantly, and a structured PMO is required, with project managers who often no longer reside solely in IT, but directly within the business.” ... While it’s true that an IT system with disparate systems leads to greater complexity, companies are still very cost-conscious and wary about heavily investing in unification right away. But as systems become obsolete, they become more harmonized.


Unshackling IT: Why Third-Party Support Is a Strategic Imperative, Especially for AI

One of the most compelling arguments for independent third-party support is its inherent vendor neutrality. When a company relies solely on a software vendor for support, that vendor naturally has a vested interest in promoting its latest upgrades, cloud migrations, and proprietary solutions. This can create a conflict of interest, potentially pushing customers towards expensive, unnecessary upgrades or discouraging them from exploring alternatives that might be a better fit for their unique needs. ... The recent acquisition of VMware by Broadcom provides a compelling and timely illustration of why third-party support is becoming increasingly critical. Following the merger, many VMware customers have expressed significant dissatisfaction with changes to licensing models, product roadmaps, and, crucially, support. Broadcom has been criticized for restructuring VMware’s offerings and reportedly reducing support for smaller customers, pushing them towards bundled, more expensive solutions. ... The shift towards third-party support isn’t just about cost savings; it’s about regaining control, accessing unbiased expertise, and ensuring business continuity in a rapidly changing technological landscape. For companies making critical decisions about AI integration and managing complex enterprise systems, providers like Spinnaker Support offer a strategic advantage.

Daily Tech Digest - September 19, 2025


Quote for the day:

"The whole secret of a successful life is to find out what is one's destiny to do, and then do it." -- Henry Ford


How CISOs Can Drive Effective AI Governance

For CISOs, finding that balance between security and speed is critical in the age of AI. This technology simultaneously represents the greatest opportunity and greatest risk enterprises have faced since the dawn of the internet. Move too fast without guardrails, and sensitive data leaks into prompts, shadow AI proliferates, or regulatory gaps become liabilities. Move too slow, and competitors pull ahead with transformative efficiencies that are too powerful to compete with. Either path comes with ramifications that can cost CISOs their job. In turn, they cannot lead a "department of no" where AI adoption initiatives are stymied by the organization's security function. It is crucial to instead find a path to yes, mapping governance to organizational risk tolerance and business priorities so that the security function serves as a true revenue enabler. ... Even with strong policies and roadmaps in place, employees will continue to use AI in ways that aren't formally approved. The goal for security leaders shouldn't be to ban AI, but to make responsible use the easiest and most attractive option. That means equipping employees with enterprise-grade AI tools, whether purchased or homegrown, so they do not need to reach for insecure alternatives. In addition, it means highlighting and reinforcing positive behaviors so that employees see value in following the guardrails rather than bypassing them.


AI developer certifications tech companies want

Certifications help ensure developers understand AI governance, security, and responsible use, Hinchcliffe says. Certifications from vendors such as Microsoft and Google, along with OpenAI partner programs, are driving uptake, he says. “Strategic CIOs see certifications less as long-term guarantees of expertise and more as a short-term control and competency mechanism during rapid change,” he says. ... While certifications aren’t the sole deciding factor in landing a job, they often help candidates stand out in competitive roles where AI literacy is becoming a crucial factor, Taplin says. “This is especially true for new software engineers, who can gain a leg up by focusing on certifications early to enhance their career prospects,” he says. ... “The real demand is for AI skills, and certifications are simply one way to build those skills in a structured manner,” says Kyle Elliott, technology career coach and hiring expert. “Hiring managers are not necessarily looking for candidates with AI certifications,” Elliott says. “However, an AI certification, especially if completed in the last year or currently in progress, can signal to a hiring manager that you are well-versed in the latest AI trends. In other words, it’s a quick way to show that you speak the language of AI.” Software developers should not expect AI certifications to be a “silver bullet for landing a job or earning a promotion,” Elliott says. 


How important is data analytics in cycling?

Beyond recovery and nutrition, data analytics plays a pivotal role in shaping race-day decisions. The team combines structured data like power outputs, route elevation, and weather forecasts with unstructured data gathered from online posts by cycling enthusiasts. These data streams are fed into predictive models that anticipate race dynamics and help fine-tune equipment selection, down to tire pressure and aerodynamic adjustments. Metrics like Training Stress Score (TSS) and Heart Rate Variability (HRV) help monitor each rider’s fatigue and readiness, ensuring that training plans are both challenging and sustainable. “We analyze how environmental conditions affect each rider’s output and recovery,” Ryder says. ... The team’s data-driven strategy even extends to post-race analysis. At their hub, they evaluate power output, rider positioning, and performance variances. ... Looking ahead, Ryder sees artificial intelligence playing a greater role. The team is exploring machine learning models that predict tactical behavior from opponents and identify when riders are close to burnout. Through conversational analytics in Qlik, they envision proactive alerts such as, “This rider may not be fit to race tomorrow,” based on cumulative stress and recovery data. The team’s ethos is clear. Success doesn’t only come from racing harder. It comes from racing smarter. 


Balancing Growth and Sustainability: How Data Centers Can Navigate the Energy Demands of the AI Era

Given the systemic limitations on reliable power sources, practical solutions are needed. We must address power sustainability, upstream power infrastructure, new data center equipment and trained labor to deliver it all. By being proactive, we can “bend” the energy growth curve by decoupling data center growth from AI computing’s energy consumption. ... Before the AI boom, large data centers could grin and bear longer lead times for utilities; however, the immediate and skyrocketing demand for data centers to power AI applications calls for creative solutions. Data center developers and designers planning to build in energy-constrained regions need to consider deploying alternative prime power sources and/or energy storage systems to launch new data centers. This includes natural gas turbines, HVO-fueled generators, wind, solar, fuel cells, battery energy storage systems (BESS), and to a limited degree, small modular reactors. ... The utility company and grid operator’s intimate knowledge of the grid and local regulatory, governmental and political landscape makes them critical partners in the site selection, design, permitting, and construction of new data centers. Utilities provide critical insights on power capacity, costs, carbon intensity, power quality, grid stability and load management to ensure sustainable and reliable operations. 


LLMs can boost cybersecurity decisions, but not for everyone

Resilience played a major role in the results. High-resilience individuals performed well with or without LLM support, and they were better at using AI guidance without becoming over-reliant on it. Low-resilience participants did not gain as much from LLMs. In some cases, their performance did not improve or even declined. This creates a risk of uneven outcomes. Teams could see gaps widen between those who can critically evaluate AI suggestions and those who cannot. Over time, this may lead to over-reliance on models, reduced independent thinking, and a loss of diversity in how problems are approached. According to Lanyado, security leaders need to plan for these differences when building teams and training programs. “Not every organization and/or employee interacts with automation in the same way, and differences in team readiness can widen security risks,” he said. ... The findings suggest that organizations cannot assume adding an LLM will raise everyone’s performance equally. Without design, these tools could make some team members more effective while leaving others behind. The researchers recommend designing AI systems that adapt to the user. High-resilience individuals may benefit from open-ended suggestions. Lower-resilience users might need guidance, confidence indicators, or prompts that encourage them to consider alternative viewpoints.


Augment or Automate? Two Competing Visions for AI’s Economic Future

Looked at more critically, ChatGPT has become a supercharged Google search that leaps from finding information to synthesizing and judging it, a clear homogenization of human capacity that might lead to a world of grey-zone AI slop. ... While ChatGPT follows the people, Claude is following the money, hoping to capitalize on business needs to improve efficiency and productivity. By focusing on complex, high-value work, the company is signaling it believes the future of AI lies not in making everyone more productive, but in automating knowledge work that once required specialized human expertise. ... These divergent strategies result in different financial trajectories. OpenAI enjoys massive scale, with hundreds of millions of users providing a broad funnel for subscriptions. It generates an overwhelming amount of traffic that is of relatively lower value. OpenAI is betting the real money will flow through licensing its tools to Microsoft, where it can be embedded in Copilot and Office products to generate recurring revenue streams to offset its infrastructure and operating costs. Anthropic has fewer users but stronger unit economics. Its focus on enterprise use means customers are better positioned to purchase more expensive premium services that can demonstrate strong return-on-investment.


4 four ways to overcome the skills crisis and prepare your workforce for the age of AI

Orla Daly, CIO at Skillsoft, told ZDNET that the research shows business leaders must keep pace with the changing requirements for capabilities in different operational areas. "Significant percentages of skills are no longer relevant. The skills that we'll need in 2030 are only just evolving now," she said. "If you're not making upskilling and learning part of your core business strategy, then you're going to ultimately become uncompetitive in terms of retaining talent and delivering on your organizational outcomes." ... Daly said companies must pay more attention to the skills of their employees, including measuring and testing those proficiencies. "That's about using a combination of benchmarks, which we use at Skillsoft, that allow you, through testing, to understand the skills that you have," she said. "It's also about how you understand that capability in terms of real-world applications and measuring those skills in the context of the jobs that are being done." ... "You need to make measurement central to the business strategy, and have a program around learning, so it's part of the everyday culture of the business," she said. "From the executive level down, you need to say learning is a core part of the organization. Learning then turns up in all of your business operating frameworks in terms of how you track and measure the outcomes of programs, similar to other investments that you would make."


Sovereign AI meets Stockholm’s data center future

Sovereign AI refers to the ability of a nation to develop and operate AI platforms within its own borders, under its own laws and energy systems. ... By ensuring that sensitive data and critical compute resources remain local, sovereign AI reduces exposure to geopolitical risk, supports regulatory compliance and builds trust among both public and private stakeholders. Recent initiatives in Stockholm highlight how sovereign AI can be embedded into existing data center ecosystems. Purpose-built AI compute clusters, equipped with the latest GPU architectures, are being deployed on renewable power and integrated into local district heating networks, where excess server heat is recycled back into the city grid. These facilities are designed not only for high-performance workloads but also for long-term sustainability, aligning with Sweden’s climate and digital sovereignty goals. The strategy is clear: pair advanced AI infrastructure with domestic control and clean energy. By doing so, Stockholm can position itself as a European leader in sovereign AI, where innovation, security and sustainability converge in a way that few other markets can match. ... Stockholm’s ecosystem radiates gravitational pull. With more green, efficient and sovereign-capable data centers emerging, they attract additional clients and investments and reinforce the region’s dominance.


Agentic AI poised to pioneer the future of cybersecurity in the BFSI sector

Enter agentic AI systems that represent a network of intelligent agents having the capability for independent decision-making and adaptive learning. This extends the capabilities of traditional AI systems by incorporating autonomous decision-making and execution, while adopting proactive security measures. It is poised to revolutionise cybersecurity in the banking and financial services sector while bridging the gap between the speed of cyber-attacks and the slow, human-driven incident response. ... Agentic AI will proactively and autonomously hunt for threats across the IT systems within the financial institution by actively looking for vulnerabilities and possible threat vectors before they are exploited by threat actors. Agentic AI systems leverage their capabilities in simulation, where potential attack scenarios are modeled to identify vulnerabilities in the security posture. Data from logs, network traffic, and activities from endpoints are correlated to spot attack vectors as a part of the threat hunting process. ... AI agents have to be deployed into both customer-facing for better customer experience as well as internal systems. By establishing an agentic AI ecosystem, agents can collaborate across functions. Risk management, compliance monitoring, operational efficiency, and fraud detection functions can be streamlined, too. 


Shai-Hulud Attacks Shake Software Supply Chain Security Confidence

This isn’t the first time NPM’s reputation has been put to the test. The JavaScript community has seen a trio of supply chain attacks in rapid succession. Just recently, we saw the “manifest confusion” exploit, which tricked dependency trackers, and prior to that, a series of typosquatting and account-takeover incidents—remember the infamous “coa” and “rc” package hijacks? Now comes the latest beast from the sand: the Shai-Hulud supply chain attack. This is, depending on how you count, the third major NPM incident in recent memory—and arguably the most insidious. ... According to the detailed analysis by JFrog, attackers compromised multiple popular packages, including several that mimicked or targeted legitimate CrowdStrike modules. Before you panic: this wasn’t a direct attack on CrowdStrike itself, but the attackers were clever—by using names like “crowdstrike” and latching onto a trusted security vendor’s brand, they hoped to worm their payloads into unsuspecting production environments. ... What makes these attacks so damaging is less about the technical sophistication (though, don’t get me wrong, this one is clever) and more about how they shake our trust in the very idea of open collaboration. Every dev who’s ever typed `npm install` had to trust not just the original author, but every maintainer, every transitive dependency, and the opaque process of package publishing itself.

Daily Tech Digest - September 18, 2025


Quote for the day:

"When your life flashes before your eyes, make sure you’ve got plenty to watch.” -- Anonymous


The new IT operating model: cloud-managed networking as a strategic lever

Enterprises are navigating an environment where the complexity of IT is increasing exponentially. Hybrid work requires consistent connectivity across homes, offices, and campuses. Edge computing and IoT generate massive volumes of data at distributed sites. Security risks escalate as the attack surface grows. Traditional, hardware-centric approaches leave IT teams struggling to keep up. Managing dozens or hundreds of controllers, patching firmware manually, and troubleshooting issues site by site is not sustainable. Cloud-managed networking changes that equation. By centralizing management, applying AI-driven intelligence, and extending visibility across distributed environments, it enables IT to shift from reactive firefighting to proactive strategy. ... Enterprises adopting cloud-managed networking are making a decisive shift from complexity to clarity. Success requires more than technology alone. It demands a partner that understands how to translate advanced capabilities into measurable business outcomes. ... Cloud-managed networking is not just another IT trend. It is the operating model that will define enterprise technology for the next decade. By elevating the network from infrastructure to strategy, it enables organizations to move faster, stay secure, and innovate with confidence.


Why Shadow AI Is the Next Big Governance Challenge for CISOs

In many respects, shadow AI is a subset of a broader shadow IT problem. Shadow IT is an issue that emerged more than a decade ago, largely emanating from employee use of unauthorized cloud apps, including SaaS. Lohrmann noted that cloud access security broker (CASB) solutions were developed to deal with the shadow IT issue. These tools are designed to provide organizations with full visibility of what employees are doing on the network and on protected devices, while only allowing access to authorized instances. However, shadow AI presents distinct challenges that CASB tools are unable to adequately address. “Organizations still need to address other questions related to licensing, application sprawl, security and privacy policies, procedures and more ..,” Lohrmann noted. A key difference between IT and AI is the nature of data, the speed of adoption and the complexity of the underlying technology. In addition, AI is often integrated into existing IT systems, including cloud applications, making these tools more difficult to identify. Chuvakin added, “With shadow IT, unauthorized tools often leave recognizable traces – unapproved applications on devices, unusual network traffic or access attempts to restricted services. Shadow AI interactions, however, often occur entirely within a web browser or personal device, blending seamlessly with regular online activity or not leaving any trace on any corporate system at all.”


Cisco strengthens integrated IT/OT network and security controls

Melding IT and OT networking and security is not a new idea, but it’s one that has seen growing attention from Cisco. ... Cisco also added a new technology called AI-powered asset clustering to its Cyber Vision OT management suite. Cyber Vison keeps track of devices connected to an industrial network, builds a real-time map of how these devices talk to each other and to IT systems, and can detect abnormal behavior, vulnerabilities, or policy violations that could signal malware, misconfigurations, or insider threats, Cisco says. ... Another significant move that will help IT/OT integration is the planned integration of the management console for Cisco’s Catalyst and Meraki networks. That combination will allow IT and OT teams to see the same dashboard for industrial OT and IT enterprise/campus networks. Cyber Vision will feeds into the dashboard along with other Cisco management offerings such as ThousandEyes, which gives customers a shared inventory of assets, traffic flows and security. “What we are focusing on is helping our customers have the secure networking foundation and architecture that lets IT teams and operational teams kind of have one fabric, one architecture, that goes from the carpeted spaces all the way to the far reaches of their OT network,” Butaney said.


Global hiring risks: What you need to know about identity fraud and screening trends

Most organizations globally include criminal record checks in their pre-employment screening. Employment and education verifications are also common, especially in EMEA and APAC. ... “Employers that fail to strengthen their identity verification processes or overlook recurring discrepancy patterns could face costly consequences, from compliance failures to reputational harm,” said Euan Menzies, President and CEO of HireRight. ... More than three-quarters of businesses globally found at least one discrepancy in a candidate’s background over the past year. Thirteen percent reported finding one discrepancy for every five candidates screened. Employment verification remains the area where most inconsistencies are discovered, especially in APAC and EMEA. These discrepancies range from minor errors like incorrect dates to more serious issues such as fabricated job histories. ... Companies are increasingly adopting post-hire screening to address risks that emerge after someone is hired. In North America, only 38 percent of companies now say they do no post-hire screening, a sharp drop from 57 percent last year. Common post-hire checks include driver monitoring and periodic rescreening for regulated roles. These efforts help companies catch new issues such as undisclosed criminal activity, changes in legal eligibility to work, or evolving insider threats.


Doomprompting: Endless tinkering with AI outputs can cripple IT results

Some LLMs appear to be designed to encourage long-lasting conversation loops, with answers often spurring another prompt. ... “When an individual engineer is prompting an AI, they get a pretty good response pretty quick,” he says. “It gets in your head, ‘That’s pretty good; surely, I could get to perfect.’ And you get to the point where it’s the classic sunk-cost fallacy, where the engineer is like, ‘I’ve spent all this time prompting, surely I can prompt myself out of this hole.’” The problem often happens when the project lacks definitions of what a good result looks like, he adds. “Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.” ... Govindarajan has seen some IT teams get stuck in “doom loops” as they add more and more instructions to agents to refine the outputs. As organizations deploy multiple agents, constant tinkering with outputs can slow down deployments and burn through staff time, he says. “The whole idea of doomprompting is basically putting that instruction down and hoping that it works as you set more and more instructions, some of them contradicting with each other,” he adds. “It comes at the sacrifice of system intelligence.”


Vanishing Public Record Makes Enterprise Data a Strategic Asset

“We are rapidly running out of public data that is credible and usable. More and more enterprises will start to assign value to their data and go beyond partnerships to monetize it. For example, wind measurements captured by a wind turbine company could be helpful to many businesses that are not competitors,” said Olga Kupriyanova, principal consultant of AI and data engineering at ISG. ... "We’re entering a defining moment in AI where access to reliable, scalable, and ethical data is quickly becoming the central bottleneck, and also the most valuable asset. As legal and regulatory pressure tightens access to public data, due to copyright lawsuits, privacy concerns, or manipulation of open data repositories, enterprises are being forced to rethink where their AI advantage will come from,” said Farshid Sabet, CEO and co-founder at Corvic AI, developer of a GenAI management platform. ... The economic consequences of such data loss are already visible. Analysts estimate that U.S. public data underpinned nearly $750 billion of business activity as recently as 2022, according to the Department of Commerce. The loss of such data blinds companies that build models for everything from supply chain forecasting to investment strategy and predictions.


The Architecture of Responsible AI: Balancing Innovation and Accountability

The field of AI governance suffers from what Mackenzie et al reaffirm as the “principal-agent problem,” where one party (the principal) delegates tasks to another party (the agent). But their interests are not perfectly aligned, leading to potential conflicts and inefficiencies. ... Architects occupy a unique position in this landscape. Unlike regulators who may impose constraints post-design, architects work at the intersection of possibility and constraint. They must balance competing requirements, such as performance and privacy, efficiency and equity, speed and safety, within coherent system designs. Every architectural decision must embed values, priorities, and assumptions about how systems should behave. ... current AI guidance suffers from systematic weaknesses: evidence quality is sacrificed for speed, commercial interests masquerade as objective advice, and some perspectives dominate while broader stakeholder voices remain unheard ... Architects, being well-placed to bridge the gap between strategy and technology, hold a key role in establishing the principles that govern how systems behave, interact, and evolve. In the context of AI, this principle set extends beyond technical design. It encompasses the ethical, social, and legal aspects as well. .


AI will make workers ‘busier in the future’ – so what’s the point exactly?

“I have to admit that I’m afraid to say that we are going to be busier in the future than now,” he told host Liz Claman. “And the reason for that is because a lot of different things that take a long time to do are now faster to do. I’m always waiting for work to get done because I’ve got more ideas.” ... “The more productive we are, the more opportunity we get to pursue new ideas,” Huang continued. Reading between the lines here, it seems the so-called efficiency gains afforded by AI will mean workers have more work dumped in their laps – onto the next task, no rest for the wicked, etc. Huang’s comments run counter to the prevailing sentiment among big tech executives on exactly what AI will deliver for both enterprises and individual workers. ... We’ve all read the marketing copy and heard it regurgitated by tech leaders on podcasts and keynote stages – AI will allow us to focus on the “more rewarding” aspects of our jobs. They’ve never fully explained what this entails, or how it will pan out in the workplace. To be quite honest, I don’t think they know what it means. Marketing probably made it up and they’ve stuck with it. ... Will we be busier spending time on those rewarding aspects of our jobs? I have to say, I’m doubtful. The reality is that workers will be pulled into other tasks and merely end up drowning in the same cumbersome workloads they’ve been dealing with since the pandemic.


Building Safer Digital Experiences Through Robust Testing Practices

Secure software testing forms the bedrock of resilient applications, proactively uncovering flaws before they become critical. Early testing practices can significantly reduce risks, costs, and exposure to threats. According to Global Market Insights, the growing number and size of data breaches have increased the need for security testing services. Organizations that heavily use security AI and automation save an average of USD 1.76 million compared to those that don’t. About 51% plan to increase their security spending. Early integration of techniques like Static Application Security Testing (SAST) can detect vulnerabilities in existing code. It can also help to fix bugs during development. ... Organizations must verify that their systems handle personal data securely and comply with global regulations like GDPR and CCPA. Testing ensures sensitive information is protected from leaks or unauthorized use. Americans are highly concerned about how companies use their private data. ... Stress testing evaluates how applications perform under extreme loads. It helps identify potential failures in scalability, response times, and resource management. Vulnerability assessments concentrate on uncovering security gaps. Verified Market Reports notes that, after recent financial crises, governments are putting stronger emphasis on stress testing.


Prompt Engineering Is Dead – Long Live PromptOps

PromptOps is gaining traction rapidly because it has the potential to address major challenges in the use of LLMs, such as prompt drift and suboptimal output. Yet incorporating PromptOps effectively into an organization is far from simple, requiring a structured and clear process, the right tools, and a mindset that enables collaboration and effective centralization. Digging deeper into what PromptOps is, why it is needed, and how it can be implemented effectively can help companies to find the right approach when incorporating this methodology for improving their LLM applications usage. ... Before PromptOps is implemented, an organization typically has prompts scattered across multiple teams and tools, with no structured management in place. The first stage of implementing PromptOps involves gathering every detail on LLM applications usage within an organization. It is essential to understand precisely which prompts are being used, by which teams, and with which models. The next stage is to build consistency into this practice by incorporating versioning and testing. Adding secure access control at this stage is also important, in order to ensure only those who need it have access to prompts. With these practices in place, organizations will be well-positioned to introduce cross-model design and embed core compliance and security practices into all prompt crafting. 

Daily Tech Digest - September 17, 2025


Quote for the day:

“We are all failures - at least the best of us are.” -- J.M. Barrie


AI Governance Reaches an Inflection Point

AI adoption has made privacy, compliance, and risk management dramatically more complex. Unlike traditional software, AI models are probabilistic, adaptive, and capable of generating outcomes that are harder to predict or explain. As Blake Brannon, OneTrust’s chief innovation officer, summarized: “The speed of AI innovation has exposed a fundamental mismatch. While AI projects move at unprecedented speed, traditional governance processes are operating at yesterday’s pace.” ... These dynamics explain why, several years ago, Dresner Advisory Services shifted its research lens from data governance to data and analytics (D&A) governance. AI adoption makes clear that organizations must treat governance not as a siloed discipline, but as an integrated framework spanning data, analytics, and intelligent systems. D&A governance is broader in scope than traditional data governance. It encompasses policies, standards, decision rights, procedures, and technologies that govern both data and analytic content across the organization. ... The modernization is not just about oversight — it is about rethinking priorities. Survey respondents identify data quality and controlled access as the most critical enablers of AI success. Security, privacy, and the governance of data models follow closely behind. Collectively, these priorities reflect an emerging consensus: The real foundation of successful AI is not model architecture, but disciplined, transparent, and enforceable governance of data and analytics.


Shai-Hulud Supply Chain Attack: Worm Used to Steal Secrets, 180+ NPM Packages Hit

The packages were injected with a post-install script designed to fetch the TruffleHog secret scanning tool to identify and steal secrets, and to harvest environment variables and IMDS-exposed cloud keys. The script also validates the collected credentials and, if GitHub tokens are identified, it uses them to create a public repository and dump the secrets into it. Additionally, it pushes a GitHub Actions workflow that exfiltrates secrets from each repository to a hardcoded webhook, and migrates private repositories to public ones labeled ‘Shai-Hulud Migration’. ... What makes the attack different is malicious code that uses any identified NPM token to enumerate and update the packages that a compromised maintainer controls, to inject them with the malicious post-install script. “This attack is a self-propagating worm. When a compromised package encounters additional NPM tokens in a victim environment, it will automatically publish malicious versions of any packages it can access,” Wiz notes. ... The security firm warns that the self-spreading potential of the malicious code will likely keep the campaign alive for a few more days. To avoid being infected, users should be wary of any packages that have new versions on NPM but not on GitHub, and are advised to pin dependencies to avoid unexpected package updates.


Scattered Spider Tied to Fresh Attacks on Financial Services

The financial services sector appears to remain at high risk of attack by the group. Over the past two months, elements of Scattered Spider registered "a coordinated set of ticket-themed phishing domains and Salesforce credential harvesting pages" designed to target the financial services sector as well as providers of technology services, suggesting a continuing focus on those sectors, ReliaQuest said. Registering lookalike domain names is a repeat tactic used by many attackers, from Chinese nation-state groups to Scattered Spider. Such URLs are designed to trick victims into thinking a link that they visit is legitimate. ... Members of Scattered Spider and ShinyHunters excel at social engineering, including voice phishing, aka vishing. This often involves tricking a help desk into believing the attacker is a legitimate employee, leading to passwords being reset and single sign-on tokens intercepted. In some cases, experts say, the attackers trick a victim into visiting lookalike support panels they've created which are part of a phishing attack. Since the middle of the year, members of Scattered Spider have breached British retailers Marks & Spencer, followed by American retailers such as Adidas and Victoria's Secret. The group has been targeting American insurers such as Aflac and Allianz Life, global airlines including Air France, KLM and Qantas, and technology giants Cisco and Google.


Tech’s Tarnished Image Spurring Rise of Chief Trust Officers

In today’s highly competitive world, organizations need every advantage they can get, which can include trust. “Part of selecting vendors, whether it is an official part of the process or not, is evaluating the trust you have in that vendor,” explained Erich Kron ... “By signifying someone in a high level of leadership as the person responsible and accountable for culminating and maintaining that level of trust, the organization may gain significant competitive advantages through loyalty and through competitive means,” he told TechNewsWorld. “The chief trust officer role is a visible, external and internal sign of an organization’s commitment to trust,” added Jim Alkove. ... “It’s an explicit statement of intent to your employees, to your customers, to your partners, to governments that your company cares so much about trust and that you’ve announced that there’s a leader responsible for it,” Alkove, a former CTrO at Salesforce, told TechNewsWorld. ... Forrester noted that trust has become a revenue problem for B2B software companies, and CTrOs provide a means to resolve issues that could stall deals and impact revenue. “When procurement and third-party risk management teams identified issues with a business partner’s cybersecurity posture, contracts stalled,” the report explained. “These issues reflected on the competence, consistency, and dependability of the potential partner. Chief trust officers and their teams step in to remove those obstacles and move deals along.”


AI ROI Isn't About Cost Savings Anymore

The traditional metrics of ROI, including cost savings, headcount reduction and revenue uplift, are no longer sufficient. Let's start with the obvious challenge: ROI today is often measured vertically, at the use-case or project level, tracking model accuracy or incremental sales. Although necessary, this vertical lens misses the broader picture. What's needed is a horizontal perspective on ROI - metrics that capture how investments in cloud infrastructure, data engineering and cross-silo integration accelerate every subsequent AI initiative. ... When data is cleaned and standardized for one use case, the next model development becomes faster and more reliable. Yet these productivity gains rarely appear in ROI calculations. The same applies to interoperability across functions. For example, predictive models developed for finance may inform HR or marketing strategies, multiplying AI's value in ways traditional KPIs overlook. ... Emerging models, such as Gartner's multidimensional AI measurement frameworks, and India's evolving AI governance standards offer early guidance. But turning them into practice requires rigor - from assessing how data improvements accelerate downstream use cases to quantifying cross-team synergies, and even recognizing softer outcomes like trust and employee well-being. "AI is neither hype nor savior - it is a tool," Gupta said.


How a fake ICS network can reveal real cyberattacks

Most ICS honeypots today are low interaction, using software to simulate devices like programmable logic controllers (PLCs). These setups are useful for detecting basic threats but are easy for skilled attackers to identify. Once attackers realize they are interacting with a decoy, they stop revealing their tactics. ... ICSLure takes a different approach. It combines actual PLC hardware with realistic simulations of physical processes, such as the movement of machinery on a factory floor. This creates what the researchers call a very high interaction environment. For attackers, ICSLure feels like a live industrial network. For defenders, it provides more accurate data about how adversaries move inside an ICS environment and the techniques they use to disrupt operations. Angelo Furfaro, one of the researchers behind ICSLure, told Help Net Security that deploying this type of environment safely requires careful planning. “The honeypot infrastructure must be completely segregated from any production network through dedicated VLANs, firewalls, and demilitarized zones, ensuring that malicious activity cannot spill over into critical operations,” he said. “PLCs should only interact with simulated plants or digital twins, eliminating the possibility of executing harmful commands on physical processes.”


The Biggest Barriers Blocking Agentic AI Adoption

To achieve the critical mass of adoption needed to fuel mainstream adoption of AI agents, we have to be able to trust them. This is true on several levels; we have to trust them with the sensitive and personal data they need to make decisions on our behalf, and we have to trust that the technology works, our efforts aren’t hampered by specific AI flaws like hallucinations. And if we are trusting it to make serious decisions, such as buying decisions, we have to trust that it will make the right ones and not waste our money. ... Another problem is that agentic AI relies on the ability of agents to interact and operate with third-party systems, and many third-party systems aren’t set up to work with this yet. Computer-using agents (such as OpenAI Operator and Manus AI) circumvent this by using computer vision to understand what’s on a screen. This means they can use many websites and apps just like we can, whether or not they’re programmed to work with them. ... Finally, there are wider cultural concerns that go beyond technology. Some people are uncomfortable with the idea of letting AI make decisions for them, regardless of how routine or mundane those decisions may be. Others are nervous about the impact that AI will have on jobs, society or the planet. These are all totally valid and understandable concerns and can’t be dismissed as barriers to be overcome simply through top-down education and messaging.


The Legal Perils of Dark Patterns in India: Intersection between Data Privacy and Consumer Protection

Dark patterns are any deceptive design pattern using UI or UX that misleads or tricks users by subverting their autonomy and manipulating them into taking actions which otherwise they would not have taken. Coined by UX designer Harry Brignull, who registered a website called darkpatterns.org, which he intended to be designed like a library wherein all types of such UX/UI designs are showcased in public interest, hence the name “dark pattern” came into being. ... The CCPA can order for recall goods, withdraw services or even stop such services in instance it finds that an entity is engaging in dark pattern as per Section 20 of the CP Act, in instance of breach of guidelines. ... By their very design, some patterns harm the user in two ways: first, by manipulating them into choices they would not have otherwise made; and second, by compelling the collection or processing of personal data in ways that breach data protection requirements. In such cases, the entity is not only exploiting the individual but is also failing to meet its legal duties under the DPDPA thereby creating exposure under both the CP Act and the DPDPA. ... Under the DPDPA, the stakes are now significantly higher. The Data Protection Board of India has the authority to impose financial penalties of up to Rs 50 crores for not obtaining purposeful consent or for disregarding technical and organisational measures.


In Order to Scale AI with Confidence, Enterprise CTOs Must Unlock the Value of Unstructured Data

Over the past two years, we’ve witnessed rapid advancements in Large Language Models (LLMs). As these models become increasingly powerful–and more commoditized–the true competitive edge for enterprises will lie in how effectively they harness their internal data. Unstructured content forms the foundation of modern AI systems, making it essential for organizations to build strong unstructured data infrastructure to succeed in the AI-driven era. This is what we mean by an unstructured data foundation: the ability for companies to rapidly identify what unstructured data exists across the organization, assess its quality, sensitivity, and safety, enrich and contextualize it to improve AI performance, and ultimately create a governed system for generating and maintaining high-quality data products at scale. In 2025, unstructured data is as much about quality as it is about quantity. “Quality” in the context of unstructured data remains largely uncharted territory. Companies need clear frameworks to assess dimensions like relevance, freshness, and duplication. Over the past six years, the volume and variety of unstructured data–and the number of AI applications that generate or depend on it–have exploded. Many have called it the largest and most valuable source of data within an organization, and I’d agree–especially as AI becomes increasingly central to how enterprises operate. Here’s why.


Scaling Databases for Large Multi-Tenant Applications

Building and maintaining multi-tenant database applications is one of the more challenging aspects of being a developer, administrator or analyst. Until the debut of AI systems, with their power-hungry GPUs, database workloads represented the most expensive workloads because of their demands on memory, CPU and storage performance to work effectively. ... Sharding is a data management technique that effectively partitions data across multiple databases. At its center, you need something that I like to call a command and control database. Still, I've also seen it called a shard-map manager or a router database. This database contains the metadata around the shards and your environment, and routes application calls to the appropriate shard or database. ... If you are working on the Microsoft stack, I'm going to give a shout out to elastic database tools . This .NET library gives you all the tools like shard-map management, the ability to do data-dependent routing, and doing multi-shard queries as needed. Additionally, consider the ability to add and remove shards to match shifting demands. ... Some other tooling you need to think about in planning, are how to execute schema changes across your partitions. Database DevOps is a mature practice, but rolling out changes across is fleet of databases requires careful forethought and operations.