Showing posts with label authentication. Show all posts

Daily Tech Digest - February 09, 2026


Quote for the day:

"Leaders who make their teams successful are followed even through the hardest journeys." -- Gordon Tredgold



Agentic AI upends SaaS models & sparks valuation shock

The Software-as-a-Service market is moving away from seat-based licensing as agentic artificial intelligence tools change how companies build and purchase business software, according to analysts and industry executives. Investors have already reacted to the shift. A broad sell-off in software stocks followed recent advances in agentic technology, raising questions regarding the durability of current business models. Concerns persist that traditional revenue streams may be at risk as autonomous systems perform increasing volumes of work with fewer human users. ... Not every vendor is well positioned for the transition. Industry observers are using the term "zombie SaaS" for companies that raised large rounds at peak valuations from 2020 to 2022 and now trade or transact below the total capital invested. These businesses often face a mismatch between historical expectations and current demand. They can struggle to raise new funding and may lack the growth rate needed to justify earlier valuations. Meanwhile, newer entrants can build competing products faster and at lower cost, increasing pressure on incumbents with larger cost structures. ... AI is also reshaping procurement decisions. Some companies are shifting toward internal tools as non-technical teams gain access to systems that generate software from natural-language prompts and templates. Industry discussion points to Ramp building internal revenue tools and AI agents in place of third-party software. 


Software developers: Prime cyber targets and a rising risk vector for CISOs

Attackers are increasingly targeting the tools, access, and trusted channels used by software developers rather than simply exploiting application bugs. The threats blend technical compromise — malicious packages, development pipeline abuse, etc. — with social engineering and AI-driven attacks. ... The tokens, API keys, cloud credentials, and CI/CD secrets held by software developers unlock far broader access than a typical office user account, making software engineers a prime target for cybercriminals. “They [developers] hold the keys to the kingdom, privileged access to source code and cloud infrastructure, making them a high-value target,” Wood adds. ... Attackers aren’t just looking for flaws in code — they’re looking for access to software development environments. Common security shortcomings, including overprivileged service accounts, long-lived tokens, and misconfigured pipelines, offer a ready means for illicit entry into sensitive software development environments. “Improperly stored access credentials are low-hanging fruit for even the most amateur of threat actors,” says Crystal Morin, senior cybersecurity strategist at cloud-native security and observability vendor Sysdig. ... AI-assisted development and “vibe coding” are increasing exposure to risk, especially because such code is often generated quickly without adequate testing, documentation, or traceability.
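Morin's point about improperly stored credentials being low-hanging fruit can be illustrated with a minimal sketch of the pattern matching that secret scanners perform over source and config files. The patterns and names below are illustrative only, not drawn from any specific tool; real scanners ship far larger, regularly updated rule sets.

```python
import re

# Illustrative credential-format patterns (a real rule set is much larger).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def scan_for_secrets(text):
    """Return (pattern_name, matched_string) pairs found in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running a check like this in CI before every merge is one cheap way to keep long-lived tokens out of repositories in the first place.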


How network modernization enables AI success and quantum readiness

In essence, inadequate networks limit the ability of AI “blood” to nourish the body of an organization — weakening it and stifling its growth. Many enterprise networks developed incrementally, with successive layers of technology implemented over time. Mergers, divestitures, and one-off projects to solve immediate problems have left organizations with a patchwork of architectures, vendors and configurations. ... As AI traffic increases across data centers, clouds, and the edge, blind spots multiply. Once-manageable technical debt becomes an active security liability, expanding the attack surface and undermining Zero Trust initiatives. ... Quantum computers could break today’s encryption standards, exposing sensitive financial, healthcare and operational data. Worse, attackers are already engaging in “harvest now, decrypt later” strategies — stealing encrypted data today to exploit tomorrow. The relevance to networking and AI issues is straightforward. Preparing for the challenges (and opportunities) of quantum computing will be an incremental, multi-year project that needs to start now. Enterprise IT infrastructures must be able to adapt and scale to quantum computing developments as they evolve. Companies will need to be able to “skate to where the puck will be,” and then skate again! While becoming quantum-safe may seem daunting, organizations don’t have to do it all at once.


Rethinking next-generation OT SOC as IT/OT convergence reshapes industrial cyber defense

Clear gains from next-generation OT SOC innovation emerge across real-world applications, such as OT-aware detection, AI-assisted triage, and distributed SOC models designed to reflect the day-to-day realities of operating critical infrastructure. ... The line between what is OT and what is IT is blurred. Each customer, scenario, and request for proposal shows a unique fingerprint of architectural, process, and industry-related concerns. Our OT SOC development program integrated industrial network sensors with the enterprise SOC, enabling holistic monitoring of plants and offices together. ... Risk is no longer discussed purely from a cyber perspective, but in terms of operational impact, safety, and reliability, which is more consequence-driven. When convergence is implemented securely, alerts are no longer investigated in isolation; identity, remote access activity, asset criticality, and process context are correlated together. ... From a practical standpoint, Mashirova said that automation delivers the most operational value in enrichment, correlation, prioritization, and workflow orchestration. “Automating asset context, vulnerability risk prioritization with remediation recommendations, alert deduplication, and escalation logic dramatically improves analyst efficiency without directly impacting the industrial process. AI agents can act as SOC assistants by correlating large volumes of data and providing decision support to analysts.”


Shai-hulud: The Hidden Cost of Supply Chain Attacks

In recent months, a somewhat novel supply chain threat has emerged against the open source community; attackers are unleashing self-propagating malware on component libraries and targeting downstream victims with infostealers. The most famous recent example of this is Shai-hulud, a worm targeting NPM projects that takes hold when a victim downloads a poisoned component. Once on a victim's machine, the malware uses its access to infect components that the victim maintains and then publishes poisoned versions. ... Another consideration is long-term, lasting damage from these incidents. Sygnia's Kidron explains that the impact of a compromise like credential theft happens on a wider time scale. If the issue has not been adequately contained, attackers can sell access or use it for follow-on activity later. "In practice, damage unfolds across time frames. Immediately — within hours to the first few days after exposure, the primary risk is credential exposure: these campaigns are designed to execute inside developer and CI/CD paths where tokens and secrets are accessible," he says. "When those secrets leak, the downstream harm is not abstract — the attacker can use them (or sell them) to authenticate as the victim and access private repositories, pull data, tamper with code, trigger builds, publish packages, access cloud resources, or perform actions “on behalf” of legitimate identities."


United Airlines CISO on building resilience when disruption is inevitable

Modernization in aviation is less about speed and more about precision. Every change must measurably improve safety, reliability, or resilience. Cybersecurity must respect that bar. ... Cyber risk is assessed in terms of how it affects the ability to move aircraft, crew, and passengers safely and on time. It also means cybersecurity leaders must understand the business end-to-end. You cannot protect an airline effectively without understanding flight operations, maintenance, weather, crew scheduling, and regulatory constraints. Cybersecurity becomes an enabler of safe operations, not a separate technical function. ... Risk assessment goes beyond vendor questionnaires. It includes scenario analysis, operational impact modeling, and close coordination with partners, regulators, and industry groups. Information sharing is essential, because early awareness often matters more than perfect control. Ultimately, we assume some disruptions will originate externally. The goal is to detect them quickly, understand their operational impact, and adapt without compromising safety. Resilience and coordination are just as important as contractual controls. ... Speed matters, but clarity matters more. We also plan extensively in advance. You cannot improvise under pressure when aircraft and passengers are involved. Clear playbooks, rehearsals, and defined decision authorities allow teams to act decisively while staying aligned with safety principles.


Securing IoT devices: why passwords are not enough

Traditional passwords are often not secure enough for connected devices or systems. Many consumers use the default password that comes with the system rather than changing it to a more secure one. When people update their passwords, they often choose weak ones that are easy for cyberattackers to crack. The volume of IoT devices makes manual password management inefficient and risky. A primary threat is the lack of encryption as data travels between networks. When multiple devices are connected, encryption is key to protecting information. Another threat is poor network segmentation, which allows misconfigured or less secure devices to expose the rest of the network. ... Adopting a zero-trust methodology is a better cybersecurity measure than traditional password-based systems. IoT devices can still require a password, but the system may ask for additional information to verify the user’s authorization. Users can set up passkeys, security questions or other methods as the next step after entering a password. ... AI can be used both offensively and defensively in cybersecurity for IoT devices. Hackers use AI to launch advanced attacks, but users can also implement AI to detect suspicious behaviour and address threats. Consumers can purchase AI security systems to safeguard their IoT devices beyond passwords, but they must remain vigilant and continuously monitor their usage to prevent cyberattackers from infiltrating them.
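The step-up flow described above, a password followed by an additional registered factor, can be sketched as a simple policy check. All names here are hypothetical, and a production system would verify each factor cryptographically rather than by set membership.

```python
def verify_access(password_ok, second_factor, registered_factors):
    """Grant access only when the password check passes AND the user
    presents a second factor (passkey, security question, etc.) that
    was previously registered for this account."""
    if not password_ok:
        return False  # password alone failing ends the flow immediately
    # Zero-trust posture: a correct password is never sufficient by itself.
    return second_factor is not None and second_factor in registered_factors
```

The key design point is that the password result and the second factor are combined with AND, so compromising either one alone never grants access.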


Creating a Top-Down and Bottom-Up Grounded Capability Model

A grounded capability model is a complete and stable set of these capabilities, structured in levels from level 1 to sometimes level 4 so senior leaders, middle managers, architects, and digital transformation managers can see the business as an integrated whole. The “grounded” part matters: it means the model reflects strategy and business design, not the quirks of today’s org chart or application portfolio. ... Business Architecture Info emphasizes that a grounded capability model is best built by combining top-down strategic direction with bottom-up operational reality. The top-down view ensures the model is aligned to the business plan and strategic goals, while the bottom-up view ensures it is validated against real value streams, objectives, and subject-matter expertise. ... Top-down capability modeling needs the right stakeholders and the right strategic inputs. On the stakeholder side, senior leaders are essential because they own direction, priorities, and the definition of “what good looks like.” The EA team, enterprise architects and business architects, translates that direction into a structured capability view. ... Bottom-up capability modeling grounds the model in delivery and operational truth. It relies heavily on middle managers, subject matter experts, and business experts. In other words, people who know how value is produced, where friction exists, and what “enablement” really takes. The EA team remains a key facilitator and modeler, but validation and discovery come from the business.


Secure The Path, Not The Chokepoint

The argument here is simple: baseline security policy should be enforced along the path where packets already travel. Programmable data planes, particularly P4 on programmable switching targets, make it possible to enforce meaningful guardrails at line rate, close to the workload, without redesigning the network into a set of security detours. ... When enforcement is concentrated on a few devices, the architecture depends on traffic detours or assumptions about where traffic flows. That creates three practical problems: First, important east-west traffic may never traverse an inspection point. Second, response actions often depend on where a firewall sits rather than where the attacker is operating. Third, changes become slow and risky because every new workload pattern becomes another exception. ... A fabric-first model succeeds when it focuses on controls that are simple, universal, and have a high impact. ... A fabric-first approach does not remove the need for firewalls. Deep application inspection, proxy functions, content controls, and specialized policy workflows still make sense where rich context exists and where inspection overhead is acceptable. The shift is about default placement. Baseline guardrails and rapid containment belong in the fabric. ... A small set of metrics usually tells the story clearly: time from detection to enforced containment, reduction in unintended internal connection attempts, and time to produce a credible incident narrative during review.


Banks Face Dual Authentication Crisis From AI Agents

Traditional authentication relies upon point-in-time verification like MFA and a password, after which access is granted. Over the years, banks have analyzed human spending patterns. But AI agents purchasing around the clock and seeking optimal deals have rendered that model obsolete. "With autonomous agents transacting on behalf of users, the distinction between legitimate and fraudulent activity is blurred, and a single compromised identity could trigger automated losses at scale," said Ajay Patel, head of agentic commerce at Prove. ... But before banks can address the authentication problem, they need to fix their data infrastructure, said Carey Ransom, managing director at BankTech Ventures. AI agents need clean, contextually appropriate data, but banks don't yet have standardized ways to provide it. So, when mistakes occur, who is at fault, and who is liable for making things right? When AI agents can spawn sub-agents that delegate tasks to other AI systems throughout a transaction chain, the liability question gets murky. ... Layered authentication that balances security with speed will reduce the risks of agentic AI, Ransom said. "Variant transaction requests might require a new layer or type of authentication to ensure it is legitimate and reflecting the desired activity," he said. "Checks and balances will be a prevailing approach to protect both sides, while still enabling the autonomy and efficiency the market desires."

Daily Tech Digest - February 06, 2026


Quote for the day:

"When you say my team is no good, all I hear is that I failed as a leader." -- Gordon Tredgold



Everyone works with AI agents, but who controls the agents?

Over the past year, there has been a lot of talk about MCP and A2A, protocols that allow agents to communicate with each other. More and more of the agents now becoming available support and use them. Agents will soon be able to easily exchange information and transfer tasks to each other to achieve much better results. Currently, 50 percent of AI agents in organizations still work in silos. This means that no context or data from external systems is added. The need for context is now clear to many organizations. 96 percent of IT decision-makers understand that success depends on seamless integration. This puts renewed pressure on data silos and integrations. ... For IT decision-makers wondering what they really need to do in 2026, doing nothing is definitely not the right answer, as your competitors who do invest in AI will quickly overtake you. On the other hand, you don’t have to go all-in and blow your entire IT budget on it. ... You need to start now, so start small. Putting the three to five questions your customer service or HR team is asked most frequently into an AI agent can take a huge workload off those teams. There are now several case studies showing that this has reduced the number of tickets by as much as 50-60 percent. AI can also be used for sales reports or planning, which currently takes employees many hours each week.


Mobile privacy audits are getting harder

Many privacy reviews begin with static analysis of an Android app package (APK). This can reveal permissions requested by the app and identify embedded third-party libraries such as advertising SDKs, telemetry tools, or analytics components. Requested permissions are often treated as indicators of risk because they can imply access to contacts, photos, location, camera, or device identifiers. Library detection can also show whether an app includes known trackers. Yet, static results are only partial. Permissions may never be used in runtime code paths, and libraries can be present without being invoked. Static analysis also misses cases where data is accessed indirectly or through system behavior that does not require explicit permissions. ... Apps increasingly defend against MITM using certificate pinning, which causes the app to reject traffic interception even if a root certificate is installed. Analysts may respond by patching the APK or using dynamic instrumentation to bypass the pinning logic at runtime. Both approaches can fail depending on the app’s implementation. Mopri’s design treats these obstacles as expected operating conditions. The framework includes multiple traffic capture approaches so investigators can switch methods when an app resists a specific setup. ... Raw network logs are difficult to interpret without enrichment. Mopri adds contextual information to recorded traffic in two areas: identifying who received the data, and identifying what sensitive information may have been transmitted.


When the AI goes dark: Building enterprise resilience for the age of agentic AI

Instead of merely storing data, AI accumulates intelligence. When we talk about AI “state,” we’re describing something fundamentally different from a database that can be rolled back. ... Lose this state, and you haven’t just lost data. You’ve lost the organizational intelligence that took hundreds of human days of annotation, iteration and refinement to create. You can’t simply re-enter it from memory. Worse, a corrupted AI state doesn’t announce itself the way a crashed server does. ... This challenge is compounded by the immaturity of the AI vendor landscape. Hyperscale cloud providers may advertise “four nines” of uptime (99.99% availability, which translates to roughly 52 minutes of downtime per year), but many AI providers, particularly the startups emerging rapidly in this space, cannot yet offer these enterprise-grade service guarantees. ... When AI agents handle customer interactions, manage supply chains, execute financial processes and coordinate operations, a sustained AI outage isn’t an inconvenience. It’s an existential threat. ... Humans are not just a fallback option. They are an integral component of a resilient AI-native enterprise. Motivated, trained and prepared teams can bridge gaps when AI fails, ensuring continuity of both systems and operations. When you continually reduce your workforce to appease your shareholders, will your human employees remain motivated, trained and prepared?


The blind spot every CISO must see: Loyalty

The insider who once seemed beyond reproach becomes the very vector through which sensitive data, intellectual property, or operational integrity is compromised. These are not isolated failures of vetting or technology; they are failures to recognize that loyalty is relational and conditional, not absolute. ... Organizations have long operated under the belief that loyalty, once demonstrated, becomes a durable shield against insider risk. Extended tenure is rewarded with escalating access privileges, high performers are granted broader system rights without commensurate behavioral review, and verbal affirmations of commitment are taken at face value. Yet time and again patterns repeat. What begins as mutual confidence weakens not through dramatic betrayal but through subtle realignments in personal commitment. An employee who once identified strongly with the mission may begin to feel undervalued, overlooked for advancement, or weighed down by outside pressures. ... Positions with access to crown jewels — sensitive data, financial systems, or personnel records — or executive ranks inherently require proportionately more oversight, as regulated sectors have shown. Professionals in these roles accept this as part of the terrain, with history demonstrating minimal talent loss when frameworks are transparent and supportive.


Researchers Warn: WiFi Could Become an Invisible Mass Surveillance System

Researchers at the Karlsruhe Institute of Technology (KIT) have shown that people can be recognized solely by recording WiFi communication in their surroundings, a capability they warn poses a serious threat to personal privacy. The method does not require individuals to carry any electronic devices, nor does it rely on specialized hardware. Instead, it makes use of ordinary WiFi devices already communicating with each other nearby.  ... “This technology turns every router into a potential means for surveillance,” warns Julian Todt from KASTEL. “If you regularly pass by a café that operates a WiFi network, you could be identified there without noticing it and be recognized later, for example by public authorities or companies.” Felix Morsbach notes that intelligence agencies or cybercriminals currently have simpler ways to monitor people, such as accessing CCTV systems or video doorbells. “However, the omnipresent wireless networks might become a nearly comprehensive surveillance infrastructure with one concerning property: they are invisible and raise no suspicion.” ... Unlike attacks that rely on LIDAR sensors or earlier WiFi-based techniques that use channel state information (CSI), meaning measurements of how radio signals change when they reflect off walls, furniture, or people, this approach does not require specialized equipment. Instead, it can be carried out using a standard WiFi device.


Is software optimization a lost art?

Almost all of us have noticed apps getting larger, slower, and buggier. We've all had a Chrome window that's taking up a baffling amount of system memory, for example. While performance challenges can vary by organization, application and technical stack, it appears the worst performance bottlenecks have migrated to the ‘last mile’ of the user experience, says Jim Mercer ... “While architectural decisions and developer skills remain critical, they’re too often compromised by the need to integrate AI and new features at an exponential pace. So, a lack of due diligence when we should know better.” ... The somewhat concerning part is that AI bloat is structurally different from traditional technical debt, she points out. Rather than accumulated cruft over time, it usually manifests as systematic over-engineering from day one. ... Software optimization has become even more important due to the recent RAM price crisis, driven by surging demand for hardware to meet AI and data center buildout. Though the price increases may be levelling out, RAM is now much more expensive than it was mere months ago. This is likely to shift practices and behavior, Brock ... Security will play a role too, particularly with the growing data sovereignty debate and concerns about bad actors, she notes. Leaner, neater, shorter software is simply easier to maintain – especially when you discover a vulnerability and are faced with working through a massive codebase.


The ‘Super Bowl’ standard: Architecting distributed systems for massive concurrency

In the world of streaming, the “Super Bowl” isn’t just a game. It is a distributed systems stress test that happens in real-time before tens of millions of people. ... It is the same nightmare that keeps e-commerce CTOs awake before Black Friday or financial systems architects up during a market crash. The fundamental problem is always the same: How do you survive when demand exceeds capacity by an order of magnitude? ... We implement load shedding based on business priority. It is better to serve 100,000 users perfectly and tell 20,000 users to “please wait” than to crash the site for all 120,000. ... In an e-commerce context, your “Inventory Service” and your “User Reviews Service” should never share the same database connection pool. If the Reviews service gets hammered by bots scraping data, it should not consume the resources needed to look up product availability. ... When a cache miss occurs, the first request goes to the database to fetch the data. The system identifies that 49,999 other people are asking for the same key. Instead of sending them to the database, it holds them in a wait state. Once the first request returns, the system populates the cache and serves all 50,000 users with that single result. This pattern is critical for “flash sale” scenarios in retail. When a million users refresh the page to see if a product is in stock, you cannot do a million database lookups. ... You cannot buy “resilience” from AWS or Azure. You cannot solve these problems just by switching to Kubernetes or adding more nodes.
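The cache-miss coalescing described above is often called the single-flight pattern: the first miss goes to the database, and everyone else waiting on the same key blocks until that one result fills the cache. A minimal thread-based sketch, with hypothetical names and an in-memory dict standing in for a real cache tier:

```python
import threading

class SingleFlightCache:
    """Collapse concurrent misses for the same key into one backend call;
    waiters block on a per-key lock until the first caller fills the cache."""

    def __init__(self, loader):
        self._loader = loader   # e.g. a database lookup function
        self._cache = {}
        self._locks = {}        # one lock per key currently being loaded
        self._mu = threading.Lock()

    def get(self, key):
        if key in self._cache:          # fast path: already populated
            return self._cache[key]
        with self._mu:                  # get-or-create the per-key lock
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                      # only one thread loads; rest wait here
            if key not in self._cache:  # re-check after acquiring the lock
                self._cache[key] = self._loader(key)
        return self._cache[key]
```

Under a flash-sale stampede, the double-check inside the per-key lock is what guarantees the backend sees exactly one lookup per key, no matter how many threads pile up.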


Cloud-native observability enters a new phase as the market pivots from volume to value

“The secret in the industry is that … all of the existing solutions are motivated to get people to produce as much data as possible,” said Martin Mao, co-founder and chief executive officer of Chronosphere, during an interview with theCUBE. “What we’re doing differently with logs is that we actually provide the ability to see what data is useful, what data is useless and help you optimize … so you only keep and pay for the valuable data.” ... Widespread digital modernization is driving open-source adoption, which in turn demands more sophisticated observability tools, according to Nashawaty. “That urgency is why vendor innovations like Chronosphere’s Logs 2.0, which shift teams from hoarding raw telemetry to keeping only high-value signals, are resonating so strongly within the open-source community,” he said. ... Rather than treating logs as an add-on, Logs 2.0 integrates them directly into the same platform that handles metrics, traces and events. The architecture rests on three pillars. First, logs are ingested natively and correlated with other telemetry types in a shared backend and user interface. Second, usage analytics quantify which logs are actually referenced in dashboards, alerts and investigations. Third, governance recommendations guide teams toward sampling rules, log-to-metric conversion or archival strategies based on real usage patterns.


How recruitment fraud turned cloud IAM into a $2 billion attack surface

The attack chain is quickly becoming known as the identity and access management (IAM) pivot, and it represents a fundamental gap in how enterprises monitor identity-based attacks. CrowdStrike Intelligence research published on January 29 documents how adversary groups operationalized this attack chain at an industrial scale. Threat actors are cloaking the delivery of trojanized Python and npm packages through recruitment fraud, then pivoting from stolen developer credentials to full cloud IAM compromise. ... Adversaries are shifting entry vectors in real-time. Trojanized packages aren’t arriving through typosquatting as in the past — they’re hand-delivered via personal messaging channels and social platforms that corporate email gateways don’t touch. CrowdStrike documented adversaries tailoring employment-themed lures to specific industries and roles, and observed deployments of specialized malware at FinTech firms as recently as June 2025. ... AI gateways excel at validating authentication. They check whether the identity requesting access to a model endpoint or training pipeline holds the right token and has privileges for the timeframe defined by administrators and governance policies. They don’t check whether that identity is behaving consistently with its historical pattern or is randomly probing across infrastructure.
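The gap described above, gateways that validate tokens but never ask whether an identity is behaving normally, can be illustrated with a toy behavioral check. The function, threshold, and inputs are invented for illustration; real systems build statistical baselines rather than simple counters.

```python
def is_anomalous(identity_history, requested_resource, probe_count,
                 probe_threshold=5):
    """Flag a request that targets a resource this identity has never
    touched before, while the identity is also probing many distinct
    resources in a short window (a pattern typical of a stolen credential).
    Token validity is assumed; this check runs in addition to it."""
    novel = requested_resource not in identity_history
    probing = probe_count > probe_threshold
    return novel and probing
```

Even a crude check like this catches the scenario in the article: a stolen developer credential that authenticates perfectly but suddenly fans out across infrastructure it has never accessed.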


The Hidden Data Access Crisis Created by AI Agents

As enterprises adopt agents at scale, a different approach becomes necessary. Instead of having agents impersonate users, agents retain their own identity. When they need data, they request access on behalf of a user. Access decisions are made dynamically, at the moment of use, based on human entitlements, agent constraints, data governance rules, and intent (purpose). This shifts access from being identity-driven to being context-driven. Authorization becomes the primary mechanism for controlling data access, rather than a side effect of authentication. ... CDOs need to work closely with IAM, security, and platform operations teams to rethink how access decisions are made. In particular, this means separating authentication from authorization and recognizing that impersonation is no longer a sustainable model at scale. Authentication teams continue to establish trust and identity. Authorization mechanisms must take on the responsibility of deciding what data should be accessible at query time, based on the human user, the agent acting on their behalf, the data’s governance rules, and the purpose of the request. ... CDOs must treat data provisioning as an enterprise capability, not a collection of tactical exceptions. This requires working across organizational boundaries. Authentication teams continue to establish trust and identity. Security teams focus on risk and enforcement. Data teams bring policy and governance context. 
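A minimal sketch of the context-driven authorization decision described above, where the agent keeps its own identity and access is decided at query time from the human's entitlements, the agent's constraints, and the declared purpose. The field names and policy shape are assumptions for illustration, not any specific product's API.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    user_entitlements: set   # datasets the human is entitled to see
    agent_scopes: set        # datasets this agent is constrained to
    dataset: str             # what is being requested
    purpose: str             # declared intent of the request

def authorize(req, allowed_purposes):
    """Grant only the intersection of human entitlements and agent scopes,
    and only for a purpose the dataset's governance rules permit.
    `allowed_purposes` maps dataset -> set of permitted purposes."""
    if req.dataset not in req.user_entitlements:
        return False   # the human could not see this data themselves
    if req.dataset not in req.agent_scopes:
        return False   # the agent is not scoped to this data
    return req.purpose in allowed_purposes.get(req.dataset, set())
```

Note that no branch asks "is this the user's session?": the decision is driven entirely by context, which is the shift away from impersonation the article argues for.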

Daily Tech Digest - January 08, 2026


Quote for the day:

“When opportunity comes, it’s too late to prepare.” -- John Wooden



All in the Data: The State of Data Governance in 2026

For years, Non-Invasive Data Governance was treated as the “nice” approach — the softer way to apply discipline without disruption. But 2026 has rewritten that narrative. Now, NIDG is increasingly seen as the only sustainable way to govern data in a world of continuous transformation. Traditional “assign people to be stewards” approaches simply cannot keep up with agentic AI, edge analytics, real-time data products, and the modern demand for organizational agility. ... Governance becomes the spark that ignites faster value, safer AI, more confident decision-making, and a culture that welcomes transformation instead of bracing for it. This catalytic effect is why organizations that embrace “The Data Catalyst³” in 2026 are not merely improving — they are accelerating, compounding their gains, and outpacing peers who still treat governance as a slow, procedural necessity rather than the engine of modern data excellence. ... This year, metadata is no longer an afterthought. It is the bloodstream of governance. Organizations are finally acknowledging that without shared understanding, consistent definitions, and a reliable inventory of where data comes from and who touches it, AI will hallucinate confidently while leaders make decisions blindly. ... Perhaps the greatest evolution in 2026 is the rise of governance that keeps pace with AI. Organizations can no longer review policies once a year or update data inventories only during budget cycles. Decision cycles are compressing. Change windows are shrinking. 


The Next Two Years of Software Engineering

AI unlocks massive demand for developers across every industry, not just tech. Healthcare, agriculture, manufacturing, and finance all start embedding software and automation. Rather than replacing developers, AI becomes a force multiplier that spreads development work into domains that never employed coders. We’d see more entry-level roles, just different ones: “AI-native” developers who quickly build automations and integrations for specific niches. ... Position yourself as the guardian of quality and complexity. Sharpen your core expertise: architecture, security, scaling, domain knowledge. Practice modeling systems with AI components and think through failure modes. Stay current on vulnerabilities in AI-generated code. Embrace your role as mentor and reviewer: define where AI use is acceptable and where manual review is mandatory. Lean into creative and strategic work; let the junior+AI combo handle routine API hookups while you decide which APIs to build. ... Lean into leadership and architectural responsibilities. Shape the standards and frameworks that AI and junior team members follow. Define code quality checklists and ethical AI usage policies. Stay current on compliance and security topics for AI-produced software. Focus on system design and integration expertise; volunteer to map data flows across services and identify failure points. Get comfortable with orchestration platforms. Double down on your role as technical mentor: more code reviews, design discussions, technical guidelines.


What will IT transformation look like in 2026, and how do you know if you're on the right track?

The IT organization will become the keeper of the journal in terms of business value, and a lot of organizations haven't developed those muscles yet. ... Technical complexity remains a huge challenge. Back-end systems are becoming more complicated, requiring stronger architecture frameworks, faster design cycles and reliable data access to support emerging agentic AI frameworks. ... "Many IT organizations have taken the easy way," said de la Fe, referring to cloud and application service providers. As a result, their data is spread across different environments. Organizations may technically own their data, he said, but "it isn't with them -- or architected in a manner where they can access and use it as they may need to." ... "They believe it's a period of architectural redux because applications are becoming more heterogeneous," Vohra said. "Their architecture must be more modular and open, but they can't simply say no to core applications, because the business will demand them. They must be more responsive to the business than ever before." ... Without business-IT alignment, IT cannot deliver the business impact the organization now expects. CIOs are under increasing pressure from senior leadership and boards to improve efficiency and deliver business value, as measured in business KPIs rather than traditional IT KPIs. On the technology side, CIOs also need to ensure they are architecting for the future. 


Why CISOs Must Adopt the Chief Risk Officer Playbook

As the threat landscape becomes increasingly complex due to AI acceleration, shifting regulations, and geopolitical volatility, the role of the security leader is evolving. For CISOs and their teams, the McKinsey research provides a blueprint for transforming from technical gatekeepers into strategic risk leaders. ... A common question in the industry is whether a company needs both a Chief Risk Officer and a Chief Information Security Officer (CISO). ... Understanding the difference in what these two leaders look for is key to collaboration:
- Primary goal: the CRO protects the organization's financial health and long-term viability; the CISO protects the confidentiality, integrity, and availability of digital assets.
- Key metrics: the CRO tracks risk-adjusted return on capital and insurance premium outcomes; the CISO tracks mean time to detect (MTTD), threat actor activity, and control effectiveness.
- Focus areas: the CRO watches market shifts, credit risk, geopolitical crises, and supply chain fragility; the CISO watches vulnerabilities, phishing campaigns, ransomware, and insider threats.
- Outcome: the CRO ensures the business can survive any "bad day," financial or otherwise; the CISO ensures the digital infrastructure is resilient against constant attack.
... The next generation of cybersecurity leaders will not just be the ones who can write the best code or configure the tightest firewall. They will be the ones who can walk into a boardroom, speak the language of the CRO, and explain how a specific technical risk impacts the organization's bottom line.


Passwords are where PCI DSS compliance often breaks down

CISOs often ask where password managers fit within the PCI DSS language. The standard does not mandate specific technologies, but it defines outcomes that password managers help achieve. Requirement 8 focuses on identifying users and authenticating access. Unique credentials and protection of authentication factors are core expectations. Requirement 12.6 addresses security awareness. Training must reflect real risks and employee responsibilities. Demonstrating that employees are trained to use approved credential management tools strengthens assessment evidence. Self-assessment questionnaires reinforce this operational focus. They ask how credentials are handled, how access is reviewed, and how training is documented, pushing organizations to demonstrate process rather than policy. ... “Security leaders want to know who accessed what and when. That visibility turns password management from a convenience feature into a control.” ... Culture shows up in small choices. Whether employees ask before sharing access. Whether they trust approved tools. Whether security feels like support or friction. PCI DSS 4.x pushes organizations to take those signals seriously. Passwords sit at the center of that shift because they touch every system and every user. Training alone does not change behavior. Tools alone do not create understanding. 


AI Demand and Policy Shifts Redraw Europe’s Data Center Map for 2026

Rising demand for AI, particularly large language models (LLMs) and generative AI, is driving the need for large-scale GPU clusters and advanced infrastructure. The EU's forthcoming Cloud and AI Development Act aims to triple the region's data center processing capacity within five to seven years, with streamlined approvals and public funding for energy-efficient facilities expected to stimulate growth. ... “We expect to see a strategic bifurcation,” Lamb said, with FLAP-D metros continuing to attract latency-sensitive enterprise and inference workloads that require proximity to end users, while large-scale AI training deployments gravitate toward regions with abundant, cost-effective renewable energy. ... Despite abundant renewables and favorable cool conditions, the Nordics have not scaled as quickly as anticipated. Thorpe reported steady but slower growth, citing municipal moratoriums – particularly in Sweden – and lower fiber density. Even so, AI training workloads are renewing interest in Norway and Finland. “The northern part of Norway is a good example,” Thorpe said, noting OpenAI’s planned Stargate facility powered entirely by hydroelectric energy. “They are able to achieve much lower PUE [power usage effectiveness] because of the cooler climate.” ... Meanwhile, stricter energy-efficiency requirements are complicating the planning process.


Top cyber threats to your AI systems and infrastructure

Multiple attack types against AI systems are arising. Some attacks, such as data poisoning, occur during training. Others, such as adversarial inputs, happen during inference. Still others, such as model theft, occur during deployment. ... Here, the attack goes after the model itself, seeking to produce inaccurate results by tampering with the model’s architecture or parameters. Some definitions of model poisoning also include attacks where the model’s training data has been corrupted through data poisoning. ... “With prompt injection, you can change what the AI agent is supposed to do,” says Fabien Cros ... Model owners and operators use perturbed data to test models for resiliency, but hackers use it to disrupt. In an adversarial input attack, malicious actors feed deceptive data to a model with the goal of making the model output incorrect. ... Like other software systems, AI systems are built with a combination of components that can include open-source code, open-source models, third-party models, and various sources of data. Any security vulnerability in the components can show up in the AI systems. This makes AI systems vulnerable to supply chain attacks, where hackers can exploit vulnerabilities within the components to launch an attack. ... Also called model jailbreaking, attackers’ goal here is to get AI systems — primarily through engaging with LLMs — to disregard the guardrails that confine their actions and behavior, such as safeguards to prevent harmful, offensive, or unethical outputs.
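A minimal illustration of an adversarial input, using a toy logistic classifier rather than any production model: an FGSM-style perturbation (stepping each feature along the sign of the loss gradient) stays small per feature yet reliably lowers the model's confidence in the true class. The weights and input here are arbitrary.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w.x + b).
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.8, -0.4, 1.0])      # benign input, true label y = 1
p_clean = sigmoid(w @ x + b)

# FGSM-style perturbation: for logistic loss, d(loss)/dx = (p - y) * w,
# so stepping along sign(grad) maximally increases the loss per unit of
# per-feature change.
eps = 0.25
grad = (p_clean - 1.0) * w
x_adv = x + eps * np.sign(grad)
p_adv = sigmoid(w @ x_adv + b)

print(p_clean > p_adv)  # True: confidence in the true class drops
```

Against deep networks the same one-step recipe (or iterated variants) produces inputs that look unchanged to humans but flip the model's output, which is why defenders use perturbed data for resilience testing.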


The future of authentication in 2026: Insights from Yubico’s experts

As we look ahead to the future of authentication and identity, 2026 will be a pivotal year as the industry intensifies its focus on the standardization work required to make post-quantum cryptography (PQC) viable at scale as we near a post-quantum future. ... The proven, most effective solution to combat stolen and fake identities is the use of verifiable credentials – specifically, strong authentication combined with digital identity verification. The good news is countries around the world are taking action, with the EU moving forward with a bold plan over the next year: By late December 2026, each Member State must make at least one EUDI wallet available. ... AI's usefulness has rapidly improved over the years, and I anticipate that it will eventually help the general public in a meaningful way. In 2026, the cybersecurity industry should focus more efforts globally on accelerating the adoption of digital content transparency and authenticity standards to help everyone discern fact from fiction and continue the phishing-resistant MFA journey to minimize some of the impact of scams. ... In 2026, there will be a pivotal shift in the digital identity landscape as the industry moves beyond a narrow, consumer-centric focus to an enterprise-centric one. While the public conversation around digital identities has historically centered on consumer-facing scenarios like age verification, the coming year will bring a realisation that robust digital identity truly belongs in the heart of businesses.


7 changes to the CIO role in 2026

As AI transforms how people do their jobs, CIOs will be expected to step up and help lead the effort.
“A lot of the conversations are about implementing AI solutions, how to make solutions work, and how they add value,” says Ryan Downing. “But the reality is with the transformation AI is bringing into the workplace right now, there’s a fundamental change in how everyone will be working.” ... This year, the build or buy decisions for AI will have dramatically bigger impacts than they did before. In many cases, vendors can build AI systems better, quicker, and cheaper than a company can do it themselves. And if a better option comes along, switching is a lot easier than when you’ve built something internally from scratch. ... The key is to pick platforms that have the ability to scale, but are decoupled, he says, so enterprises can pivot quickly, but still get business value. “Right now, I’m prioritizing flexibility,” he says. Bret Greenstein, chief AI officer at management consulting firm West Monroe Partners, recommends CIOs identify aspects of AI that are stable, and those that change rapidly, and make their platform selections accordingly. ... “In the past, IT was one level away from the customer,” he says. “They enabled the technology to help business functions sell products and services. Now with AI, CIOs and IT build the products, because everything is enabled by technology. They go from the notion of being services-oriented to product-oriented.”


Agentic AI scaling requires new memory architecture

To avoid recomputing an entire conversation history for every new word generated, models store previous states in the KV cache. In agentic workflows, this cache acts as persistent memory across tools and sessions, growing linearly with sequence length. This creates a distinct data class. Unlike financial records or customer logs, KV cache is derived data; it is essential for immediate performance but does not require the heavy durability guarantees of enterprise file systems. General-purpose storage stacks, running on standard CPUs, expend energy on metadata management and replication that agentic workloads do not require. The current hierarchy, spanning from GPU HBM (G1) to shared storage (G4), is becoming inefficient ... The industry response involves inserting a purpose-built layer into this hierarchy. The ICMS platform establishes a “G3.5” tier—an Ethernet-attached flash layer designed explicitly for gigascale inference. This approach integrates storage directly into the compute pod. By utilising the NVIDIA BlueField-4 data processor, the platform offloads the management of this context data from the host CPU. The system provides petabytes of shared capacity per pod, boosting the scaling of agentic AI by allowing agents to retain massive amounts of history without occupying expensive HBM. The operational benefit is quantifiable in throughput and energy.
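The mechanism behind that linear growth is easy to sketch: each decoding step computes keys and values for the new token once, appends them to the cache, and attends over the cached history instead of recomputing it. A toy single-head version with random weights (no real model, dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # per-head hidden dimension

def attend(q, K, V):
    """Single-head scaled dot-product attention over all cached positions."""
    scores = K @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

Wk = rng.normal(size=(d, d))
Wv = rng.normal(size=(d, d))

# Incremental decoding: K/V for each token are computed once, then reused.
K_cache, V_cache = [], []
outputs = []
for step in range(5):
    x = rng.normal(size=d)       # current token's hidden state
    K_cache.append(Wk @ x)       # cache grows linearly with sequence length
    V_cache.append(Wv @ x)
    q = rng.normal(size=d)
    outputs.append(attend(q, np.array(K_cache), np.array(V_cache)))

print(len(K_cache))  # one cached entry per generated position
```

The cache is derived data in the article's sense: it can always be rebuilt from the token history by re-running the projections, which is why it tolerates weaker durability guarantees than enterprise records.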

Daily Tech Digest - December 12, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



Escaping the transformation trap: Why we must build for continuous change, not reboots

Each new wave of innovation demands faster decisions, deeper integration and tighter alignment across silos. Yet, most organizations are still structured for linear, project-based change. As complexity compounds, the gap between what’s possible and what’s operationally sustainable continues to widen. The result is a growing adaptation gap — the widening distance between the speed of innovation and the enterprise’s capacity to absorb it. CIOs now sit at the fault line of this imbalance, confronting not only relentless technological disruption but also the limits of their organizations’ ability to evolve at the same pace. ... Technical debt has been rapidly amassing in three areas: accumulated, acquired, and emergent. The result destabilizes transformation efforts. ... Most modernization programs change the surface, not the supporting systems. New digital interfaces and analytics layers often sit atop legacy data logic and brittle integration models. Without rearchitecting the semantic and process foundations, the shared meaning behind data and decisions, enterprises modernize their appearance without improving their fitness. ... The new question is not, ‘How do we transform again?’ but ‘How do we build so we never need to?’ That requires architectures capable of sustaining and sharing meaning across every system and process, which technologists refer to as semantic interoperability.


The state of AI in 2026 – part 1

“The real race will be about purpose, measurable outcomes and return on investment. AI is no longer simply a technical challenge, it has become a business strategy,” said Zaccone. “However, this evolution comes with new risks. As agentic systems gain autonomy, securing the underlying AI infrastructure becomes critical. Standards are still emerging, but adopting strong security and governance practices early dramatically increases the likelihood of success. At the same time, AI is reshaping the risk landscape faster than regulation can adapt, which means it’s raising pressing questions around data sovereignty, compliance and access to AI-generated data across jurisdictions.” ... “Many teams now face practical limits around data quality, compute efficiency and responsible integration with existing systems. There is a clear gap between those who just wrap APIs around foundation models and those who actually optimise architectures and training pipelines. The next phase of AI is about reliability, interpretability and building systems that engineers can trust and improve over time,” Khan said. ... “To close the gap between the vision and reality of agentic AI over the next 12 months, enterprise agentic automation (EAA) will be essential. By blending dynamic AI with determinist guardrails and human-in-the-loop checkpoints, EAA empowers enterprises to automate complex, exception-heavy or cognitive work without losing control,” explained Freund.


Cybersecurity isn’t underfunded — It’s undermanaged

Of course, cybersecurity projects are often complex because they need to reach across corporate silos and geographies to deliver effective protection to the business. This is not natural in large firms, which are, almost by essence, territorial and political. But beyond that, the profile of CISOs is also a key dimension: Most are technologists by trade and background, and have spent the last decade firefighting incidents, incapable of building or delivering any kind of long-term narrative. They have not developed the type of management experience, political finesse or personal gravitas that they would require to be truly successful, now that the spotlight is firmly on them from the top of the firm. Many genuinely think that chronic under-investment in cybersecurity is the root cause of insufficient maturity levels. In fact, chronic execution failure, linked to endemic business short-termism, is at the heart of the matter. Both point to the governance and cultural issues that are the real root causes of the long-term stagnation of cybersecurity maturity in large firms. For the CISOs who have not integrated those cultural aspects and are almost always left out of those decisions, it breeds frustration; frustration breeds short tenures; short tenures aggravate the management and leadership mismatch: You cannot deliver genuine transformative impact in large firms on those timeframes.


Document databases – understanding your options

There are two decisions to take around databases today—what you choose to run, and how you choose to run it. The latter choice covers a range of different deployment options, from implementing your own instance of a technology on your own hardware and storage, through to picking a database as a service where all the infrastructure is abstracted away and you only see an API. In between, you can look at hosting your own instances in the cloud, where you manage the software while the cloud service provider runs the infrastructure, or adopt a managed service where you still decide on the design but everything else is done for you. ... The first option is to look at alternative approaches to running MongoDB itself. Alongside MongoDB-compatible APIs, you can choose to run different versions of MongoDB or alternatives to meet your document database needs. ... The second migration option is to use a service that is compatible with MongoDB’s API. For some workloads, being compatible with the API will be enough to move to another service with minimal to no impact. ... The third option is to use an alternative document database. In the world of open source, Apache CouchDB is another document database that works with JSON and can be used for projects. It is particularly useful where applications might run on mobile devices as well as cloud instances; mobile support is a feature that MongoDB has deprecated.


Why AI Fatigue Is Sending Customers Back to Humans

The pattern is familiar across industries: digital experiences that start strong, then steadily degrade as companies prioritize cost-cutting over satisfaction. In banking, this manifests in frustratingly specific ways: chatbots that loop through unhelpful responses, automated fraud alerts that lock accounts without a path to resolution, and phone trees that make reaching a human nearly impossible. ... The path forward for community banks and credit unions isn’t choosing between digital efficiency and human service or retreating to nostalgia for branch-based banking. It’s investing strategically in both. ... Geographic proximity enables genuine empathy that algorithms can’t replicate. Rajesh Patil, CEO at Digital Agents Service Organization (CUSO), offers an example: “When there’s a disaster in a community, an AI chatbot doesn’t know what happened. But a local branch employee knows and can say, ‘I understand. Let me help you.'” The most sophisticated community bank strategy uses technology to identify opportunities while humans deliver the insight. ... After decades of pursuing digital transformation, community banks and credit unions are discovering their competitive advantage was human all along. But the path forward isn’t nostalgia for branch-based banking, it’s strategic investment in both digital infrastructure and human capacity.


The Cloud Investment Paradox: Why More Spending Isn’t Delivering AI Results

There are three common gaps that stall AI progress, even after significant cloud spend. First is data architecture. Many organisations lift and shift legacy systems into the cloud without rethinking how data will flow across teams and tools. They end up with the same fragmentation problems, just in a new environment. Second is the skills gap. Research has found that 27% of organisations lack the internal expertise to harness AI’s potential. And it is not just data scientists. You need cloud architects who understand how to design environments specifically for AI workloads, not just generic compute. Third is data quality and accessibility. AI models cannot perform well without clean, consistent input. But too often, data governance is an afterthought. Only 1 in 5 organisations feel confident that their data is truly AI-ready. That is a foundational issue, not a fine-tuning one. ... Before investing in another AI pilot or data science hire, organisations should take a step back. Is the data ready? Are the pipelines in place? Do internal teams have what they need to turn compute into insight? This means prioritising data integration and governance before algorithms. It means investing in internal training and hiring with long-term capability in mind. And it means treating cloud and AI as part of the same strategy, not separate silos.


Beyond the login: Why “identity-first” security is leaking data and why “context-first” is the fix

The uncomfortable truth emerging from recent high-profile breaches is that identity-first security—when operating in isolation—is leaking data. Threat actors have evolved; they are no longer just trying to break down the door; they are cloning the keys. The reliance on static authentication events has created a dangerous blind spot. ... Standard facial recognition often looks for geometric matches—distance between eyes, shape of the nose. Deepfakes can replicate this perfectly, turning video verification into a vulnerability rather than a safeguard. To counter this, modern security must implement advanced “Liveness Detection”. It is no longer enough to match a face to a database; the system must analyse micro-expressions and texture to ensure the face belongs to a live human presence, not a digital puppet. Yet, even with these safeguards, betting the entire security posture solely on verifying who the user is remains a risky strategy. ... To stop these leaks, security must move beyond the “Who” (Identity) and interrogate the “Where,” “What,” and “How” (Context). This requires a shift from static gates to Continuous Adaptive Trust. Context is not a single data point; it is a composite score derived from real-time telemetry. ... For technology leaders, this convergence is not just a technical upgrade; it is a strategic necessity for compliance. Frameworks like the Digital Personal Data Protection (DPDP) Act require organisations to implement “reasonable security safeguards”.
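A composite context score of the kind described might be sketched as a weighted blend of telemetry signals evaluated continuously, not just at login. The signal names, weights, and thresholds below are illustrative assumptions, not any product's actual policy:

```python
# Hypothetical telemetry signals, each normalised to [0, 1],
# where higher means more anomalous.
WEIGHTS = {"geo_velocity": 0.3, "device_posture": 0.25,
           "data_sensitivity": 0.25, "time_of_day": 0.2}

def context_risk(signals: dict, weights: dict) -> float:
    """Weighted composite risk score over real-time telemetry."""
    total = sum(weights.values())
    return sum(weights[k] * signals.get(k, 0.0) for k in weights) / total

def decide(score: float) -> str:
    # Continuous adaptive trust: step up friction as context degrades,
    # instead of granting all-or-nothing access at authentication time.
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step-up-auth"
    return "deny"

# A session whose location changed impossibly fast while touching
# sensitive data: identity checks alone would still pass.
session = {"geo_velocity": 0.9, "device_posture": 0.2,
           "data_sensitivity": 0.8, "time_of_day": 0.5}
score = context_risk(session, WEIGHTS)
print(decide(score))  # step-up-auth
```

Because the score is recomputed as telemetry changes, the same session can be quietly allowed one minute and challenged the next, which is the practical difference between a static gate and adaptive trust.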


Why Critical Infrastructure Needs Security-Forward Managed File Transfer Now

Today’s cyber attackers often use ordinary documents and files to breach organizations. Without strong security checks, it’s surprisingly easy for bad actors to cause major problems. Attacks exploit both common file formats and weaknesses in legacy operational technology (OT) environments. ... Modern managed file transfer (MFT) requires a layered security approach to effectively combat file-based threats and comply with best practices. This approach dictates that organizations must encrypt files at rest and in transit, employ strong hash checks, and use digital signing to validate the origin and integrity of files throughout their lifecycle. ... Many MFT tools incorporate multi-layered malware scanning. This works by scanning every file with multiple malware engines rather than relying on a single one, given that different engines detect different malware families and variants. Parallel multiscanning not only improves detection rates but also shortens the window for exploitation of zero-day vulnerabilities and polymorphic malware. This helps to reduce the chance of false negatives before files enter sensitive networks. The scanning should be directly integrated into upload, download, and workflow steps so no file can move between zones without passing through a multi-engine inspection pipeline. ... MFT workflows can automatically route files to a sandbox based on risk scores, file types, sender reputation, or country of origin. Then, files are only released upon passing behavioral checks.
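The parallel multiscanning idea can be sketched with stand-in detection functions in place of real vendor engines; any single verdict is enough to block the file. The signatures the stand-in engines look for are arbitrary examples:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in engines: each returns True if it flags the file as malicious.
# A real MFT product would call out to distinct vendor engines here.
def engine_a(data: bytes) -> bool:
    return b"EICAR" in data                 # signature match

def engine_b(data: bytes) -> bool:
    return data.startswith(b"MZ")           # suspicious executable header

def engine_c(data: bytes) -> bool:
    return b"<script>" in data.lower()      # embedded script content

ENGINES = [engine_a, engine_b, engine_c]

def multiscan(data: bytes) -> bool:
    """Scan with every engine in parallel; one detection blocks the file."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        verdicts = list(pool.map(lambda engine: engine(data), ENGINES))
    return any(verdicts)

print(multiscan(b"hello world"))            # False: no engine objects
print(multiscan(b"payload with EICAR tag")) # True: one engine suffices
```

Running the engines concurrently keeps the extra coverage from adding linear latency, which is what makes it practical to put the check inline on every upload, download, and workflow hop.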


Fight AI Disinformation: A CISO Playbook for Working with Your C-Suite

Unlike misinformation or malinformation, which may be inaccurate or misleading but not necessarily harmful, disinformation is both false and designed specifically to damage organizations. It can be episodic, targeting individuals for immediate gain, such as tricking an employee into transferring funds via a deepfaked call. It can also be industrial, operating at scale to undermine brand reputation, manipulate stock prices, or probe organizational defenses over time. The attack surfaces are broad: internally, adversaries exploit corporate meeting solutions, email, and messaging platforms to bypass authentication and impersonate trusted individuals. ... Without clear ownership and cross-functional collaboration, efforts to counter disinformation are often disjointed and ineffectual. In some cases, organizations leave disinformation as an unmanaged risk, exposing themselves to episodic attacks on individuals and industrial campaigns targeting reputation and financial stability. Another common pitfall is failing to differentiate between types of information threats. CISOs should focus their resources on disinformation where intent to harm and lack of accuracy intersect, rather than attempting to police all forms of misinformation or malinformation. ... CISOs must lead the way in communicating the risks and fostering a culture of shared responsibility, engaging all employees in detection, reporting, and response. This includes developing internal tooling for monitoring and reporting, promoting transparency, and ensuring ongoing education about evolving threats.


Why AI Scaling Innovation Requires an Open Cloud Ecosystem

Developers and enterprises should have the flexibility to construct custom multi-cloud infrastructure that provides the appropriate specifications. Distributing workloads allows them to move faster on new projects without driving up infrastructure spend and overconsuming resources. It also enables them to prioritize in-country data residency for enhanced compliance and security. With an open ecosystem, developers and enterprises can stagger cloud-agnostic applications across a mosaic of public and private clouds to optimize hardware efficiency, maintain greater autonomy in data management and data security, and run applications seamlessly at the edge. This promotes innovation at all layers of the stack, from training to testing to processing, making it easier to deploy the best possible services and applications. An open ecosystem also reduces the branding and growth risks associated with hyperscaler dependence. Often, when a developer or enterprise runs their products exclusively on a single platform, they become less their own product and more an outgrowth of their hyperscaler cloud provider; instead of selling their app on its own, they sell the hyperscaler’s services. ... Supporting hyper-specific AI use cases often begets complex development demands: from hefty compute power, to multi-model frameworks, to strict data governance and pristine data quality. Even large enterprises don’t always have the resources in-house to account for these parameters.

Daily Tech Digest - December 11, 2025


Quote for the day:

"We become what we think about most of the time, and that's the strangest secret." -- Earl Nightingale



SEON Predicts Fraud’s Next Frontier: Entering the Age of Autonomous Attacks

AI has become a permanent part of the fraud landscape, but not in the way many expected. AI has transformed how we detect and prevent fraud, from adaptive risk scoring to real-time data enrichment, but full autonomy remains out of reach. Fraud detection still depends on human judgment, such as weighing intent, interpreting ambiguity, and understanding context that no model can fully replicate. Fraud prevention is a complex interplay of data, intent, and context, and that is where human reasoning continues to matter most. Analysts interpret ambiguity, weigh risk appetite, and understand social signals that no model can fully replicate. What AI can do is amplify that capability. ... The boundary between genuine and synthetic activity is blurring. Generative AI can now simulate human interaction with high accuracy, including realistic typing rhythms, believable navigation flows, and deepfake biometrics that replicate natural variance. The traditional approach of searching for the red flags no longer works when those flags can be easily fabricated. The next evolution in fraud detection will come from baselining legitimate human behaviour. By modelling how real users act over time, and looking at their rhythms, routines, and inconsistencies, we can identify the subtle deviations that synthetic agents struggle to mimic. It is the behavioural equivalent of knowing a familiar face in a crowd. Trust comes from recognition, not reaction. 
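Baselining legitimate behaviour can start as simply as comparing a new session against a user's historical rhythm. A toy z-score sketch over hypothetical inter-keystroke timings (the numbers are invented; production systems model many more signals):

```python
import statistics

# Hypothetical per-user baseline: inter-keystroke intervals in ms,
# collected over past legitimate sessions.
baseline = [112, 98, 120, 105, 130, 99, 118, 107, 125, 101]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def deviation(sample: list) -> float:
    """z-score of a new session's mean interval against the user's baseline."""
    return abs(statistics.mean(sample) - mu) / sigma

human_like = [108, 115, 96, 122]   # natural variance, close to baseline
bot_like = [50, 50, 51, 50]        # machine-steady, too fast, too regular

print(deviation(human_like) < deviation(bot_like))  # True
```

The point mirrors the article's framing: rather than hunting for fabricated red flags, the model recognises the familiar face, so the steady rhythm of a synthetic agent stands out as the anomaly.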


The Invisible Vault: Mastering Secrets Management in CI/CD Pipelines

In the high-speed world of modern software development, Continuous Integration and Continuous Deployment (CI/CD) pipelines are the engines of delivery. They automate the process of building, testing, and deploying code, allowing teams to ship faster and more reliably. But this automation introduces a critical challenge: How do you securely manage the "keys to the kingdom"—the API tokens, database passwords, encryption keys, and service account credentials that your applications and infrastructure require? ... A single misstep can expose your entire organization to a devastating data breach. Recent breaches in CI/CD platforms have shown how exposed organizations can be when secrets leak or pipelines are compromised. As pipelines scale, the complexity and risk grow with them. ... The cryptographic algorithms that currently secure nearly all digital communications (like RSA and Elliptic Curve Cryptography used in TLS/SSL) are vulnerable to being broken by a sufficiently powerful quantum computer. While such computers do not yet exist at scale, they represent a future threat that has immediate consequences due to "harvest now, decrypt later" attacks. ... Relevance to CI/CD Secrets Management: The primary risk is in the transport of secrets. The secure channel (TLS) established between your CI/CD runner and your Secrets Manager is the point of vulnerability. To future-proof your pipeline, you need to consider moving towards PQC-enabled protocols.
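One baseline practice the article points toward: pipeline code should read secrets from the environment the secrets manager injects into the job, fail fast when one is missing, and mask values in anything it logs. A minimal sketch, where the variable name and masking rule are illustrative:

```python
import os
import sys

def get_secret(name: str) -> str:
    """Read a secret injected into the job environment by the secrets
    manager; never hardcode it in the repository or pipeline YAML."""
    value = os.environ.get(name)
    if not value:
        # Fail fast so a misconfigured pipeline stops before it deploys.
        sys.exit(f"required secret {name} is not set")
    return value

def mask(value: str) -> str:
    """Redact a secret for log output, keeping only a short prefix."""
    return value[:2] + "*" * (len(value) - 2) if len(value) > 4 else "****"

# Stands in for the CI runner's injection step.
os.environ["DB_PASSWORD"] = "s3cr3t-demo"

token = get_secret("DB_PASSWORD")
print(mask(token))  # the raw value never reaches the build log
```

Keeping secrets out of code and logs does not remove the transport risk the article raises: the TLS channel between runner and secrets manager remains the link to harden as PQC-enabled protocols mature.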


Experience Really Matters - But Now You're Fighting AI Hacks

Defenders traditionally rely on understanding the timing and ordering of events. The Anthropic incident shows that AI-driven activity occurs in extremely rapid cycles. Reconnaissance, exploit refinement and privilege escalation can occur through repeated attempts that adjust based on feedback from the environment. This creates a workflow that resembles iterative code generation rather than a series of discrete intrusion stages. Professionals must now account for an adversary that can alter its approach within seconds and can test multiple variations of the same technique without the delays associated with human effort. ... The AI attacker moved across cloud systems, identity structures, application layers and internal services. It interacted fluidly with whatever surface was available. Professionals who have worked primarily within a single domain may now need broader familiarity with adjacent layers of the stack because AI-driven activity does not limit itself to the boundaries of established specializations. ... The workforce shortage in cybersecurity will continue, but the qualifications for advancement are shifting. Organizations will look for professionals who understand both the capabilities and the limitations of AI-driven offense and defense. Those who can read an AI-generated artifact, refine an automated detection workflow, or construct an updated threat model will be positioned for leadership roles.


Is vibe coding the new gateway to technical debt?

The big idea in AI-driven development is that now we can just build applications by describing them in plain English. The funny thing is, describing what an application does is one of the hardest parts of software development; it’s called requirements gathering. ... But now we are riding a vibe. A vibe, in this case, is an unwritten requirement. It is always changing—and with AI, we can keep manifesting these whims at a good clip. But while we are projecting our intentions into code that we don’t see, we are producing hidden effects that add up to masses of technical debt. Eventually, it will all come back to bite us. ... Sure, you can try using AI to fix the things that are breaking, but have you tried it? Have you ever been stuck with an AI assistant confidently running you and your code around in circles? Even with something like Gemini CLI and DevTools integration (where the AI has access to server- and client-side outputs), it can so easily descend into a maddening cycle. In the end, you are mocked by your own unwillingness to roll up your sleeves and do some work. ... If I had to choose one thing that is most compelling about AI coding, it would be the ability to quickly scale from nothing. The moment when I get a whole, functioning something based on not much more than an idea I described? That’s a real thrill. Weirdly, AI also makes me feel less alone at times, like there is another voice in the room.


How to Be a Great Data Steward: Responsibilities and Best Practices

Data is often described as “a critical organizational asset,” but without proper stewardship, it can become a liability rather than an asset. Poor data management leads to inaccurate reporting, compliance violations, and reputational damage. For example, a financial institution that fails to maintain accurate customer records risks incurring regulatory penalties and causing customer dissatisfaction. ... Effective data stewardship is guided by several foundational principles: accountability, transparency, integrity, security, and ethical use. These principles ensure that data remains accurate, secure, and ethically managed across its lifecycle. ... Data stewards can be categorized into several types: business data stewards, technical data stewards, domain or lead data stewards, and operational data stewards. Each plays a unique role in maintaining data quality and compliance in conjunction with other data management professionals, technical teams, and business stakeholders. ... Data stewardship thrives on clarity. Every data steward should have well-defined responsibilities and authority levels, and each data stewardship team should have clear boundaries and expectations identified. This includes specifying who manages which datasets, who ensures compliance, and who handles data quality issues. Clear role definitions prevent duplication of effort and ensure accountability across the organization.


Time for CIOs to ratify an IT constitution

IT governance is simultaneously a massive value multiplier and a must-immediately-take-a-nap-boring topic for executives. For busy moderns, governance is as intellectually palatable as the stale cabbage on the table René Descartes once doubted. How do CIOs get key stakeholders to care passionately and appropriately about how IT decisions are made? ... Everyone agrees that one can’t have a totally centralized, my-way-or-the-highway dictatorship or a totally decentralized you-all-do-whatever-you-want, live-in-a-yurt digital commune. Has the stakeholder base become too numerous, too culturally disparate, and too attitudinally centrifugal to be governed at all? ... Has IT governance sunk to such a state of disrepair that a total rethink is necessary? I asked 30 CIOs and thought leaders what they thought about the current state of IT governance and possible paths forward. The CFO for IT at a state college in the northeast argued that if the CEO, the board of directors, and the CIO were “doing their job, a constitution would not be necessary.” The CIO at a midsize, mid-Florida city argued that writing an effective IT constitution “would be like pushing water up a wall.” ... CIOs need to have a conversation regarding IT rights, privileges, duties, and responsibilities. Are they willing to do so? ... It appears that IT governance is not a hill that CIOs are willing to expend political capital on. 


Flash storage prices are surging – why auto-tiering is now essential

Across industries and use cases, a consistent pattern emerges. The majority of data becomes cold shortly after it is created. It is written once, accessed briefly, then retained for long periods without meaningful activity. Cold data does not require low latency, high IOPS, expensive endurance ratings or premium, power-intensive performance tiers. It only needs to be stored reliably at the lowest reasonable cost. Yet during the years when flash was only marginally more expensive than HDD, many organisations placed cold data on flash systems simply because the price difference felt manageable. With today’s economics, that model can no longer scale. ... The rise in ransomware attacks also helped drive flash adoption. Organisations sought faster backups, quicker restores, and higher snapshot retention. Flash delivered these benefits, but the economics are breaking under current pricing conditions. Today, the cost of flash-based backup appliances is rising, long-term retention on flash is becoming unsustainable, and maintaining deep histories on premium media no longer aligns with budget expectations. ... The current flash pricing crisis is more than a temporary spike. It signals a long-term shift in storage economics driven by accelerating AI demand, constrained supply chains, and global data growth. The all-flash mindset of the past decade is now colliding with financial realities that organisations can no longer ignore. Cold data should not be placed on expensive media. 
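The auto-tiering the article calls essential can be sketched as a simple age-based placement rule: data that has not been touched recently moves to progressively cheaper media. The thresholds and per-GB costs below are illustrative assumptions, not vendor pricing.

```python
import time

# Illustrative per-GB monthly costs; real tier pricing varies by vendor.
TIER_COST = {"flash": 0.10, "hdd": 0.015, "object": 0.004}
COLD_AFTER_DAYS = 30
ARCHIVE_AFTER_DAYS = 180

def choose_tier(last_access_ts, now=None):
    """Place data on the cheapest tier its access pattern allows:
    recently touched data stays on flash, idle data moves down."""
    now = time.time() if now is None else now
    idle_days = (now - last_access_ts) / 86400
    if idle_days < COLD_AFTER_DAYS:
        return "flash"
    if idle_days < ARCHIVE_AFTER_DAYS:
        return "hdd"
    return "object"

def monthly_cost(objects, now=None):
    """Estimated monthly bill with every object on its chosen tier.
    `objects` is a list of (size_gb, last_access_ts) pairs."""
    return sum(size_gb * TIER_COST[choose_tier(ts, now)]
               for size_gb, ts in objects)
```

Even this toy policy shows the economics: under the assumed prices, a dataset that is mostly cold costs a small fraction of what keeping everything on flash would, which is exactly the gap that rising flash prices are widening.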


AI, sustainability and talent gaps reshape industrial growth

A new study by GlobalLogic, a Hitachi Group company, in partnership with HFS Research, reveals a widening divide between industrial enterprises’ ambitions and their real-world readiness for AI, sustainability, and workforce transformation. Despite strong executive push towards modernization, skills shortages, legacy systems, and misaligned priorities continue to stall progress across key industrial segments. ... The findings lay bare the scale of transition ahead: while industries recognize AI and sustainability as foundational for future competitiveness, a lack of talent and weak integration strategies are slowing measurable impact. “Industrial leaders see AI, sustainability, and talent as top priorities, yet struggle to convert these ambitions into tangible results,” said Srini Shankar, President and CEO at GlobalLogic. ... Although operational cost reduction is the top priority today, the study finds that within two years, AI adoption and operational optimization will dominate executive focus. The industrial sector is preparing for a shift from incremental improvements to deep automation and intelligence-led models. ... “Enterprises need to embed sustainability, talent, and technology transitions into both strategy and day-to-day operations,” said Josh Matthews, HFS Research. “Clear outcomes and messaging are essential to show current and future workforces that industrial organizations are shaping — not chasing — the sustainable, tech-driven future.”


When ransomware strikes, who takes the lead -- the CIO or CISO?

"[CIOs and CISOs] will probably have different priorities for when they want to do things; the CIO is going to be more concerned [about the] business side of keeping systems operational, whereas the CISO [wants to know] where is this critical data? Is it being exfiltrated? Having a good incident response plan, planning that stuff out in advance [is necessary so both parties know] what steps they're supposed to take. "The best default to contain the attack is to pull internet connectivity. You don't want to restart a system [or] shut it down, because you can lose forensic evidence. That way, if they are exfiltrating any data, that access stops, so you can begin triaging how they got in and patch that hole up. ... the first three steps come down to confirm, contain and anchor. We want to confirm that blast radius, not hypothesize or theorize what it could be, but what is it really? You'd be surprised at how many teams burn their most valuable hour debating whether it's really ransomware. "Second, contain first, communicate second. I think there's a natural [tendency for] humans to send an all-hands email out, call an emergency meeting and even notify customers. What matters most is to triage and stop the bleeding, isolate those compromised systems and cripple the bad actor's lateral movement. ... "[The best way to contain a ransomware attack will be different for each organization depending on their architectures, controls and technology, but in general, isolate as completely as possible. 


LLM vulnerability patching skills remain limited

Because the models rely on patterns they have learned, a shift in structure can break those patterns. The model may still spot something that looks like the original flaw, but the fix it proposes may no longer land in the right place. That is why a patch that looks reasonable can still fail the exploit test. The weakness remains reachable because the model addressed only part of the issue or chose the wrong line to modify. Another pattern surfaced. When a fix for an artificial variant did appear, it often came from only one model. Others failed on the same case. This shows that each artificial variant pushed the systems in different directions, and only one model at a time managed to guess a working repair. The lack of agreement across models signals that these variants exposed gaps in the patterns the systems depend on. ... OpenAI and Meta models landed behind that mark but contributed steady fixes in several scenarios. The spread shows that gains do not come from one vendor alone. The study also checked overlap. Authentic issues showed substantial agreement between models, while artificial issues showed far less. Only two issues across the entire set were patched by one model and not by any other. This suggests that combining several models adds limited coverage. ... Researchers plan to extend this work in several ways. One direction involves combining output from different LLMs or from repeated runs of the same model, giving the patching process a chance to compare options before settling on one.
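The evaluation loop the study relies on, propose a patch and then run the exploit against it, can be sketched as a simple acceptance gate: a fix is accepted only if it keeps legitimate behaviour working and makes the exploit fail. The function names and the toy negative-index flaw below are hypothetical illustrations, not the study's benchmark.

```python
def validate_patch(patched_fn, functional_tests, exploit):
    """Accept a candidate patch only if it preserves intended behaviour
    AND actually closes the hole: a plausible-looking fix that still
    lets the exploit succeed is rejected."""
    for args, expected in functional_tests:
        if patched_fn(*args) != expected:
            return False              # patch broke legitimate behaviour
    try:
        exploit(patched_fn)
    except Exception:
        return True                   # exploit now fails: hole closed
    return False                      # exploit still works: reject

# Toy vulnerability: a negative index leaks data from the wrong end.
def unsafe_get(items, i):
    return items[i]

def patched_get(items, i):
    if not 0 <= i < len(items):
        raise IndexError(i)
    return items[i]

def exploit(fn):
    # "Attacker" reads outside the intended range via a negative index.
    return fn(["public", "SECRET"], -1)

functional = [((["a", "b"], 0), "a"), ((["a", "b"], 1), "b")]
```

This is why a patch that "looks reasonable" can still be counted as a failure in the study: the gate is the exploit test, not the plausibility of the diff.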