Daily Tech Digest - January 25, 2026


Quote for the day:

"Life is 10% what happens to me and 90% of how I react to it." -- Charles Swindoll



Agentic AI exposes what we’re doing wrong

What needs to change is the level of precision and adaptability in network controls. You need networking that supports fine-grained segmentation, short-lived connectivity, and policies that can be continuously evaluated rather than set once and forgotten. You also need to treat east-west traffic visibility as a core requirement because agents will generate many internal calls that look legitimate unless you understand intent, identity, and context. ... When the user is an autonomous agent, control relies solely on identity: what the agent is, its permitted actions, what it can impersonate, and what it can delegate. Network location and static IP-based trust weaken when actions are initiated by software that can run anywhere, scale instantly, and change execution paths. This is where many enterprises will stumble.  ... The old finops playbook of tagging, showback, and monthly optimization is not enough on its own. You need near-real-time cost visibility and automated guardrails that stop waste as it happens, because “later” can mean “after the budget is gone.” Put differently, the unit economics of agentic systems must be designed, measured, and controlled like any other production system, ideally more aggressively because the feedback loop is faster. ... The industry’s favorite myth is that architecture slows innovation. In reality, architecture prevents innovation from turning into entropy. Agentic AI accelerates entropy by generating more actions, integrations, permissions, data movement, and operational variability than human-driven systems typically do.
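To make the identity-plus-guardrail idea concrete, here is a minimal Python sketch (all names hypothetical, not any vendor's API): an agent's permissions are scoped to an identity and re-evaluated on every call, and a budget guardrail stops spend as it happens rather than at month-end review.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """What the agent is and what it is permitted to do."""
    agent_id: str
    allowed_actions: set[str]
    may_delegate_to: set[str] = field(default_factory=set)

@dataclass
class BudgetGuardrail:
    """Near-real-time spend control: stop waste as it happens."""
    limit_usd: float
    spent_usd: float = 0.0

    def charge(self, cost_usd: float) -> bool:
        if self.spent_usd + cost_usd > self.limit_usd:
            return False                 # deny before the budget is gone
        self.spent_usd += cost_usd
        return True

def authorize(agent: AgentIdentity, action: str,
              budget: BudgetGuardrail, est_cost_usd: float) -> bool:
    """Evaluated on every call, not set once and forgotten."""
    return action in agent.allowed_actions and budget.charge(est_cost_usd)
```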


‘Cute’ and ‘Criminal’: AI Perception, Human Bias, and Emotional Intelligence

Can you build artificial intelligence (AI) without emotional intelligence (EI)? Should you? What do we mean when we talk about “humans in the loop”? Are we asking the right questions about how humans design and govern “thinking” machines? One of the immediate problems we face with generative AI is that people increasingly rely on these systems for big decisions. I won’t call all of these ethical decisions, but in some cases they’re consequential decisions. And many users forget that these systems are trained on data that carry all kinds of inherited biases. When we talk about AI bias, it isn’t always abstract. It shows up in very literal assumptions the models make when they are asked to generate images or ideas. ... That question is really the beginning of understanding how these systems work. They are pulling from enormous bodies of unlabeled or inconsistently labeled data and then inferring patterns. We often forget that the inferences are statistical, not conceptual. To the model, “doctor” aligns with “male” because that’s the pattern the dataset reinforced. ... When I didn’t tell the system “diverse audience,” all the children it generated fell into the same narrow “cute child” category. It’s not that the AI systems are racist or sexist. They simply don’t have self-awareness. They’re reflecting the dominant patterns in the datasets they learned from. But reflection without critique becomes reinforcement, and reinforcement becomes norm.


AI is quietly poisoning itself and pushing models toward collapse - but there's a cure

According to tech analyst Gartner, AI data is rapidly becoming a classic Garbage In/Garbage Out (GIGO) problem for users. That's because organizations' AI systems and large language models (LLMs) are flooded with unverified, AI‑generated content that cannot be trusted. ... You know this better as AI slop. While annoying to you and me, it's deadly to AI because it poisons the LLMs with fake data. The result is what's called in AI circles "Model Collapse." AI company Aquant defined this trend: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality." ... The analyst argued that enterprises can no longer assume data is human‑generated or trustworthy by default, and must instead authenticate, verify, and track data lineage to protect business and financial outcomes. Ever try to authenticate and verify data from AI? It's not easy. It can be done, but AI literacy isn't a common skill. ... This situation means that flawed inputs can cascade through automated workflows and decision systems, producing worse results. Yes, that's right, if you think AI result bias, hallucinations, and simple factual errors are bad today, wait until tomorrow. ... Gartner suggested many companies will need stronger mechanisms to authenticate data sources, verify quality, tag AI‑generated content, and continuously manage metadata so they know what their systems are actually consuming.
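The recursive-training dynamic is easy to see in a toy simulation, a statistical sketch rather than anyone's production setup: each "generation" fits a distribution to samples drawn from the previous generation's fit, and finite-sample error compounds until the fitted distribution drifts away from the original.

```python
import random, statistics

# Toy "model collapse" loop: each generation fits a Gaussian to
# samples produced by the previous generation's fit. Sampling error
# compounds, so the fitted mean drifts and the spread tends to
# shrink over generations -- outputs drift away from the original
# distribution, the effect described above.
random.seed(0)
mu, sigma = 0.0, 1.0                       # the original "human" data
for gen in range(20):
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.fmean(samples)         # next model trains only on
    sigma = statistics.stdev(samples)      # its predecessor's outputs
    print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
```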


4 Realities of AI Governance

AI has not replaced traditional security work; it has layered new obligations on top of it. We still have to protect our data and maintain sovereign assurance through independent audit reports, whether that’s SOC, PCI, ISO, or other standards. Still, today we must also guide our own teams and vendors on the use of powerful AI tools. That’s where accountability begins: with the human or process that touches the data. When the rules are clear, people move faster and safer; when directives are fuzzy, everything downstream is too—so we keep policy short, plain, and visible. ... Unless the contract says otherwise, assume prompts, outputs, or telemetry may be retained for “service improvement.” Fine-print phrases like “continuous improvement” often mean that inputs, outputs, or telemetry can be retained or used to tune systems unless you opt out. To keep reviews consistent, leverage resources like the NIST AI Risk Management Framework. It provides practical checklists for transparency, accountability, and monitoring. Remember the AI supply chain: your vendor depends on model providers, plugins, and open-source components; your risk includes their dependencies, so cover these in your TPRM process. ... Boundaries are the difference between safe speed and reckless speed. Start by defining a short set of data types that must never be pasted into external tools: regulated PII, confidential customer data, unreleased financials, source code, or merger and acquisition materials. Map the rest into simple classes (public, internal, sensitive) and tie each class to approved tools and use cases.
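That last step can be as simple as a policy table checked in code, so the decision is mechanical rather than judgment-by-individual. A minimal sketch, with hypothetical class and tool names:

```python
# Hypothetical policy table: data classes mapped to approved tools.
NEVER_PASTE = {"regulated_pii", "customer_confidential",
               "unreleased_financials", "source_code", "ma_materials"}

APPROVED_TOOLS = {
    "public":    {"public_llm", "internal_llm"},
    "internal":  {"internal_llm"},
    "sensitive": set(),            # sensitive data: no external AI tools
}

def tool_allowed(data_class: str, tool: str) -> bool:
    """Mechanical check: is this tool approved for this data class?"""
    if data_class in NEVER_PASTE:
        return False
    return tool in APPROVED_TOOLS.get(data_class, set())

print(tool_allowed("internal", "public_llm"))   # False
print(tool_allowed("public", "public_llm"))     # True
```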


Your Cache is Hiding a Bad Architecture

Most engineers treat caching as a performance optimisation. They see a complex SQL query involving four joins taking 2 seconds to execute. Instead of analysing the execution plan or restructuring the schema, they wrap the call in a redis.get() block. ... By relying on the cache to mask inefficient database interactions, you haven’t fixed the bottleneck; you have simply hidden it behind a volatile memory store. You have turned a “nice-to-have” performance layer into a Critical Infrastructure Dependency. The moment that the cache key expires, or the Redis node evicts the key to free up memory, the application is forced to confront the reality of that 2-second query. And usually, it doesn’t confront it alone. It confronts it with 500 concurrent users who were all waiting for that key. ... Caching is not a strategy; it is a tactic. It is a powerful optimisation for systems that are already healthy, but it is a disastrous life-support system for those that are not. If you take nothing else from this, remember the litmus test: System stability should not depend on volatile memory. Go back to your codebase. Turn off Redis in your staging environment. Run your load tests. If your response times go up, you have a performance problem. If your error rates go up, you have an architectural problem.
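For reference, the anti-pattern being described looks roughly like this (hypothetical query and key names; redis-py style calls): the slow query hides behind the cache until the key expires, and nothing here prevents every concurrent caller from stampeding the database on a miss.

```python
import json
import time

import redis  # redis-py, assumed available

r = redis.Redis()

def run_expensive_four_join_query(report_id: str) -> dict:
    # Stand-in for the 2-second, four-join SQL query.
    time.sleep(2)
    return {"report_id": report_id, "rows": []}

def get_report(report_id: str) -> dict:
    key = f"report:{report_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    # Cache miss: every concurrent caller that lands here runs the slow
    # query at once -- the stampede described above. The query itself
    # was never made faster; it was only hidden behind volatile memory.
    result = run_expensive_four_join_query(report_id)
    r.set(key, json.dumps(result), ex=300)   # 5-minute TTL
    return result
```

A per-key lock or request coalescing would blunt the stampede, but per the article's point, neither fixes the underlying 2-second query.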


UK bill accelerates shift to offensive cyber security

The Cyber Security and Resilience (Network and Information Systems) Bill entered Parliament in late 2025 and is expected to move through the legislative process during 2026. The government has positioned the bill as a major update to the UK's cyber framework for essential services and digital service providers. ... Poyser argued that many companies still lean heavily on defensive tools without validating how those controls perform under attack conditions. "Cybercriminals and state-backed threat actors are acting faster, more aggressively, and with far greater innovation, especially through the use of artificial intelligence, while too many businesses continue to rely on traditional defensive methods. This widening gap must be closed urgently," said Poyser. He also linked the coming UK legislative changes to a push for more proactive security validation. ... The company said this attacker-style approach changes how risk gets measured and prioritised. It said corporate security teams struggle to maintain an accurate picture of exposure through passive controls and periodic checks. "It is increasingly unrealistic for corporate security teams to maintain an accurate understanding of their true risk exposure using only traditional, passive methods," said Keith Poyser. "Threat actors do not wait for annual audits or one-off checks. Unless organisations test their systems in a way that reflects how real attackers operate, they will continue to be caught off-guard," said Poyser.


The new CDIO stack: Tech, talent and storytelling

The first layer is the one everyone ‘expects’. We built strong platforms: cloud infrastructure that can flex with the business, data platforms that bring together information from plants, systems and markets, analytics and AI capabilities that sit on top of that data, and a solid cyber posture to protect all of it. ... The second layer was not about machines at all. It was about people, about changing the talent mix so that digital is no longer “their” thing — it becomes “our” thing. We realised that if we kept thinking in terms of “IT people” and “business people”, we would always be negotiating across a wall. ... The third layer is the one that surprised even me. We noticed a pattern. Even when we had good platforms and strong talent, some initiatives would start with a bang and fizzle out. The technology worked. The pilot results were good. But momentum died. When we dug deeper, we realised the issue was not in the code. It was in the story. The operators on the shop floor, the sales teams, the plant heads and the board were all hearing slightly different stories about “digital”. ... Yes, I am responsible for technology. If the platforms are not robust, I have failed at the most basic level. Yes, I am responsible for talent. If we don’t have the right mix of skills — product, data, architecture, change — we cannot deliver. But I am also responsible for the narrative. ... For me, the real maturity of a digital organization shows when these three layers are aligned.


What Software Developers Need to Know About Secure Coding and AI Red Flags

The uptick in adoption of AI tools within the developer community aligns with growing expectations. Developers are now expected to work with greater efficiency to meet deadlines more quickly, all while delivering high-quality code. Developers might find AI assistants to be beneficial as they are immune to human-based tendencies like fatigue and biases, which can boost efficiency. But sacrificing safety for speed is unacceptable, as AI tools bring inherent risks of compromise. ... AI tools are not safe for enterprise use unless the code output is reviewed and implemented by a security-proficient human. 30% of security experts admit that they don't trust the accuracy of code generated by AI itself. That's why security leaders must prioritize the education and upskilling of developer teams, to ensure they have the necessary skills and capabilities to mitigate AI-assisted code vulnerabilities as early as possible. This will lead to the cultivation of a "security first" team culture and safer AI use. ... In addition, agentic AI introduces new or "agentic variations" of existing threats, like memory poisoning, remote code execution (RCE) and code attacks. It can harm code via logic errors, which cause the product to "run" correctly but act incorrectly; style inconsistencies, which result in patterns that do not align with the current, required structure; and lenient permissions, which act correctly but lack the authorization context to determine if an end user is allowed to perform a particular action.
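The "lenient permissions" failure is worth seeing in code. A hedged sketch with hypothetical names: the first handler runs correctly but never asks whether this user may act on this record; the reviewed version adds the authorization context the model lacked.

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: str
    owner_id: str
    status: str = "open"

ORDERS = {"o1": Order("o1", owner_id="alice")}

# AI-suggested handler: runs "correctly", but is leniently
# permissioned -- any authenticated caller can cancel any order.
def cancel_order(user_id: str, order_id: str) -> Order:
    order = ORDERS[order_id]
    order.status = "cancelled"
    return order

# Reviewed handler asks: may THIS user act on THIS order?
def cancel_order_safe(user_id: str, order_id: str) -> Order:
    order = ORDERS[order_id]
    if order.owner_id != user_id:
        raise PermissionError("caller is not the order owner")
    order.status = "cancelled"
    return order
```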


Building a Self-Healing Data Pipeline That Fixes Its Own Python Errors

The core concept of this is relatively simple. Most data pipelines are fragile because they assume the world is perfect, and when the input data changes even slightly, they fail. Instead of accepting that crash, I designed my script to catch the exception, capture the “crime scene evidence”, which is basically the traceback and the first few lines of the file, and then pass it down to an LLM. ... The primary challenge with using Large Language Models for code generation is their tendency to hallucinate. From my experience, if you ask for a simple parameter, you often receive a paragraph of conversational text in return. To stop that, I leveraged structured outputs via Pydantic and OpenAI’s API. This forces the model to complete a strict form, acting as a filter between the messy AI reasoning and our clean Python code. ... Getting the prompt right took some trial and error. And that’s because initially, I only provided the error message, which forced the model to guess blindly at the problem. I quickly realized that to correctly identify issues like delimiter mismatches, the model needed to actually “see” a sample of the raw data. Now here is the big catch. You cannot actually read the whole file. If you try to pass a 2GB CSV into the prompt, you’ll blow up your context window and apparently your wallet. ... First, remember that every time your pipeline breaks, you are making an API call.
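Putting those pieces together, a minimal sketch of the loop. The helper ask_llm_for_fix is hypothetical and stands in for a structured-output LLM call (e.g. via OpenAI's API) constrained to a Pydantic schema; only the traceback and a few sample lines ever reach the prompt.

```python
import csv
import traceback
from itertools import islice

from pydantic import BaseModel

class CsvFix(BaseModel):
    """The strict form the model must fill in: structured output acts
    as a filter between messy AI reasoning and clean Python code."""
    delimiter: str
    encoding: str
    skip_rows: int
    rationale: str

def head(path: str, n: int = 5) -> str:
    # Sample only the first lines -- passing a 2 GB file would blow
    # the context window (and the budget).
    with open(path, errors="replace") as f:
        return "".join(islice(f, n))

def ask_llm_for_fix(evidence: dict) -> CsvFix:
    # Hypothetical: stands in for a structured-output LLM call
    # constrained to the CsvFix schema.
    raise NotImplementedError

def load_with_self_healing(path: str) -> list[dict]:
    try:
        with open(path, newline="") as f:
            return list(csv.DictReader(f))
    except Exception:
        evidence = {                      # the "crime scene evidence"
            "traceback": traceback.format_exc(),
            "sample": head(path),
        }
        fix = ask_llm_for_fix(evidence)   # every retry is an API call
        with open(path, newline="", encoding=fix.encoding) as f:
            for _ in range(fix.skip_rows):
                next(f)
            return list(csv.DictReader(f, delimiter=fix.delimiter))
```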


‘Complexity is where cyber risk tends to grow’

Last month, the Information Systems Audit and Control Association (ISACA) announced that it had been appointed to lead the global credentialing programme for the US Department of War’s (DoW) Cybersecurity Maturity Model Certification (CMMC). The CMMC, according to ISACA’s chief global strategy officer Chris Dimitriadis, is “designed to protect sensitive information across the defence industrial base and its supply chain”. ... “Transatlantic operations almost always increase complexity, and complexity is where cyber risk tends to grow,” he says. “The first major issue is supply chain exposure. Attackers rarely go after the strongest link, they look for the most vulnerable one. “In global ecosystems, that can be a smaller supplier, a service provider or a subcontractor.” The second issue, he says, is the “nature” of the data and the systems that are involved. “When defence-related information, controlled technical data, or sensitive operational systems are in play, the impact of compromise is simply much higher. That requires stronger access controls, better identity governance, and more disciplined incident response.” The third and final issue that Dimitriadis highlights is “multi-jurisdiction reality”. He explains that companies need to navigate different requirements, obligations and reporting expectations across regions, adding that if governance and security operations aren’t aligned, “you create gaps, and those gaps are exactly what threat actors exploit”.

Daily Tech Digest - January 24, 2026


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone



When a new chief digital officer arrives, what does that mean for the CIO?

One reason the CDO can unsettle CIOs is that the title has never had a consistent meaning. Isaac Sacolick, president and founder of StarCIO, said organizations typically create the role for one of two reasons. "Some organizations split off a CDO role because the CIO is overly focused on infrastructure and operations, and the business's customer and employee experiences, AI and data initiatives, and other innovations aren't meeting expectations," Sacolick said. "In other organizations, the CDO is a C-level title for the head of product management and UX/design functions, and reports to the CIO." Those two models lead to very different outcomes. In the first, the CDO is positioned as a corrective measure; in the second, the role is an extension of the CIO's broader operating model. Without clarity on which model is being pursued, confusion tends to follow. ... Across the experts, there was strong agreement on one point: The CIO remains central to the enterprise digital operating model, even as new roles emerge. "CIOs need to own the digital operating model and evolve it for the AI era," Sacolick said, noting that this increasingly involves "product-centric, agile, multi-disciplinary team organizational models." Ratcliffe echoed that sentiment, emphasizing accountability and trust. "The CIO should be the single point of ownership with the deep expertise feeding into it so there is consistency, business acumen and trust built within the technology function," he said.


Responsible AI moves from principle to practice, but data and regulatory gaps persist: Nasscom

The data shows a strong correlation between AI maturity and responsible practices. Nearly 60% of companies that say they are confident about scaling AI responsibly already have mature RAI frameworks in place. Large enterprises are leading this transition, with 46% reporting mature practices. Startups and SMEs trail behind at 16% and 20% respectively, but Nasscom sees this as ecosystem-wide momentum rather than a gap, given the growing willingness among smaller firms to learn, comply, and invest. ... Workforce enablement has become a central pillar of this transition. Nearly nine out of ten organisations surveyed are investing in sensitisation and training around Responsible AI. Companies report the highest confidence in meeting data protection obligations—reflecting relatively mature privacy frameworks—but monitoring-related compliance continues to be a concern. Accountability for AI governance still sits largely at the top. ... As AI systems become more autonomous, Responsible AI is increasingly seen as the deciding factor for whether organisations can scale with confidence. Nearly half of mature organisations believe their current frameworks are prepared to handle emerging technologies such as agentic AI. At the same time, industry experts caution that most existing frameworks will need substantial updates to address new categories of risk introduced by more autonomous systems. The report concludes that sustained investment in skills, governance mechanisms, high-quality data, and continuous monitoring will be essential.


AI-induced cultural stagnation is no longer speculation − it’s already happening

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt. ... For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation. Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions. ... The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions. This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration. 



Europe votes to tackle deep dependence on US tech in sovereignty drive

The depth of European reliance on foreign technology providers varies across sectors but remains substantial throughout the stack. In cloud infrastructure alone, Amazon, Microsoft, and Google command 70% of the European market, while local providers including SAP, Deutsche Telekom, and OVHcloud collectively hold just 15%. ... “Recent geopolitical tensions show that the issue of Europe’s digital sovereignty is of the utmost importance,” Michał Kobosko, the Renew Europe MEP who negotiated the report text, said in a statement. “If we do not act now to reduce Europe’s technological dependence on foreign actors, we run the risk of becoming a digital colony.” ... “Due to geopolitical tensions, the driver has shifted to reducing foreign digital dependency across the entire technology stack. European CIOs are now tasked with redesigning their approach to semiconductors, cloud, software, and AI, upending two decades of established strategy. It’s not going to be easy, it’s not going to be cheap, and it’s going to span multiple generations of CIOs.” When asked whether European enterprises will see viable sovereign alternatives across core technology areas, Henein said: “The answer is yes, but the time horizon is potentially more than a decade. Europe has been supporting US technology providers through licensing agreements for the better part of the last two decades.” ... A key question is whether the report’s proposed preferential procurement policies can actually change market realities, given the ...


One-time SMS links that never expire can expose personal data for years

One of the most significant findings involved how long these links remained active. All 701 confirmed URLs still worked when the researchers accessed them, often long after the original message was sent. More than half of the exposed links were between one and two years old. About 46% were older than two years. Some dated back to 2019. Public SMS gateways rarely retain messages for that long, which suggests that the actual lifetime of many links may extend even further. The risk starts as soon as a private link is exposed, but it grows with time. The longer a link stays active, the more chances there are for abuse through logs, forwarding, compromised devices, message interception, phone number recycling, or third-party access. ... In many services, the link carried a token passed to backend APIs. Some pages rendered data server side, while others fetched information after load. Only five services placed personal data directly inside the URL itself, though access results were similar once the link was opened. This design assumes the link remains private. According to Danish, product pressure plays a central role in keeping this pattern widespread. ... In one case, an order tracking page displayed an address, while API responses included phone numbers, geolocation data, and driver details. In another, a loan service returned bank routing numbers and Social Security numbers that were only visible in network logs. This data became reachable as soon as the link was opened, even before the page finished loading. 
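One common remedy is to make the link token self-expiring, so a leaked SMS link simply goes dead after a bounded window. A minimal sketch using an HMAC-signed token with a TTL (the key handling and field layout are illustrative assumptions, not any studied service's design):

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"   # hypothetical; never appears in the URL

def make_link_token(record_id: str, ttl_s: int = 7 * 86400) -> str:
    """Mint a link token that self-expires after ttl_s seconds."""
    payload = f"{record_id}:{int(time.time()) + ttl_s}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:32]
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_link_token(token: str) -> str | None:
    """Return the record id for a valid, unexpired token; else None."""
    try:
        body, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(body.encode())
    except Exception:                     # malformed token
        return None
    expect = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:32]
    if not hmac.compare_digest(expect, sig):
        return None
    record_id, expires = payload.decode().rsplit(":", 1)
    return record_id if time.time() < int(expires) else None

token = make_link_token("order-123")
print(verify_link_token(token))   # 'order-123' until the TTL lapses
```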


How enterprise architecture and start-up thinking drive strategic success

Strategy is now judged less by the quality of vision decks and more by how quickly enterprises can test, learn and scale what works and is valuable. To keep pace, enterprises increasingly combine the discipline of enterprise architecture with the speed and adaptability associated with a start-up mindset. ... Modern enterprise architecture is less about cataloging systems and more about shaping how an enterprise senses opportunities, mobilizes resources and transforms at pace. In a high-performing enterprise, it acts as a bridge between strategy and execution in three concrete ways: alignment and clarity; transparency and risk management; and decision support and adaptive governance. ... Start-ups and scale-ups operate under uncertainty, but they thrive by learning in short cycles, minimizing waste and scaling only what demonstrates traction. When large enterprises infuse enterprise architecture with similar principles, the function becomes a multiplier for speed rather than a constraint. ... Cross-functional innovation and flexible governance complete the picture. In many enterprises, architects now embed directly in domain or platform teams, joining strategic backlog refinement, incident reviews and design sessions as peers. In a large healthcare network, for instance, enterprise architecture practitioners joined clinical, operations and analytics teams to co-design a data platform that could support both operational reporting and AI-driven decision support.


From Conflict To Collaboration: How Tension Can Strengthen Your Team

Letting tensions simmer is one of the most common leadership mistakes. The longer a disagreement sits in the corner, the more toxic it becomes. ... Teams function better when they normalize honest conversation before things go sideways. A simple practice—opening meetings with "wins and worries"—creates a habit of surfacing concerns early. Netflix cofounder Reed Hastings echoes this principle: "Only say about someone what you will say to their face." It’s a powerful expectation. Candor reduces gossip, eliminates guesswork and gives leaders clarity long before emotions get out of hand. ... When conflict arises, people don’t immediately need solutions. What they need is to feel heard. It’s vital to fully understand their concerns so there is no ambiguity. Repeat your understanding of their position before giving your input. It’s remarkable how much progress can be made when people feel genuinely heard. ... Compromise has an unfair reputation in business culture, as if giving an inch signals defeat. In practice, it’s a recognition that multiple perspectives may hold merit. Good leaders invite both sides to walk through their rival viewpoints together. When people better understand the context behind each position, they’re far more willing to find common ground that moves the team forward. ... Many conflicts resurface not because the solution was wrong, but because leaders assumed the first conversation fixed everything. 


Six tips to gain control over your cloud spending

The first step any organization should take before shifting a workload to the cloud is performing proper due diligence on ROI. It isn’t always the case that moving workloads to the cloud will translate into financial savings. Many variables should be considered when calculating ROI, including current infrastructure, licensing and hiring. ... A formal cloud governance framework establishes rules, policies, and processes that formalize how cloud resources will be accessed, used, and retired. Accurately matching cloud resources to workload demands improves resource utilization and minimizes waste. ... FinOps, short for financial operations, is a management discipline that involves collaboration between finance, operations and development teams to manage cloud spending. By implementing tools and processes for cost tracking, budgeting, and forecasting, businesses can gain insights into their cloud expenses and identify areas for optimization. ... Providers offer a variety of discounts that can significantly reduce cloud costs. For example, reserved instance pricing models offer discounts to customers who reserve cloud resources over a fixed period. Some providers offer tiered pricing models in which the cost per unit decreases as you consume more resources. ... You may find that moving some workloads to the cloud offers no significant performance advantages. Repatriating some applications, data and workloads back to on-premises infrastructure can often improve performance while reducing cloud spending.
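The reserved-instance trade-off reduces to simple break-even arithmetic. A back-of-the-envelope sketch with hypothetical prices: the commitment only pays off above a certain utilization, which is exactly the due-diligence point above.

```python
HOURS_PER_YEAR = 8760

# Hypothetical list prices for one instance size.
ON_DEMAND_PER_HR = 0.40
RESERVED_PER_HR = 0.25    # effective hourly rate, committed for the full year

def yearly_costs(hours_used: float) -> tuple[float, float]:
    on_demand = hours_used * ON_DEMAND_PER_HR
    reserved = HOURS_PER_YEAR * RESERVED_PER_HR   # paid whether used or not
    return on_demand, reserved

# Reserved pricing wins only when utilization clears the break-even point.
break_even = RESERVED_PER_HR / ON_DEMAND_PER_HR
print(f"break-even utilization: {break_even:.0%}")   # ~62% at these prices
```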


These 4 big technology bets will reshape the global economy in 2026

Disruptive technologies will have a material impact on real GDP growth. ARK suggested that capital investment alone, catalyzed by disruptive innovation platforms, could add 1.9% to annualized real GDP growth this decade. Each innovation platform (AI, public blockchains, robotics, energy storage, and multiomics) should provide a structural boost to global growth. ... According to ARK research, hyperscalers are expected to spend more than $500 billion on capital expenditures (Capex) in 2026, nearly four times the $135 billion spent in 2021, the year before the launch of ChatGPT in 2022. ... ARK forecasted that AI agents could facilitate more than $8 trillion in online consumption by 2030. ARK noted that as consumers delegate more decisions to intelligent systems, AI agents should capture an increasing share of digital transactions, from 2% of online spend in 2025 to around 25% by 2030 ... AI agents are becoming more productive. ARK found that advances in reasoning capability, tool use, and extended context are driving an exponential increase in the capability of AI agents. The duration of tasks these agents can complete reliably increased 5 times, from six minutes to 31 minutes, in 2025. ... ARK suggested robots are a growing part of the labor force and took a historical look at productivity and labor hours. As productivity increased, each hour of labor became more valuable, enabling increased output with fewer hours, as living standards continued to rise.


Half of agentic AI projects are still stuck at the pilot stage

The main barriers to full implementation, respondents said, are concerns with security, privacy, or compliance, cited by 52%, followed by technical challenges to managing agents at scale, at 51%. “Organizations are not slowing adoption because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave reliably and as intended in real-world conditions,” said Alois Reitbauer, chief technology strategist at Dynatrace. Seven-in-ten agentic AI–powered decisions are still verified by humans, and 87% of organizations are actively building or deploying agents that require human supervision. ... A recurring pain point for enterprises tinkering with agentic AI tools lies in observability, according to Dynatrace. Observability of these autonomous systems is needed across every stage of the life cycle, from development and implementation through to operationalization. Observability is most used in implementation, at 69%, followed by operationalization at 57% and development at 54%. “Observability is a vital component of a successful agentic AI strategy. As organizations push toward greater autonomy, they need real-time visibility into how AI agents behave, interact, and make decisions,” Reitbauer said. “Observability not only helps teams understand performance and outcomes, but it provides the transparency and confidence required to scale agentic AI responsibly and with appropriate oversight.”

Daily Tech Digest - January 23, 2026


Quote for the day:

"Strong convictions precede great actions." -- James Freeman Clarke



90% of companies are woefully unprepared for quantum security threats

Companies shouldn't wait, Bain warned, pointing to rapid progress made by IBM, Google, and other industry leaders on this front. "At a certain threshold, quantum computing will be able to easily and quickly break asymmetric cryptography protocols such as Rivest-Shamir-Adleman (RSA), Diffie-Hellman (DH), and elliptic-curve cryptography (ECC) and reduce the time required, weakening symmetric cryptography such as advanced encryption standard (AES) and hashing functions," ... The highest impact will be on secure keys and tokens, digital certificates, authentication protocols, data encrypted at rest, and even network security and identity access management (IAM) tools. Essentially, anything currently relying on encryption. Beyond that, quantum computing could supercharge malware and make it easier to identify and weaponize "zero day" flaws, Bain warned. Another risk highlighted by security experts is "steal now, crack later" techniques, whereby threat actors harvest data now to decrypt later.  ... Companies need a board-led – and funded – roadmap to consider post-quantum risks across their business decision making, ensuring quantum resilience across their own suppliers, existing technology, and even their products. But so far, the Bain survey revealed only 12% of companies are considering quantum readiness as a key factor in procurement and risk assessments.
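The asymmetric/symmetric split follows from the known quantum algorithms: Shor's algorithm breaks RSA, DH, and ECC outright at sufficient qubit counts, while Grover's search "only" gives a quadratic speedup, roughly halving the effective bit strength of symmetric keys. A one-liner makes the consequence plain:

```python
# Grover's quadratic speedup roughly halves a symmetric key's
# effective bit strength; Shor breaks RSA/DH/ECC outright.
def effective_symmetric_bits(key_bits: int) -> int:
    return key_bits // 2

print(effective_symmetric_bits(128))  # 64 -> why migrations favor AES-256
```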


The New Rules of Work: What a global HR leader reveals about modern talent

The impact of AI on the workforce is a subject Sonia has thought deeply about, especially as it relates to entry-level talent. “There’s always been a question about repetitive engineering tasks—whether these should be done by engineers or by diploma holders. Now, with AI in the picture, many of these tasks will be automated,” she says. Rather than seeing this as a threat, Kutty believes it frees up human talent to focus on innovation and problem-solving. “Our true value at Quest Global comes from leveraging innovation to solve the toughest engineering problems. AI will allow us to do more of this meaningful work.” ... While the company offers AI-based courses and certifications, Kutty emphasises the importance of fostering a mindset of adaptability and systems thinking. “We call it nurturing ‘polymath engineers’—professionals who can think broadly, adapt to new challenges, and learn continuously,” she says. ... As the engineering and R&D sector prepares for rapid growth, Kutty identifies leadership development as her biggest challenge—and her greatest responsibility. “We need strong leaders who understand this industry and are ready to step up when the time comes. Planning for leadership succession keeps me up at night. It’s critical for our continued success.” On the other hand, client expectations have evolved alongside technological advances. “In the past, clients would tell us exactly what they wanted. Now, they expect us to tell them what’s possible with AI and technology. They see us as partners in innovation, not just service providers,” Kutty observes.


Work-from-office mandate? Expect top talent turnover, culture rot

There is value in cross-functional teams working together in person, says Lawrence Wolfe, CTO at marketing firm Converge. “When teams meet for architecture sessions, design sprints, or incident response, the pace of progress, as well as the level of clarity, may increase simply because being in-person caters to the way most people in the business interact,” he says. However, there are potential downsides for IT leaders, with strict work-from-office policies making it more difficult to attract and retain top IT talent. ... Despite possible resistance, it makes sense for some IT jobs to be tied to an office, says Lena McDearmid, founder and CEO of culture and leadership advisory firm Wryver. Some IT roles, including device provisioning, network operations, and conference room IT support, are better done in person, she notes. She sees some other benefits in specific situations. “In-person work is genuinely valuable for onboarding and mentoring early-career technologists, especially when learning how the organization actually operates, not just how the codebase works,” McDearmid says. “It’s also powerful when teams need to think together in high-bandwidth ways: whiteboards, war rooms, architecture reviews, incident response, or when solving messy, cross-functional problems.” ... IT leaders enforcing in-person work mandates can also focus on making the workplace a real place to collaborate, she adds. CIOs can align office space, meeting schedules, and in-office days so they reinforce the goals of collaboration and knowledge sharing, Wettemann adds.


Rethinking IT leadership to unlock the agility of ‘teamship’

Rather than waiting for the leader to set the pace, the best teams coach one another, challenge one another, co-elevate one another, and move faster, because they and their leaders have built cultures where candor is a shared responsibility. For CIOs navigating the messy middle of AI, modernization, and talent transformation, this shift from leadership to what Ferrazzi calls “teamship” may be the most important upgrade of all. ... The No. 1 shift is to move from leadership to teamship. That means stop thinking of leadership as a hub and spoke. Don’t think about what you need to give feedback on, how you need to hold people accountable, how you need to do this or that. Instead, think about how you get your team to step up and meet each other, to give each other feedback, to hold each other’s energy up. Get out of the center and expect your team to step up. ... To be effective, stress testing needs to be positioned as a service to the person who’s giving the project update. We’re not trying to make them look bad or catch them in what they’re doing wrong. The feedback should be offered and received as data, with no presumption that they have to act on it. ... That fear is rooted in a misunderstanding of how high-performing teams actually work. In traditional leadership models, accountability flows upward: People worry about what the boss will think. In teamship, accountability flows sideways: People worry about letting their peers down.


The Upside Down is Real: What Stranger Things Teaches Us About Modern Cybersecurity

The Upside Down’s danger lies in the unseen portals – the gates and rifts – that allow its monstrous inhabitants, like the Demogorgon and the Mind Flayer, to cross over and wreak havoc in the seemingly safe, familiar world of Hawkins. Today, nearly every business’s hidden reality is its extended attack surface. It’s the sprawling, complex, and often unmanaged network of IT, OT, IoT, medical, cloud systems and beyond that modern organizations rely on. ... For the CISO and security team, this translates directly to the need for full, continuous visibility across every single connected device and system to protect the entire attack surface and manage their organization’s cyber risk exposure in real time. Like the Dungeons and Dragons analogies the kids use to understand the creatures and their tactics, security teams rely on context and intelligence – risk scoring, vulnerability prioritization, and threat analysis – to understand how an asset is connected, why it is vulnerable, and what the most effective countermeasure is. ... First and foremost, cybersecurity requires teamwork, particularly through the fusion of IT, OT, security and business leadership so that they work from a unified view of any risks at hand. It also demands persistence from the dedicated security professionals protecting our digital infrastructure. Most of all, cybersecurity needs to be a proactive and preemptive effort where risk exposures are continuously monitored and threats can be stopped before they ever fully manifest.


Shadow AI: The emerging enterprise risk that can no longer be ignored

With regulatory frameworks tightening and emerging national standards, unsanctioned AI activity can quickly become a governance liability. Instead of reactive controls, organisations are now moving toward multi-layered visibility frameworks: monitoring external AI calls, classifying enterprise assets by sensitivity and tracking unmanaged AI usage. Forward-looking teams are even translating these metrics into financial exposure scores, linking AI misuse to operational, reputational and regulatory impact. Assigning monetary value to Shadow AI risk has proven effective for prioritising mitigation at leadership levels. ... A structured foundation is essential, comprising trusted assessment frameworks, tested architectural blueprints and scalable AI operating models. Some organisations are pairing these with comprehensive training programs to build AI-literate leaders and teams, ensuring governance evolves alongside capability. This reflects a broader shift: responsible AI has now become the foundation of durable competitive advantage. ... Regulators, global partners and enterprise clients are seeking evidence of formal AI governance models, not just intent. For example, the Digital India Act, sectoral data localisation rules and global regulatory momentum are prompting enterprises to strengthen AI auditability, model documentation and workforce training. For many organisations, AI governance has moved from an operational task to a board-level agenda.
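"Monitoring external AI calls" can start very simply. A sketch (the log format and domain lists are hypothetical, not any particular proxy's): scan egress proxy logs for AI endpoints that no sanctioned gateway owns.

```python
# Hypothetical host lists: known AI API endpoints vs. the subset
# sanctioned through the enterprise gateway.
AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
SANCTIONED = {"api.openai.com"}

def shadow_ai_hits(proxy_log_lines):
    """Yield (user, host) for unsanctioned AI traffic.

    Assumes 'timestamp user host ...' whitespace-separated log lines,
    an illustrative format rather than a real proxy schema.
    """
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) >= 3:
            _, user, host = parts[:3]
            if host in AI_HOSTS and host not in SANCTIONED:
                yield user, host

for hit in shadow_ai_hits(["2026-01-23T10:12:01 jdoe api.anthropic.com 443"]):
    print(hit)   # ('jdoe', 'api.anthropic.com')
```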


Ireland to make age checks through government app mandatory for social media

The plan is unprecedented among governments legislating online safety, in that it makes downloading the app, designed by the Government’s chief information officer, mandatory for age assurance. Per the Extra report, “if adults refuse to download the digital wallet, they will no longer be able to access their existing social media accounts.” “Mr. O’Donovan said the process of downloading the app might inconvenience someone for ‘three or four minutes’ but this was a small ask in order to protect children online.” O’Donovan has called the harmful effects of social media and other online content on youth a “severe public health issue.” ... Concerns about age assurance technology persist among privacy rights activists. Since age verification and facial age estimation often involves the processing of biometrics, the potential for sensitive data to be exposed is high. And requiring the process to run through a government product is likely to agitate fears about mass surveillance. O’Donovan says the risk to Ireland’s youth is higher. ... “At the end of the day, if the companies have a social conscience and are interested in the protection of children online, I don’t see why anybody who wouldn’t be trading in Ireland, not just domiciled in Ireland, wouldn’t adopt the format that we’re proposing,” he says. “Some of them do have, you know, something bordering on a social conscience, which is to be welcomed. But others don’t.”


Secure networking: the foundation for the AI era

Global networks have been under siege for years, but recent attacks are more sophisticated and move at unprecedented speed. Many organizations are still relying on outdated infrastructure, with Cisco research revealing that 48% of network assets worldwide are aging or obsolete. This creates vulnerabilities that attackers eagerly exploit. It’s no longer enough to patch and maintain; a fundamental shift in strategy is required. ... Modern networks typically span solutions and services from a range of different vendors, creating layers of complexity that can quickly overwhelm even experienced IT teams. This complexity often translates into vulnerability, especially when secure configurations aren’t consistently implemented or maintained. For many, simplicity and automation are now mission critical. Businesses increasingly need networks where secure configurations, protocols, and features are enabled by default and adapt automatically. ... Organizations now face the challenge of not only detecting threats quickly, but also responding before vulnerabilities can be exploited. There is an urgent need to reduce the attack surface, remove legacy insecure features, and introduce advanced capabilities for detection and response. ... The next generation of security requires networks to seamlessly provide identity management, deep visibility, integrated detection and protection, and streamlined management, while also incorporating advanced technologies like post-quantum cryptography. 


Ransomware gang’s slip-up led to data recovery for 12 US firms

Researchers at Florida-based Cyber Centaurs said Thursday they took advantage of a lapse in operational security by the gang: They found artifacts left behind by Restic, a legitimate open-source backup utility the gang uses to encrypt and exfiltrate victim data into cloud storage environments it controls. The assumption that the gang regularly re-uses Restic-based infrastructure led investigators to an unnamed cloud storage provider where stolen data was dumped. ... While Restic wasn’t used for exfiltration in this particular attack, Cyber Centaurs suspected the gang regularly used it, based on patterns seen in other incidents. It also suspected the infrastructure the crooks used was unlikely to be dismantled even after negotiations ended or payments were made by corporate victims. With that in mind, the incident response team developed a custom enumeration script to identify certain patterns that identify S3-style cloud bucket infrastructure that the stolen data might be going to. The script ran through a curated list of candidate repository identifiers derived from previously observed Restic artifacts. For each candidate, environment variables were set to match the configuration style used by the threat actor, including the repository endpoint and encryption password. Restic was then instructed to list available snapshots in a structured format, enabling investigators to analyze results without interacting with the underlying data.
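The described technique maps onto Restic's real CLI and environment variables (RESTIC_REPOSITORY, RESTIC_PASSWORD, and `restic snapshots --json`); the candidate list and password below are hypothetical stand-ins for the artifacts the team recovered. A sketch of the enumeration loop:

```python
import json
import os
import subprocess

# Hypothetical candidates derived from previously observed Restic
# artifacts (endpoint style plus re-used repository passwords).
CANDIDATES = [
    ("s3:https://storage.example.net/exfil-bucket", "observed-repo-password"),
]

def list_snapshots(repository: str, password: str):
    """Ask restic to enumerate snapshots in a candidate repository.

    --json yields structured output, so snapshots can be inventoried
    without interacting with the underlying stolen data.
    """
    env = dict(os.environ,
               RESTIC_REPOSITORY=repository,
               RESTIC_PASSWORD=password)
    result = subprocess.run(
        ["restic", "snapshots", "--json"],
        env=env, capture_output=True, text=True, timeout=60,
    )
    return json.loads(result.stdout) if result.returncode == 0 else None

for repo, pw in CANDIDATES:
    snaps = list_snapshots(repo, pw)
    if snaps:
        print(repo, "->", len(snaps), "snapshots")
```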


The Real Attack Surface Isn’t Code Anymore — It’s Business Users

Traditional AppSec programs are optimized for code stored in repositories, pushed through pipelines, and deployed through CI/CD, not for no-code apps, connectors, and automations created on platforms like Power Platform, ServiceNow, Salesforce, and UiPath. Meanwhile, most organizations assume business-user automations are simple, low-risk, and limited in scope. The reality is more complex. Citizen developers now outnumber traditional software developers by an order of magnitude. Plus, they are wiring together data sources, triggering multi-system workflows, and calling APIs, not just building basic macros or departmental utilities. Because these automations are created outside engineering governance, traditional monitoring tools never see them. ... What emerges is a shadow layer of business logic that sits entirely outside the boundaries of traditional AppSec, DevSecOps, and identity programs. As long as ownership remains fragmented and discovery elusive, security debt continues to grow unchecked. ... We’re entering an era where the most dangerous vulnerabilities aren’t in the code AppDev teams write, but in the thousands of workflows and automations business users build on their own. The sooner organizations recognize and confront the invisible no-code estate, the faster they can reduce the security debt accumulating inside their infrastructure.

Daily Tech Digest - January 22, 2026


Quote for the day:

"Lost money can be found. Lost time is lost forever. Protect what matters most." -- @ValaAfshar



PTP is the New NTP: How Data Centers Are Achieving Real-Time Precision

Precision Time Protocol (PTP) is more complex to implement than NTP but worth the extra effort, enabling a whole new level of timing synchronization accuracy. ... Keeping network time in sync is important on any network. But it’s especially critical in data centers, which are typically home to large numbers of network-connected devices, and where small inconsistencies in network timing could snowball into major network synchronization problems. ... NTP works very well in situations where networks can tolerate timing inconsistencies of up to a few milliseconds (meaning thousandths of a second). But beyond this, NTP-based time syncing is less reliable due to limitations ... Unlike NTP, PTP doesn’t rely solely on a server-client model for syncing time across networked devices. Instead, it uses time servers in conjunction with a method called hardware timestamping on client devices. Hardware timestamping involves specialized hardware components, usually embedded in network interface cards (NICs), to track time. Central time servers still exist under PTP. But rather than having software on servers connect to the time servers, hardware devices optimized for the task do this work. The devices also include built-in clocks, allowing them to record time data faster than they could if they had to forward it to the generic clock on a server.
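Both protocols rest on the same two-way time-transfer arithmetic; PTP's edge is that the receive and send timestamps are taken in NIC hardware rather than in software, shrinking the error terms. A sketch of the math, using PTP's sync/delay-request timestamp roles:

```python
# Two-way time transfer with PTP's timestamp roles:
#   t1 master sends Sync        t2 client receives it
#   t3 client sends Delay_Req   t4 master receives it
# Assumes a symmetric network path; asymmetry becomes offset error.
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # client clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay

# Example: client clock is 1.5 ms fast, path delay 0.5 ms each way.
print(offset_and_delay(0.0, 0.002, 0.010, 0.009))  # -> (0.0015, 0.0005)
```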


Why AI adoption requires a dedicated approach to cyber governance

Today enterprises are facing unprecedented internal pressure to adopt AI tools at speed. Business units are demanding AI solutions to remain competitive, drive efficiency, and innovate faster. But existing cyber governance and third-party risk management processes were never designed to operate at this pace. ... Without modernized cyber governance and AI-ready risk management capabilities, organizations are forced to choose between speed and safety. To truly enable the business, governance frameworks must evolve to match the speed, scale, and dynamism of AI adoption – transforming security from a gatekeeper into a business enabler. ... What’s more, compliance doesn’t guarantee security. DORA, NIS2, and other regulatory frameworks set only minimum requirements and rely on reporting at specific points in time. While these reports are accurate when submitted, they capture only a snapshot of the organization’s security posture, so gaps such as human errors, legacy system weaknesses, or risks from fourth- and Nth-party vendors can still emerge afterward. Moreover, human weakness is always present, and legacy systems can fail at crucial moments. ... While there’s no magic wand, there are tried-and-tested approaches that resolve and mitigate the risks of AI vendors and solutions. Mapping the flow of data around the organization helps reveal how it’s used and resolve blind spots. Requiring AI tools to include references for their outputs ensures that risk decisions are trustworthy and reliable.


What CIOs get wrong about integration strategy and how to fix it

As Gartner advises, business and IT should be equal partners in the definition of integration strategy, representing a radical departure from the traditional IT delivery and business “project sponsorship” model. This close collaboration and shared accountability result in dramatically higher success rates ... A successful integration strategy starts by aligning with the organization’s business drivers and strategic objectives while identifying the integration capabilities that need to be developed. Clearly defining the goals of technology implementation, establishing governance frameworks and decision-making authority, and setting standards and principles to guide integration choices are essential. Success metrics should be tied to business outcomes, and the integration approach should support broader digital transformation initiatives. ... Create cross-functional data stewardship teams with authority to make binding decisions about data standards and quality requirements. Document what data needs to be shared between systems and which applications are the “source of truth.” Define and document any regulatory or performance requirements to guide your technical planning. ... Integrations that succeed in production are designed with clear system-of-record rules, traceable transactions, explicit recovery paths and well-defined operational ownership. Preemptive integration is not about reacting faster — it’s about ensuring failures never reach the business.
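System-of-record rules work best as data that every integration consults, not tribal knowledge. A minimal sketch with hypothetical system and entity names:

```python
# Hypothetical registry: the application authoritative for each entity.
SOURCE_OF_TRUTH = {
    "customer": "crm",
    "invoice": "erp",
    "product": "pim",
}

def assert_writable(system: str, entity: str) -> None:
    """Only the system of record may originate writes for an entity."""
    owner = SOURCE_OF_TRUTH.get(entity)
    if owner != system:
        raise PermissionError(
            f"{system!r} may not write {entity!r}; source of truth is {owner!r}"
        )

assert_writable("erp", "invoice")      # fine
# assert_writable("crm", "invoice")    # -> PermissionError
```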


CFOs are now getting their own 'vibe coding' moment thanks to Datarails

For the modern CFO, the hardest part of the job often isn't the math—it's the storytelling. After the books are closed and the variances calculated, finance teams spend days, sometimes weeks, manually copy-pasting charts into PowerPoint slides to explain why the numbers moved. ... Datarails’ new agents sit on top of a unified data layer that connects these disparate systems. Because the AI is grounded in the company’s own unified internal data, it avoids the hallucinations common in generic LLMs while offering a level of privacy required for sensitive financial data. "If the CFO wants to leverage AI on the CFO level or the organization data, they need to consolidate the data," explained Datarails CEO and co-founder Didi Gurfinkel in an interview with VentureBeat. By solving that consolidation problem first, Datarails can now offer agents that understand the context of the business. "Now the CFO can use our agents to run analysis, get insights, create reports... because now the data is ready," Gurfinkel said. ... "Very soon, the CFO and the financial team themselves will be able to develop applications," Gurfinkel predicted. "The LLMs become so strong that in one prompt, they can replace full product runs." He described a workflow where a user could simply prompt: "That was my budget and my actual of the past year. Now build me the budget for the next year."


The internet’s oldest trust mechanism is still one of its weakest links

Attackers continue to rely on domain names as an entry point into enterprise systems. A CSC domain security study finds that large organizations leave this part of their attack surface underprotected, even as attacks become more frequent. ... Large companies continue to add baseline protections, though adoption remains uneven. Email authentication shows the most consistent improvement, driven by phishing activity and regulatory pressure. Organizations still leave email domains partially protected, which allows spoofing to persist. Other protections see much slower uptake. ... Consumer-oriented registrars tend to emphasize simplicity and cost. Organizations that rely on them often lack access to protections that limit the impact of account compromise or social engineering. Risk increases as domain portfolios grow and change. ... Brand impersonation through domain spoofing remains widespread. Lookalike domains tied to major brands are often owned by third parties. Some appear inactive while still supporting email activity. Inactive domains with mail records allow attackers to send phishing messages that appear associated with trusted brands. Others are parked with advertising networks or held for later use. A smaller portion hosts malicious content, though dormant domains can be activated quickly. ... Gaps appear in infrastructure-related areas. DNS redundancy and registry lock adoption lag, and many unicorns rely on consumer-grade registrars. These limitations become more pronounced as operations scale.
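Email-authentication coverage is also one of the easiest gaps to audit yourself. A sketch using dnspython (assumed available): a domain, including a dormant lookalike that still has mail records, is far easier to spoof if it publishes no DMARC policy.

```python
import dns.resolver  # dnspython, assumed available

def dmarc_policy(domain: str) -> str | None:
    """Return the domain's DMARC record, or None if unprotected."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

# A lookalike or dormant domain with MX records but no DMARC policy
# is an easy spoofing vehicle.
print(dmarc_policy("example.com"))
```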


Misconfigured demo environments are turning into cloud backdoors to the enterprise

Internal testing, product demonstrations, and security training are critical practices in cybersecurity, giving defenders and everyday users the tools and wherewithal to prevent and respond to enterprise threats. However, according to new research from Pentera Labs, when left in default or misconfigured states, these “test” and “demo” environments are yet another entry point for attackers — and the issue even affects leading security companies and Fortune 500 companies that should know better. ... After identifying an exposed instance of Hackazon, a free, intentionally vulnerable test site developed by Deloitte, during a routine cloud security assessment for a client, Yaffe performed a five-step hunt for exposed apps. His team uncovered 1,926 “verified, live, and vulnerable applications,” more than half of which were running on enterprise-owned infrastructure on AWS, Azure, and Google Cloud platforms. They then discovered 109 exposed credential sets, many accessible via a low-priority lab environment, tied to overly privileged identity access management (IAM) roles. These often granted “far more access” than a ‘training’ app should, Yaffe explained, and provided attackers: administrator-level access to cloud accounts, as well as full access to S3 buckets, GCS, and Azure Blob Storage; the ability to launch and destroy compute resources and read and write to secrets managers; and permissions to interact with container registries where images are stored, shared, and deployed.


Cyber Insights 2026: API Security – Harder to Secure, Impossible to Ignore

“We’re now entering a new API boom. The previous wave was driven by cloud adoption, mobile apps, and microservices. Now, the rise of AI agents is fueling a rapid proliferation of APIs, as these systems generate massive, dynamic, and unpredictable requests across enterprise applications and cloud services,” comments Jacob Ideskog ... The growing use of agentic AI systems and the way they act autonomously, making decisions and triggering workflows, is ballooning the number of APIs in play. “It isn’t just ‘I expose one billing API’,” he continues, “now there are dozens of APIs that feed data to LLMs or AI agents, accept decisions from AI agents, facilitate orchestration between services and micro-apps, and potentially expose ‘agentic’ endpoints ... APIs have been a major attack surface for years – the problem is ongoing. Starting in 2025 and accelerating through 2026 and beyond, the rapid escalation of enterprise agentic AI deployments will multiply the number of APIs and increase the attack surface. That alone suggests that attacks against APIs will grow in 2026. But the attacks themselves will scale and be more effective through adversaries’ use of their own agentic AI. Barr explains: “Agentic AI means that bad actors can automate reconnaissance, probe API endpoints, chain API calls, test business-logic abuse, and execute campaigns at machine scale. Possession of an API endpoint, particularly a self-service, unconstrained one, becomes a lucrative target. And AI can generate payloads, iterate quickly, bypass simple heuristics, and map dependencies between APIs.”


Complex VoidLink Linux Malware Created by AI

An advanced cloud-first malware framework targeting Linux systems was created almost entirely by artificial intelligence (AI), a move that signals significant evolution in the use of the technology to develop advanced malware. VoidLink — which comprises various cloud-focused capabilities and modules and is designed to maintain long-term persistent access to Linux systems — is the first case of wholly original malware being developed by AI, according to Check Point Research, which discovered and detailed the malware framework last week. While other AI-generated malware exists, it's typically "been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open-source malware tools," ... The malware framework, linked to a suspected, unspecified Chinese actor, includes custom loaders, implants, rootkits, and modular plug-ins. It also automates evasion as much as possible by profiling a Linux environment and intelligently choosing the best strategy for operating without detection. Indeed, as Check Point researchers tracked VoidLink in real time, they watched it transform quickly from what appeared to be a functional development build into a comprehensive, modular framework that became fully operational in a short timeframe. However, while the malware itself was high-functioning out of the gate, VoidLink's creator proved to be somewhat sloppy in their execution.


What’s causing the memory shortage?

Right now, the industry is suffering the worst memory shortage in history, and that’s with three core suppliers: Micron Technology, SK Hynix, and Samsung. TrendForce, a Taipei-based market researcher that specializes in the memory market, recently said it expects average DRAM memory prices to rise between 50% and 55% this quarter compared to the fourth quarter of 2025. Samsung recently issued a similar warning. So what caused this? Two letters: AI. The rush to build AI-oriented data centers has resulted in virtually all of the memory supply being consumed by data centers. AI requires massive amounts of memory to process its gigantic data sets. A traditional server would usually come with 32 GB to 64 GB of memory, while AI servers have 128 GB or more. ... There are other factors at play here, too, of course. The industry is in a transition period between DDR4 and DDR5, as DDR5 comes online and DDR4 fades away. These transitions to a new memory format are never quick or easy, and it usually takes years to make a full shift. There has also been increased demand from both client and server sides. With Microsoft ending support for Windows 10, a whole lot of laptops are being replaced with Windows 11 systems, and new laptops come with DDR5 memory — the same memory used in an AI server. ... “What’s likely to happen, from a market perspective, is we’ll see the market grow less in ’26 than we had anticipated, but ASPs are likely to stay or increase. ...” he said.


OpenAI CFO Comments Signal End of AI Hype Cycle

By focusing on “practical adoption,” OpenAI can close the gap between what AI now makes possible and how people, companies, and countries are using it day to day. “The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes,” she noted. “Infrastructure expands what we can deliver,” she continued. “Innovation expands what intelligence can do. Adoption expands who can use it. Revenue funds the next leap. This is how intelligence scales and becomes a foundation for the global economy.” The framing reflects a shift from big-picture AI promise to day-to-day deployment and measurable results. ... There’s also a gap between what AI can do and how people are actually using it in daily life, noted Natasha August, founder of RM11, a content monetization platform for creators in Carrollton, Texas. “AI tools are incredibly powerful, but for many people and businesses, it’s still unclear how to turn that power into something practical like saving time, making money, or improving how they work,” she told TechNewsWorld. In business, the gap lies between AI’s raw analytical capabilities and its ability to drive tangible, repeatable business outcomes, maintained Nithin Mummaneni ... “The winning play is less ‘AI that answers’ and more ‘AI that completes tasks safely and predictably,'” he continued. “Adoption happens when AI becomes part of the workflow, not a separate destination.”