Daily Tech Digest - December 02, 2025


Quote for the day:

"I am not a product of my circumstances. I am a product of my decisions." -- Stephen Covey



The CISO’s paradox: Enabling innovation while managing risk

When security understands revenue goals, customer promises and regulatory exposure, guidance becomes specific and enabling. Begin by embedding a security liaison with each product squad so there is always a known face to engage in identity, data flows, logging and encryption decisions as they form. We should not want to see engineers opening two-week tickets for a simple question. There should be open “office hours,” chat channels and quick calls so they can get immediate feedback on decisions like API design, encryption requirements and regional data moves. ... Show up at sprint planning and early design reviews to ask the questions that matter — authentication paths, least-privilege access, logging coverage and how changes will be monitored in production through SIEM and EDR. When security officers sit at the same table, the conversation changes from “Can we do this?” to “How do we do this securely?” and better outcomes follow from day one. ... When developers deploy code multiple times a day, a “final security review” before launch just wouldn’t work. This traditional, end-of-line gating model doesn’t just block innovation but also fails to catch real-world risks. To be effective, security must be embedded during development, not just inspected after. ... This discipline must further extend into production. Even with world-class DevSecOps, we know a zero-day or configuration drift can happen. 


Resilience Means Fewer Recoveries, Not Faster Ones

Resilience has become one of the most overused words in management. Leaders praise teams for “pushing through” and “bouncing back,” as if the ability to absorb endless strain were proof of strength. But endurance and resilience are not the same. Endurance is about surviving pressure. Resilience is about designing systems so people don’t break under it. Many organizations don’t build resilience; they simply expect employees to endure more. The result is a quiet crisis of exhaustion disguised as dedication. Teams appear committed but are running on fumes. ... In most organizations, a small group carries the load when things get tough — the dependable few who always say yes to the most essential tasks. That pattern is unsustainable. Build redundancy into the system by cross-training roles, rotating responsibilities, and decentralizing authority. The goal isn’t to reduce pressure to zero; it’s to distribute it evenly enough so that no one person becomes the safety net for everyone else. ... Too many managers equate resilience with recovery, celebrating those who saved the day after the crisis is over. But true resilience shows up before the crisis hits. Observe your team to recognize the people who spot problems early, manage risks quietly, or improve workflows so that breakdowns don’t happen. Crisis prevention doesn’t create dramatic stories, but it builds the calm, predictable environment that allows innovation to thrive.


Facial Recognition’s Trust Problem

Surveillance by facial recognition is almost always in a public setting, so it’s one-to-many. There is a database and many cameras (usually a large number of cameras – an estimated one million in London and more than 30,000 in New York). These cameras capture images of people and compare them to the database of known images to identify individuals. The owner of the database may include watchlists comprising ‘people of interest’, so the ability to track persons of interest from one camera to another is included. But the process of capturing and using the images is almost always non-consensual. People don’t know when, where or how their facial image was first captured, and they don’t know where their data is going downstream or how it is used after initial capture. Nor are they usually aware of the facial recognition cameras that record their passage through the streets. ... Most people are wary of facial recognition systems. They are considered personally intrusive and privacy invasive. Capturing a facial image and using it for unknown purposes is not something that is automatically trusted. And yet it is not something that can be ignored – it’s part of modern life and will continue to be so. In the two primary purposes of facial recognition – access authentication and the surveillance of public spaces – the latter is the least acceptable. It is used for the purpose of public safety but is fundamentally insecure. What exists now can be, and has been, hijacked by criminals for their own purposes. 


The Urgent Leadership Playbook for AI Transformation

Banking executives talk enthusiastically about AI. They mention it frequently in investor presentations, allocate budgets to pilot programs, and establish innovation labs. Yet most institutions find themselves frozen between recognition of AI’s potential and the organizational will to pursue transformation aggressively. ... But waiting for perfect clarity guarantees competitive disadvantage. Even if only 5% of banks successfully embed AI across operations — and the number will certainly grow larger — these institutions will alter industry dynamics sufficiently to render non-adopters progressively irrelevant. Early movers establish data advantages, algorithmic sophistication, and operational efficiencies that create compounding benefits difficult for followers to overcome. ... The path from today’s tentative pilots to tomorrow’s AI-first institution follows a proven playbook developed by "future-built" companies in other sectors that successfully generate measurable value from AI at enterprise scale. ... Scaling AI requires reimagining organizational structures around technology-human collaboration based on three-layer guardrails: agent policy layers defining permissible actions, assurance layers providing controls and audit trails, and human responsibility layers assigning clear ownership for each autonomous domain.


Creative cybersecurity strategies for resource-constrained institutions

There’s a well-worn phrase that gets repeated whenever budgets are tight: “We have to do more with less.” I’ve never liked it because it suggests the team wasn’t already giving maximum effort. Instead, the goal should be to “use existing resources more effectively.” ... When you understand the users’ needs and learn how they want to work, you can recommend solutions that are both secure and practical. You don’t need to be an expert in every research technology. Start by paying attention to the services offered by cloud providers and vendors. They constantly study user pain points and design tools to address them. If you see a cloud service that makes it easier to collect, store, or share scientific data, investigate what makes it attractive. ... First, understand how your policies and controls affect the work. Security shouldn’t be developed in a vacuum. If you don’t understand the impact on researchers, developers, or operational teams, your controls may not be designed and implemented in a manner that helps enable the business. Second, provide solutions, don’t just say no. A security team that only rejects ideas will be thought of as a roadblock, and users will do their best to avoid engagement. A security team that helps people achieve their goals securely becomes one that is sought out, and ultimately ensures the business is more secure.


Architecting Intelligence: A Strategic Framework for LLM Fine-Tuning at Scale

As organizations race to harness the transformative power of Large Language Models, a critical gap has emerged between experimental implementations and production-ready AI systems. While prompt engineering offers a quick entry point, enterprises seeking competitive advantage must architect sophisticated fine-tuning pipelines that deliver consistent, domain-specific intelligence at scale. The landscape of LLM deployment presents three distinct approaches for fine-tuning, each with architectural implications. The answer lies in understanding the maturity curve of AI implementation. ... Fine-tuning represents the architectural apex of AI implementation. Rather than relying on prompts or external knowledge bases, fine-tuning modifies the AI model itself by continuing its training on domain-specific data. This embeds organizational knowledge, reasoning patterns, and domain expertise directly into the model’s parameters. Think of it this way: a general-purpose AI model is like a talented generalist who reads widely but lacks deep expertise. ... The decision involves evaluating several factors. Model scale matters because larger models generally offer better performance but demand more computational resources. An organization must balance the quality improvements of a 70-billion-parameter model against the infrastructure costs and latency implications. 


How smart tech innovation is powering the next generation of the trucking industry

Real-time tracking has now become the backbone of digital trucking. These systems provide the real time updates on the location of vehicles, fuel consumption, driving behavior and performance of the engine. Fleets also make informed decisions based on data to directly influence operational efficiencies. Furthermore, the IoT-enabled ‘one app’ solution monitors cargo temperature, location, and overall load conditions throughout the journey. ... Now, with AI driven algorithms, fleet managers anticipate most optimal routes via analysis of historical demand, weather patterns, and traffic. AI-powered intelligent route optimization applications allow fleets to optimize fuel usage and lower travel times. Additionally, with predictive maintenance capabilities, trucking companies are less concerned about vehicle failures, because a more proactive approach is used. AI tools spot anomalies in engine data and warn the fleet owners before expensive vehicle failure occurs, improving the overall fleet operations. ... The trucking industry is transforming faster than ever before. Technologies are turning every vehicle into a connected network and digital asset. Fleets can forecast demand, optimize routes, preserve cargo quality, and ensure safety at every step. The smarter goals align seamlessly with cost saving opportunities as logistics aggregators transition from the manual heavy paperwork to the digital locker convenience


Why every business needs to start digital twinning in 2026

Digital twins have begun to stand out because they’re not generic AI stand-ins; at their best they’re structured behavioural models grounded in real customer data. They offer a dependable way to keep insights active, consistent and available on demand. That is where their true strategic value lies. More granularly, the best performing digital twins are built on raw existing customer insights – interview transcripts, survey results, and behavioural data. But rather than just summarising the data, they create a representation of how a particular individual tends to think. Their role isn’t to imitate someone’s exact words, but to reflect underlying logic, preferences, motivations and blind spots. ... There’s no denying the fact that organisations have had a year of big promises and disappointing AI pilots, with the result that businesses are far more selective about what genuinely moves the needle. For years, digital twinning has been used to model complex systems in engineering, aerospace and manufacturing, where failure is expensive and iteration must happen before anything becomes real. With the rise of generative AI, the idea of a digital twin has expanded. After a year of rushed AI pilots and disappointing ROI, leaders are looking for approaches that actually fit how businesses work. Digital twinning does exactly that: it builds on familiar research practices, works inside existing workflows, and lets teams explore ideas safely before committing to them.


From compliance to confidence: Redefining digital transformation in regulated enterprises

Compliance is no longer the brake on digital transformation. It is the steering system that determines how fast and how far innovation can go. ... Technology rarely fails because of a lack of innovation. It fails when organizations lack the governance maturity to scale innovation responsibly. Too often, compliance is viewed as a bottleneck. It’s a scalability accelerator when embedded early. ... When governance and compliance converge, they unlock a feedback loop of trust. Consider a payer-provider network that unified its claims, care and compliance data into a single “truth layer.” Not only did this integration reduce audit exceptions by 45%, but it also improved member-satisfaction scores because interactions became transparent and consistent. ... No transformation from compliance to confidence happens without leadership alignment. The CIO sits at the intersection of technology, policy and culture and therefore carries the greatest influence over whether compliance is reactive or proactive. ... Technology maturity alone is not enough. The workforce must trust the system. When employees understand how AI or analytics systems make decisions, they become more confident using them. ... Confidence is not the absence of regulation; it’s mastery of it. A confident enterprise doesn’t fear audits because its systems are inherently explainable. 


AI agents are already causing disasters - and this hidden threat could derail your safe rollout

Although artificial intelligence agents are all the rage these days, the world of enterprise computing is experiencing disasters in the fledgling attempts to build and deploy the technology. Understanding why this happens and how to prevent it is going to involve lots of planning in what some are calling the zero-day deliberation. "You might have hundreds of AI agents running on a user's behalf, taking actions, and, inevitably, agents are going to make mistakes," said Anneka Gupta, chief product officer for data protection vendor Rubrik. ... Gupta talked about more than just a product pitch. Fixing well-intentioned disasters is not the biggest agent issue, she said. The big picture is that agentic AI is not moving forward as it should because of zero-day issues. "Agent Rewind is a day-two issue," said Gupta. "How do we solve for these zero-day issues to start getting people moving faster -- because they are getting stuck right now." ... According to Gupta, the true problem of agent deployment is all the work that begins with the chief information security officer, CISO, the chief information officer, CIO, and other senior management to figure out the scope of agents. AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the AI program to carry out a wider variety of actions. ... The real zero-day obstacle is how to understand what agents are supposed to be doing, and how to measure what success or failure would look like.

Daily Tech Digest - December 01, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart



Engineers for the future: championing innovation through people, purpose and progress

Across the industry, Artificial Intelligence (AI) and automation are transforming how we design, build and maintain devices, while sustainability targets are prompting businesses to rethink their operations. The challenge for engineers today is to balance technological advancement with environmental responsibility and people-centered progress. ... The industry faces an ageing workforce, so establishing new pathways into engineering has become increasingly important. Diversity, Equity & Inclusion (DE&I) initiatives play an essential role here, designed to attract more women and under-represented groups into the field. Building teams that reflect a broader mix of backgrounds and perspectives does more than close the skills gap: it drives creativity and strengthens the innovation needed to meet future challenges in areas such as AI and sustainability. Engineering has always been about solving problems, but today’s challenges, from digital transformation to decarbonization, demand an ‘innovation mindset’ that looks ahead and designs for lasting impact. ... The future of engineering will not be defined by one technological breakthrough. It will be shaped by lots of small, deliberate improvements – smarter maintenance, data-driven decisions, lower emissions, recyclability – that make systems more efficient and resilient. Progress will come from engineers who continue to refine how things work, linking technology, sustainability and human insight. 


Why data readiness defines GenAI success

Enterprises are at varying stages of maturity. Many do not yet have the strong data foundation required to support scaling AI, especially GenAI. Our Intelligent Data Management Cloud (IDMC) addresses this gap by enabling enterprises to prepare, activate, manage, and secure their data. It ensures that data is intelligent, contextual, trusted, compliant, and secure. Interestingly, organisations in regulated industries tend to be more prepared because they have historically invested heavily in data hygiene. But overall, readiness is a journey, and we support enterprises across all stages. ... The rapid adoption of agents and AI models has dramatically increased governance complexity. Many enterprises already manage tens of thousands of data tasks. In the AI era, this scales to tens of thousands of agents as well. The solution lies in a unified metadata-driven foundation. An enterprise catalog that understands entities, relationships, policies, and lineage becomes the single source of truth. This catalog does not require enterprises to consolidate immediately; it can operate across heterogeneous catalogs, but the more an enterprise consolidates, the more complexity shifts from people and processes into the catalog itself. Auto-cataloging is critical. Automatically detecting relationships, lineage, governance rules, compliance requirements, and quality constraints reduces manual overhead and ensures consistency. 


12 signs the CISO-CIO relationship is broken — and steps to fix it

“It’s critical that those in these two positions get along with each other, and that they’re not only collegial but collaborative,” he says. Yes, they each have their own domain and their own set of tasks and objectives, but the reality is that each one cannot get that work done without the other. “So they have to rely on one another, and they have to each recognize that they must rely on each other.” Moreover, it’s not just the CIO and CISO who suffer when they aren’t collegial and collaborative. Palmore and other experts say a poor CIO-CISO relationship also has a negative impact on their departments and the organization as a whole. “A strained CIO-CISO relationship often shows up as misalignment in goals, priorities, or even communication,” says Marnie Wilking, CSO at Booking.com. ... CIOs and CISOs both have incentives to improve a problematic relationship. As Lee explains, “The CIO-CISO relationship is critical. They both have to partner effectively to achieve the organization’s technology and cybersecurity goals. All tech comes with cybersecurity exposure that can impact the successful implementation of the tech and business outcomes; that’s why CIOs have to care about cybersecurity. And CISOs have to know that cybersecurity exists to achieve business outcomes. So they have to work together to achieve each other’s priorities.” CISOs can take steps to develop a better rapport with their CIOs, using the disruption happening today


Meeting AI-driven demand with flexible and scalable data centers

Analysts predict that by 2030, 80 percent of the AI workloads will be for inference rather than training, which led Aitkenhead to say that the size of the inference capacity expansion is “just phenomenal”. Additionally, neo cloud companies such as CoreWeave and G‑Core are now buying up large volumes of hyperscale‑grade capacity to serve AI workloads. To keep up with this changing landscape, IMDC is ensuring that it has access to large amounts of carbon-free power and that it has the flexible cooling infrastructure that can adapt to customers’ requirements as they change over time. ... The company is adopting a standard data center design that can accommodate both air‑based and water‑based cooling, giving customers the freedom to choose any mix of the two. The design is deliberately oversized (Aitkenhead said it can provide well over 100 percent of the cooling capacity initially needed) so it can handle rising rack densities. ... This expansion is financed entirely from Iron Mountain’s strong, cash‑generating businesses, which gives the data center arm the capital to invest aggressively while improving cost predictability and operational agility. With a revamped design construction process and a solid expansion strategy, IMDC is positioning itself to capture the surging demand for AI‑driven, high‑density workloads, ensuring it can meet the market’s steep upward curve and remain “exciting” and competitive in the years ahead.


AI Agents Lead The 8 Tech Trends Transforming Enterprise In 2026

Step aside chatbots; agents are the next stage in the evolution of enterprise AI, and 2026 will be their breakout year. ... Think of virtual co-workers, always-on assistants monitoring and adjusting processes in real-time, and end-to-end automated workflows requiring minimal human intervention. ... GenAI is moving rapidly from enterprise pilots to operational adoption, transforming knowledge workflows; generating code for software engineers, drafting contracts for legal teams, and creating schedules and action plans for project managers. ... Enterprise organizations are outgrowing generic cloud platforms and increasingly looking to adopt Industry Cloud Platforms (ICP), offering vertical solutions encompassing infrastructure, applications and data. ... This enterprise trend is driven by both the proliferation of smart, connected IoT devices and the behavioral shift to remote and hybrid working. The zero-trust edge (ZTE) concept refers to security functionality built into edge devices, from industrial machinery to smartphones, via cloud platforms, to ensure consistent administration of security functionality. ... Enterprises are responding by adopting green software engineering principles for carbon efficiency and adopting AI to monitor their activities. In 2026, the strategy is “green by design”, reflecting the integration of sustainability into enterprise DNA.


Preparing for the Quantum Future: Lessons from Singapore

While PQC holds promise, it faces challenges such as larger key sizes, the need for side-channel-resistant implementations, and limited adoption in standard protocols like Transport Layer Security (TLS) and Secure Shell (SSH). ... In contrast to PQC, QKD takes a different approach: instead of relying on mathematics, it uses the laws of quantum physics to generate and exchange encryption keys securely. If an attacker tries to intercept the key exchange, the quantum state changes, revealing the intrusion. The strength of this approach is that it is not based on mathematics and, therefore, cannot be broken because cracking it does not depend on an algorithm. QKD is specifically useful for strategic sites or large locations with important volumes of data transfers. ... Nation-scale strategies for quantum-safe networks are vital to prepare for Q-Day and ensure protection against quantum threats. To this end, Singapore has started a program called the National Quantum Safe Network (NQSN) to build a nationwide testbed and platform for quantum-safe technologies using a real-life fibre network. ... In a step towards securing future quantum threats, ST Engineering is also developing a Quantum-Safe Satellite Network for cross-border applications, supported by mobile and fixed Quantum Optical Ground Stations (Q-OGS). Space QKD will complement terrestrial QKD to form a global quantum-safe network. The last mile, which is typically copper cable, will rely on PQC for protection.


Superintelligence: Should we stop a race if we don’t actually know where the finish line is?

The term ‘superintelligence’ encapsulates the concerns raised. It refers to an AI system whose capabilities would surpass those of humans in almost every field: logical reasoning, creativity, strategic planning and even moral judgement. However, in reality, the situation is less clear-cut: no one actually knows what such an entity would be like, or how to measure it. Would it be an intelligence capable of self-improvement without supervision? An emerging consciousness? Or simply a system that performs even more efficiently than our current models? ... How can a pause be enforced globally when the world’s major powers have such divergent economic and geopolitical interests? The United States, China and the European Union are in fierce competition to dominate the strategic sector of artificial intelligence; slowing down unilaterally would risk losing a decisive advantage. However, for the signatories, the absence of international coordination is precisely what makes this pause essential.  ... Researchers themselves recognise the irony of the situation: they are concerned about a phenomenon that they cannot yet describe. Superintelligence is currently a theoretical concept, a kind of projection of our anxieties and ambitions. But it is precisely this uncertainty that warrants caution. If we do not know the exact nature of the finish line, should we really keep on racing forward without knowing what we are heading for?


Treating MCP like an API creates security blind spots

APIs generally don’t cause arbitrary, untrusted code to run in sensitive environments. MCP does though, which means you need a completely different security model. LLMs treat text as instructions, they follow whatever you feed them. MCP servers inject text into that execution text. ... Security professionals might also erroneously assume that they can trust all clients registering with their MCP servers, this is why the MCP spec is updating. MCP builders will have to update their code to receive the additional client identification metadata, as dynamic client registration and OAuth alone are not always enough.  Another trust model that is misunderstood is when MCP users confuse vendor reputation with architectural trustworthiness. ... Lastly, and most importantly, MCP is a protocol (not a product). And protocols don’t offer a built-in “trust guarantee.” Ultimately, the protocol only describes how servers and clients communicate through a unified language. ... Risks can also emerge from the names of tools within MCP servers. If tool names are too similar, the AI model can become confused and select the wrong tool. Malicious actors can exploit this in an attack vector known as Tool Impersonation or Tool Mimicry. The attacker simply adds a tool within their malicious server that tricks the AI into using it instead of a similarly named legitimate tool in another server you use. This can lead to data exfiltration, credential theft, data corruption, and other costly consequences. 


Ontology is the real guardrail: How to stop AI agents from misunderstanding your business

Building effective agentic solutions requries an ontology-based single source of truth. Ontology is a business definition of concepts, their hierarchy and relationships. It defines terms with respect to business domains, can help establish a single-source of truth for data and capture uniform field names and apply classifications to fields. An ontology may be domain-specific (healthcare or finance), or organization-specific based on internal structures. Defining an ontology upfront is time consuming, but can help standardize business processes and lay a strong foundation for agentic AI. ... Agents designed in this manner and tuned to follow an ontology can stick to guardrails and avoid hallucinations that can be caused by the large language models (LLM) powering them. For example, a business policy may define that unless all documents associated with a loan do not have verified flags set to "true," the loan status should be kept in “pending” state. Agents can work around this policy and determine what documents are needed and query the knowledge base. ... With this method, we can avoid hallucinations by enforcing agents to follow ontology-driven paths and maintain data classifications and relationships. Moreover, we can scale easily by adding new assets, relationships and policies that agents can automatically comply to, and control hallucinations by defining rules for the whole system rather than individual entities. 


The end of apps? Imagining software’s agentic future

Enterprise software vendors are scrambling to embed agents into existing applications. Oracle Corp. claims to have more than 600 embedded AI agents in its Fusion Cloud and Industry Applications. SAP says it has more than 40.  ... This shift is not simply about embedding AI into existing products, as generative AI is supplanting conventional menus and dashboards. It’s a rethinking of software’s core functions. Many experts working on the agentic future say the way software is built, packaged and used is about to change profoundly. Instead of being a set of buttons and screens, software will become a collaborator that interprets goals, orchestrates processes, adapts in real time and anticipates what users need based on their behavior and implied preferences. ... The coming changes to enterprise software will go beyond the interface. AI will force monolithic software stacks to give way to modular, composable systems stitched together by agents using standards such as the Model Control Protocol, the Agent2Agent Protocol and the Agent Communication Protocol that IBM Corp. recently donated to the Linux Foundation. “By 2028, AI agent ecosystems will enable networks of specialized agents to dynamically collaborate across multiple applications, allowing users to achieve goals without interacting with each application individually,” Gartner recently predicted.

Daily Tech Digest - November 30, 2025


Quote for the day:

"The real leader has no need to lead - he is content to point the way." -- Henry Miller



Four important lessons about context engineering

Modern LLMs operate with context windows ranging from 8K to 200K+ tokens, with some models claiming even larger windows. However, several technical realities shape how we should think about context. ... Research has consistently shown that LLMs experience attention degradation in the middle portions of long contexts. Models perform best with information placed at the beginning or end of the context window. This isn’t a bug. It’s an artifact of how transformer architectures process sequences. ... Context length impacts latency and cost quadratically in many architectures. A 100K token context doesn’t cost 10x a 10K context, it can cost 100x in compute terms, even if providers don’t pass all costs to users. ... The most important insight: more context isn’t better context. In production systems, we’ve seen dramatic improvements by reducing context size and increasing relevance. ... LLMs respond better to structured context than unstructured dumps. XML tags, markdown headers, and clear delimiters help models parse and attend to the right information. ... Organize context by importance and relevance, not chronologically or alphabetically. Place critical information early and late in the context window. ... Each LLM call is stateless. This isn’t a limitation to overcome, but an architectural choice to embrace. Rather than trying to maintain massive conversation histories, implement smart context management


What Fuels AI Code Risks and How DevSecOps Can Secure Pipelines

AI-generated code refers to code snippets or entire functions produced by Machine Learning models trained on vast datasets. While these models can enhance developer productivity by providing quick solutions, they often lack the nuanced understanding of security implications inherent in manual coding practices. ... Establishing secure pipelines is the backbone of any resilient development strategy. When code flows rapidly from development to production, every step becomes a potential entry point for vulnerabilities. Without careful controls, even well-intentioned automation can allow flawed or insecure code to slip through, creating risks that may only surface once the application is live. A secure pipeline ensures that every commit, every integration, and every deployment undergo consistent security scrutiny, reducing the likelihood of breaches and protecting both organizational assets and user trust. Security in the pipeline begins at the earliest stages of development. By embedding continuous testing, teams can identify vulnerabilities before they propagate, identifying issues that traditional post-development checks often miss. This proactive approach allows security to move in tandem with development rather than trailing behind it, ensuring that speed does not come at the expense of safety. 


The New Role of Enterprise Architecture in the AI Era

Traditional architecture assumes predictability in which once the code has shipped, systems behave in a standard way. On the contrary, AI breaks that assumption completely, given that the machine learning models continuously change as data evolves and model performance keeps fluctuating as every new dataset gets added. ... Architecture isn’t just a phase in the AI era; rather it’s a continuous cycle that must operate across various interconnected stages that follow well-defined phases. This process starts with discovery, where the teams assess and identify AI opportunities that are directly linked to the business objectives. Engage early with business leadership to define clear outcomes. Next comes design, where architects create modular blueprints for data pipelines and model deployment by reusing the proven patterns. In the delivery phase, teams execute iteratively with governance built in from the onset. Ethics, compliance and observability should be baked into the workflows, not added later as afterthoughts. Finally, adaptation keeps the system learning. Models are monitored, retrained and optimized continuously, with feedback loops connecting system behavior back to business metrics and KPIs (key performance indicators). When architecture operates this way, it becomes a living ecosystem that learns, adapts and improves with every iteration.


Quenching Data Center Thirst for Power Now Is Solvable Problem

“Slowing data center growth or prohibiting grid connection is a short-sighted approach that embraces a scarcity mentality,” argued Wannie Park, CEO and founder of Pado AI, an energy management and AI orchestration company, in Malibu, Calif. “The explosive growth of AI and digital infrastructure is a massive engine for economic, scientific, and industrial progress,” he told TechNewsWorld. “The focus should not be on stifling this essential innovation, but on making data centers active, supportive participants in the energy ecosystem.” ... Planning for the full lifecycle of a data center’s power needs — from construction through long-term operations — is essential, he continued. This approach includes having solutions in place that can keep facilities operational during periods of limited grid availability, major weather events, or unexpected demand pressures, he said. ... The ITIF report also called for the United States to squeeze more power from the existing grid without negatively impacting customers, while also building new capacity. New technology can increase supply from existing transmission lines and generators, the report explained, which can bridge the transition to an expanded physical grid. On the demand side, it added, there is spare capacity, but not at peak times. It suggested that large users, such as data centers, be encouraged to shift their demand to off-peak periods, without damaging their customers. Grids do some of that already, it noted, but much more is needed.


A Waste(d) Opportunity: How can the UK utilize data center waste heat?

Walking into the data hall, you are struck by the heat resonating from the numerous server racks, each capable of handling up to 20kW of compute. However, rather than allowing this heat to dissipate into the atmosphere, the team at QMUL had another plan. Instead, in partnership with Schneider Electric, the university deployed a novel heat reuse system. ... Large water cylinders across campus act like thermal batteries, storing hot water overnight when compute needs are constant, but demand is low, then releasing it in the morning rush. As one project lead put it, there is “no mechanical rejection. All the heat we generate here is used. The gas boilers are off or dialed down - the computing heat takes over completely.” At full capacity, the data center could supply the equivalent of nearly 4 million ten-minute showers per year. ... Walking out, it’s easy to see why Queen Mary’s project is being held up as a model for others. In the UK, however, the project is somewhat of an oddity, but through the lens of QMUL you can see a glimpse of the future, where compute is not only solving the mysteries of our universe but heating our morning showers. The question remains, though, why data center waste heat utilization projects in the UK are few and far between, and how the country can catch up to regions such as the Nordics, which has embedded waste heat utilization into the planning and construction of its data center sector.


Redefining cyber-resilience for a new era

The biggest vulnerability is still the human factor, not the technology. Many companies invest in expensive tools but overlook the behaviour and mindset of their teams. In regions experiencing rapid digital growth, that gap becomes even more visible. Phishing, credential theft and shadow IT remain common ways attackers gain access. What’s needed is a shift in culture. Cybersecurity should be seen as a shared responsibility, embedded in daily routines, not as a one-time technical solution. True resilience begins with awareness, leadership and clarity at all levels of the organisation. ... Leaders play a crucial role in shaping that future. They need to understand that cybersecurity is not about fear, but about clarity and long-term thinking. It is part of strategic leadership. The leaders who make the biggest impact will be the ones who see cybersecurity as cultural, not just technical. They will prioritise transparency, invest in ethical and explainable technology, and build teams that carry these values forward. ... Artificial Intelligence is already transforming how we detect and respond to threats, but the more important shift is about ownership. Who controls the infrastructure, the models and the data? Centralised AI, controlled by a few major companies, creates dependence and limits transparency. It becomes harder to know what drives decisions, how data is used and where vulnerabilities might exist.


Building Your Geopolitical Firewall Before You Need One

In today’s world, where regulators are rolling out data sovereignty and localization initiatives that turn every cross-border workflow into a compliance nightmare, this is no theoretical exercise. Service disruption has shifted from possibility to inevitability, and geopolitical moves can shut down operations overnight. For storage engineers and data infrastructure leaders, the challenge goes beyond mere compliance – it’s about building genuine operational independence before circumstances force your hand. ... The reality is messier than any compliance framework suggests. Data sprawls everywhere, from edge, cloud and core to laptops and mobile devices. Building walls around everything does not offer true operational independence. Instead, it’s really about having the data infrastructure flexibility to move workloads when regulations shift, when geopolitical tensions escalate, or when a foreign government’s legislative reach suddenly extends into your data center. ... When evaluating sovereign solutions, storage engineers typically focus on SLAs and certifications. However, Oostveen argues that the critical question is simpler and more fundamental: who actually owns the solution or the service provider? “If you’re truly sovereign, my view is that you (the solution provider) are a company that is owned and operated exclusively within the borders of that particular jurisdiction,” he explains.


The 5 elements of a good cybersecurity risk assessment

Companies can use a cybersecurity risk assessment to evaluate how effective their security measures are. This provides a foundation for deciding which security measures are important — and which are not. But also for deciding when a product or system is secure enough and additional measures would be excessive. When they’ve done enough cybersecurity. However, not every risk assessment fulfills this promise. ... Too often, cybersecurity risk assessments take place solely in cyberspace — but this doesn’t allow meaningful prioritizing of requirements. “Server down” is annoying, but cyber systems never exist for their own sake. That’s why risk assessments need a connection to real processes that are mission critical for the organization — or perhaps not. ... Without system understanding, there is no basis for attack modeling. Without attack modeling, there is no basis for identifying the most important requirements. It shouldn’t really be cybersecurity’s job to create system understanding. But since there is often a lack of documentation in IT, OT, or for cyber systems in general, cybersecurity is often left to provide it. And if cybersecurity is the first team to finally create an overview of all cyber systems, then it’s a result that is useful far beyond security risk assessment. ... Attack scenarios are a necessary stepping stone to move your thinking from systems and real-world impacts to meaningful security requirements — no more and no less. 


Finding Strength in Code, Part 2: Lessons from Loss and the Power of Reflection

Every problem usually has more than one solution. The engineers who grow the fastest are the ones who can look at their own mistakes without ego, list what they’re good at and what they're not, and then actually see multiple ways forward. Same with life. A loss (a pet, a breakup, whatever) is a bug that breaks your personal system. ... Solo debugging has limits. On sprawling systems, we rally the squad—frontend, backend, QA—to converge faster. Similarly, grief isn't meant for isolation. I've leaned on my network: a quick Slack thread with empathetic colleagues or a vulnerability share in my dev community. It distributes the load and uncovers blind spots you might miss on your own. ... Once a problem is solved, it is essential to communicate the solution. The list of lessons from that solution: some companies solve problems, but never put the effort into documenting the process in a way that prevents them from happening again. I know it is impossible to avoid problems, as it is impossible not to make mistakes in our lives. The true inefficiency? Skipping the "why" and "how next time." ... Borrowed from incident response, it's a structured debrief that prevents recurrence without finger-pointing. In engineering, it ensures resilience; in life, it builds emotional antifragility. There are endless flavours of postmortems—simple Markdown outlines to full-blown docs—but the gold standard is "blameless," focusing on systems over scapegoats.


Cyber resilience is a business imperative: skills and strategy must evolve

Cyber upskilling must be built into daily work for both technical and non-technical employees. It’s not a one-off training exercise; it’s part of how people perform their roles confidently and securely. For technical teams, staying current on certifications and practicing hands-on defense is essential. Labs and sandboxes that simulate real-world attacks give them the experience needed to respond effectively when incidents happen. For everyone else, the focus should be on clarity and relevance. Employees need to understand exactly what’s expected of them; how their individual decisions contribute to the organization's resilience. Role-specific training makes this real: finance teams need to recognize invoice fraud attempts; HR should know how to handle sensitive data securely; customer service needs to spot social engineering in live interactions. ... Resilience should now sit alongside financial performance and sustainability as a core board KPI. That means directors receiving regular updates not only on threat trends and audit findings, but also on recovery readiness, incident transparency, and the cultural maturity of the organization's response. Re-engaging boards on this agenda isn’t about assigning blame—it’s about enabling smarter oversight. When leaders understand how resilience protects trust, continuity, and brand, cybersecurity stops being a technical issue and becomes what it truly is: a measure of business strength.

Daily Tech Digest - November 29, 2025


Quote for the day:

"Whenever you see a successful person you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah



6 coding myths that refuse to die

A typical day as a developer can feel like you’re juggling an array (no pun intended) of tasks. You’re reading vague requirements, asking questions, reviewing designs, planning architecture, investigating bugs, reading someone else's code, writing documentation, attending standups, and occasionally, you actually get to write code. Why? Because software development is about problem-solving, not just code-producing. Real-world problems are messy. Users don’t always know what they want. Clients change their minds. Systems behave in mysterious ways. Before you even think about writing code, you often need to untangle the people-side and the process-side. ... The truth is that coding rewards persistence, curiosity, and willingness to improve far more than raw talent. Most developers I’ve worked with weren’t prodigies. They were people who kept showing up, kept asking questions, and kept refining their skills. ... Every working developer, no matter how experienced, looks up syntax constantly. We search the docs, we skim examples, we peek at old code, we search for things we’ve forgotten. Nobody expects you to memorize every keyword, operator, or built-in function. What matters in programming is the ability to break down a problem, think through the logic, and design a solution. Syntax is simply the tool you use to express that solution. It’s the grammar, not the message. So don't make this programming mistake and myth waste your time. 


The Cost of Doing Nothing: Why Unstructured Data Is Draining IT Budgets

Think of it this way: the fundamental problem contemporary enterprises have with unstructured data isn’t actually the volume they own but the lack of visibility into what exists, where it resides, who owns it, and whether it still holds value. In this context, the only alternative they have is to store everything indefinitely, including redundant, obsolete, or trivial data that serves no business purpose. The key question here, of course, is how to manage data through its lifecycle? Ideally, an effective and strategic data management process should begin by establishing a single, enterprise-wide view of unstructured data to uncover inefficiencies and risks.  ... Lifecycle management plays a central role in this, with files that have not been accessed for an extended period of time can be moved to lower-cost storage, while data that has been inactive for many years can be archived or deleted altogether. Many organizations discover that more than 60% of their stored information falls into these categories, illustrating just how much wasted capacity can be reclaimed with a policy-driven approach. ... It’s an approach that also benefits from the integration of vendor-neutral data management platforms capable of integrating data across diverse storage environments and clouds, eliminating lock-in while maintaining scalability. The outcome is greater cost control, improved compliance posture, and stronger decision-making foundations across the enterprise.


Agentic AI is supercharging the deepfake crisis: How companies can take action

As agentic AI propels fraud to a whole new level, the best way to keep your company secure is by fighting fire with fire, or in this case, AI with AI. To do so, companies need to implement multi-layered AI defense strategies that make it exponentially harder for bad actors to succeed. Enterprises can’t rely on traditional verification methods that add more layers of friction or collect more personal data as that would deter customers. Instead, businesses need to rethink digital identity protection to reduce fraud and fraud-related losses, but to also preserve customer trust and digital engagement. To achieve this, organizations’ defense systems should contextualize individual actions, granularly isolate scopes of impact, and rely on ongoing reassessments of authorization. In other words, a highly secure system doesn’t just check a user’s identity once but continuously evaluates what the user is doing, where they are doing it, and why they are doing it. ... Using layered risk signals throughout the lifecycle of users—not just during onboarding— can provide companies with detailed information on potential risks, especially from internal sources like employees who can be fouled or whose access can be hijacked to compromise a company’s key assets. Companies can continuously check the reputation of users’ email addresses, phone numbers, and IP addresses to see if any of those channels have previously been used for fraudulent activity, identifying fraud rings that are deploying AI agents at scale. 


Cyber resilience, AI & energy shape IT strategies for 2026

The historical approach - that of considering cyber resilience as a stand-alone issue, where one vendor can protect an entire company - will be put to bed. Organisations will move away from using point solutions and embrace the wider ecosystem of options as understanding grows that they can't go it alone. An interconnected framework can help prevent a ripple effect when an attack happens - users should be able to identify and halt an attack in progress. The rate and scale of attacks will continue and having a properly integrated framework is vital to mitigate risk and speed up recovery. ... As AI inference workloads are becoming part of the production workflow, organisations are going to have to ensure their infrastructure supports not just fast access but high availability, security and non-disruptive operations. Not doing this will be costly both from a results perspective and an operational perspective in terms of resource (GPUs) utilisation. ... By 2026, organisations will face a new problem: accounts and credentials that belong to people no longer with the company, but which still look and act like insiders. As HR and IT systems become more automated, old identities are easily missed. Accounts from former employees, departed contractors, and dormant service bots will linger in cloud environments and company software. Attackers will exploit these 'digital ghosts' because they appear legitimate, bypass automated offboarding, and blend in with normal system activity.


Enterprises are neglecting backup plans, and experts warn it could come back to haunt them

Crucially, only 45% consistently follow the ‘3-2-1’ backup rule - three copies of data, stored on two different media types, with one copy kept off-site. The same number are failing to keep tamper-proof copies by using immutability across all their organizational backup data to ensure resilience against cyber attacks. ... "Most organizations now recognize the need to identify phishing scams or social engineering tactics; however, we can’t lose sight of what to do when disaster does strike. While complete prevention is near impossible, assurance of rapid recovery is fully within organizational control," he said. "Our research shows that UK organizations still aren’t taking adequate precautions when it comes to data backups. By storing data on immutable platforms, they can ensure business-critical information remains beyond the reach of adversaries and that operations stay up and running, even when systems are compromised." ... Backup strategies are now front of mind for many IT professions, alternative research shows. A survey from Kaseya earlier this year found 30% are losing sleep over lackluster backup and recovery strategies, with some pushing for a stronger focus on this area. Complacency was also identified as a recurring problem for many enterprises, according to Kaseya. Nearly two-thirds (60%) of respondents said they believed they could fully recover from a data loss incident in the space of a day.


Ransomware Moves: Supply Chain Hits, Credential Harvesting

Attack volume remains high. The quantity of victims listed across ransomware groups' data leak sites increased by one-third from September to October, says a report from cybersecurity firm Cyble. Groups listing the most victims included high-fliers Qilin and Akira, newcomer Sinobi - which only appeared in July - and stalwarts INC Ransom and Play. ... After a run of attacks targeting zero-day flaws in managed file transfer software, the group used the same strategy against Oracle E-Business Suite versions 12.2.3 through 12.2.14 to steal data. Clop appears to have targeted two zero-day vulnerabilities, "both of which allow unauthenticated access to core EBS components," giving the group "a fast and reliable entry point, which explains the scale of the campaign," said cybersecurity firm SOCRadar. Oracle issued updates fixing both of those flaws. Data theft tied to that campaign appeared to begin by August, although it didn't come to light until Clop revealed it ... One of the big reasons for ransomware's success has been cryptocurrency, which makes it easier for groups to monetize and cash out their attacks. Another has been the rise of the ransomware-as-a-service business model. This allows for specialization: operators can develop malware and shake down victims, while affiliated business partners focus on hacking, rather than malware development, with both reaping the rewards. Every time a victim pays a ransom, the industry standard is for an affiliate to keep 70% to 80%.


Essential 2026 skills that DevOp leaders need to prioritize

It may sound radical, but you should prepare for a future where DevOps professionals will no longer need to learn programming languages. The DevOps role will shift up more than most people expect, enabling your team members to become supervisory architects rather than hands-on coders. ... DevOps professionals will no longer need to rely on programming languages. Instead, they will use natural language to supervise and orchestrate processes across requirements, planning, development, testing, and deployment. This leads to the elimination of hand-offs between teams and a significant blurring of traditional roles. ... However, for this shift-up to be truly successful and safe in practice, that foundational knowledge of software engineering principles remains vital. Without understanding the why behind what you are asking AI to do, your team cannot evaluate the quality of the output. This lack of evaluation can easily lead to significant risks, such as vulnerabilities that result in security breaches. In the age of AI, human judgment remains as important as ever, but only if it’s informed by a deep understanding of what the AI is being asked to produce. ... As a leader, your challenge is to guide your organization through this transformative period. The future of software development isn’t about AI replacing humans; it’s about AI empowering humans to perform at a higher, more strategic level. 


Building the Future: AI’s Role in Enterprise Evolution

The biggest obstacle we see for AI adoption isn't the technology itself, but the lack of clarity on the purpose for using it. The most critical part of any AI initiative is to understand why you want to use AI and how it can enhance your organisation’s unique attributes. There is no one-size-fits-all approach, since what works for one organisation may not work for others. A healthcare business needs data privacy for patient records, while a small startup’s goal is agility to release new product and sign new deals. These use cases will require different infrastructure investments and most workloads are not suited to the public cloud. ... Consider AI with a broader view, beyond just the technology itself. Dell approaches AI with three distinct perspectives in mind: the business side, the technical side and the people side. GenAI will provide a 20-30 per cent increase in productivity, eliminating mundane tasks and freeing people to focus on higher value work. Your employees are now available to use that extra time to reimagine processes and outcomes, creating value and efficiencies for the company.
From a people standpoint, the demand for curious, smart, adaptable employees will skyrocket. ... Many of our customers are in the early stages of their AI journey, experimenting with basic applications. Small and basic can have a big impact, so keep pushing forward. It's worth starting with pilot projects as they give you room to test and experiment with an application. 


We Need to Teach the ‘Inuit’ Mindset to Young Computing Engineers

Becoming accustomed to over-provisioned resources has brought further concerns. The decreasing cost of hardware encourages a certain complacency: if a code is inefficient in memory or CPU usage, one tends to trust that a more powerful machine or extra memory will solve the problem. ... This mindset contrasts with the traditional discipline of programming education, in which every instruction and every byte mattered, and optimization was an essential part of the computer science student’s training. The point here is that even while leveraging the benefits offered by AI in programming, an excessive dependence on AI-generated solutions and the over-provisioning of resources can undermine the proper development of computational, logical, and algorithmic thinking in future programmers or computing scientists. ... It is important to clarify that this is not about rejecting the use of AI and reverting to a former era of computing. Instead, we should integrate the best of both worlds. We must harness the tremendous potential of AI while instilling in students the ability to evaluate and improve solutions using their own sound judgement. As a direct consequence, a well-trained programmer will think twice before accepting an AI-generated solution if it uses resources disproportionately or does not guarantee adequate resilience when execution scenarios change drastically. 


Your Platform is Not an Island: Embracing Evolution in Your Ecosystem

The challenges facing smaller organizations versus larger organizations are really quite different, and the very requirement for a platform is typically indicative of you having multiple teams, so you probably don't really need a platform in a startup, particularly if you've got one 10-star full-stack developer wearing all of those hats. ... On-premises dependencies for your app will increase the number of interfaces and contributes to what we lovingly call application sprawl, and overly distributed architectures. The more teams that you have, the more people that you're probably going to need to speak to, and unfortunately, that means an increased number of working practices, and probably it's going to be far harder to reach any kind of consensus. If you work in a large organization, I'm sure that will resonate with you. ... The more features that you try to predict ahead of time, the more you risk building something that your customers actually don't want. The more minimal your MVP, the more likely your customers will see it as a motel, not a hotel. ... Developers still needed infrastructure knowledge, when we'd kind of sold that vision that they wouldn't need any, they would need little baseline understanding of Kubernetes. Integration with other legacy services across the organization, because they weren't designed by us and didn't always have APIs, was a little bit clunky. 

Daily Tech Digest - November 28, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain



Security researchers caution app developers about risks in using Google Antigravity

“In Antigravity,” Mindgard argues, “’trust’ is effectively the entry point to the product rather than a conferral of privileges.” The problem, it pointed out, is that a compromised workspace becomes a long-term backdoor into every new session. “Even after a complete uninstall and re-install of Antigravity,” says Mindgard, “the backdoor remains in effect. Because Antigravity’s core intended design requires trusted workspace access, the vulnerability translates into cross-workspace risk, meaning one tainted workspace can impact all subsequent usage of Antigravity regardless of trust settings.” For anyone responsible for AI cybersecurity, says Mindguard, this highlights the need to treat AI development environments as sensitive infrastructure, and to closely control what content, files, and configurations are allowed into them. ... Swanda recommends that app development teams building AI agents with tool-calling: assume all external content is adversarial. Use strong input and output guardrails, including tool calling; Strip any special syntax before processing; implement tool execution safeguards. Require explicit user approval for high-risk operations, especially those triggered after handling untrusted content or other dangerous tool combinations; not rely on prompts for security. System prompts, for example, can be extracted and used by an attacker to influence their attack strategy. 


How AI Is Rewriting The Rules Of Work, Leadership, And Human Potential

When a CEO tells his team, "AI is coming for your jobs, even mine," you pay attention. It is rare to hear that level of blunt honesty from any leader, let alone the head of one of the world's largest freelance platforms. Yet this is exactly how Fiverr co-founder and CEO Micha Kaufman has chosen to guide his company through the most significant technological shift of our lifetimes. His blunt assessment: AI is coming for everyone's jobs, and the only response is to get faster, more curious, and fundamentally better at being human. ... We're applying AI to existing workflows and platforms, seeing improvements, but not yet experiencing the fundamental restructuring that's coming. "It is mostly replacing the things we used to do as human beings, acting as robots," Kaufman observes. The repetitive tasks, the research gathering, the document summarizing, these elements where humans brought judgment but little humanity are being automated first. ... It's not enough to use the obvious AI tools in obvious ways. The real value emerges from those who push boundaries, combine systems creatively, or bring exceptional judgment to AI-assisted workflows. Kaufman points to viral videos created with advanced AI tools, noting that their quality stems not from the AI itself but from the operator's genius, experience, creativity, and taste developed over years.


How ‘digital twins’ could help prevent cyber-attacks on the food industry

A digital twin is a virtual replica of any product, process, or service, capturing its state, characteristics, and connections with other systems throughout its life cycle. The digital twin will include the computer system used by the company. It can help because conventional defences are increasingly out of step with cyber-attacks. Monitoring tools tend to detect anomalies after damage occurs. Complex computer systems can often obscure the origins of breaches. A digital twin creates a bridge between the physical and digital worlds. It allows organisations to simulate real-time events, predict what might happen next, and safely test potential responses. It can also help analyse what happened after a cyber-attack to help companies prepare for future incidents. ... A digital twin might be able to avert disaster under this scenario. By combining operational data such as temperature, humidity, or the speed air of flow with internal computing system data or intrusion attempts, digital twins offer a unified view of both system performance and cybersecurity. They enable organisations to simulate cyber-attacks or equipment failures in a safe, controlled digital environment, revealing vulnerabilities before attackers can exploit them. A digital twin can also detect abnormal temperature patterns, monitor the system for malicious activity, and perform analysis after a cyber-attack to identify the causes.


Why password management defines PCI DSS success

When you dig into real incidents involving payment data, a surprising number come down to poor password hygiene. PCI DSS v4.0 raised the bar for authentication, and the responsibility sits with security leaders to turn those requirements into workable daily habits for users and admins. ... Requirement 8 asks organizations to verify the identity of every user with strong authentication, make sure passwords and passphrases meet defined strength rules, prevent credential reuse, limit attempts, and store credentials securely. Passwords need to be at least 12 characters long, or at least 8 characters when a system cannot support longer strings. These rules line up with guidance from NIST SP 800 63B, which recommends longer passphrases, resistance against common word lists and hashing methods that protect stored secrets. ... PCI DSS requires that access be traceable to an individual and that shared accounts be minimized and controlled. When passwords live across multiple channels, it becomes nearly impossible to show auditors reliable evidence of access history. Even if the team is trying hard, the workflow itself creates gaps that no policy document can fix. ... Some CISOs view password managers as convenience tools. PCI DSS v4.0 shows that they are closer to compliance tools because they make it possible to enforce identity controls across an organization.



AI fluency in the enterprise: Still a ‘horseless carriage’

Companies are tossing AI agents onto existing processes, but a transformative change — where AI is the boss — is still far away. That was the view of IT leaders at this year’s Microsoft Ignite conference who’ve been putting AI agents to work, mostly with legacy processes. The IT leaders discussed their efforts during a conference panel at the event earlier this month. “We’re probably living in some version of the horseless carriage — we haven’t got to the car yet,” said John Whittaker, director of AI platform and products at accounting and consulting firm EY. ... Pfizer is very process-centric, he said, stressing that the goal is not to reinvent processes right out of the gate. The company is analyzing how AI works for them, gaining confidence in the technology before reorganizing processes within the AI lens. “Where we’re definitely heading … is thinking about, ‘I’ve solved this process, I’ve been following exactly the way it exists today. Now let’s blow it up and reimagine it…’ — and that’s exciting,” he said. ... Lumen is now looking at where it wants the business to be in 36 months and linking it to AI agents and AI-native plans. “We’re … working back from that and ensuring that we have the right set of tools, the right set of training, and the right set of agents in order to enable that,” he said. Every new Lumen employee in Alexander’s connected ecosystem group gets a Copilot license. The technology has helped speed up the process of understanding acronyms and historical trends within the company.


Creating Impactful Software Teams That Continuously Improve

When you are a person who prefers your job to be strictly defined, with clear boundaries, then you feel supported instead of stifled by a boss who checks in on you regularly. In the same culture, you will feel relaxed, happy, and content, which will in turn allow you to bring your best to your job and deliver to your strengths, Žabkar Nordberg said. You do not want to have employees who will be extensions of yourself, Žabkar Nordberg said. Instead, you want people who will bring their own thoughts, their own solutions, and in many ways be different and better than yourself. ... Provide guidance, step away, and let people have autonomy within those constraints. You might say something like "I would like you to focus on improving our customer retention. Be aware that legal regulations require all steps in our current onboarding journey to be present, but we have flexibility in how we execute them as the user experience is not prescribed". This gives people guidance and focuses them, but still gives them the autonomy to bring their own experiences and find their own solutions. ... We want people to show initiative and proactively bring their own thoughts, improvements, and worries. Clear communication and an understanding of how people work will help them do that, Žabkar Nordberg said. Psychological safety underlines trust, autonomy, and communication; it is required for them to work effectively, he concluded.


Empathetic policy engineering: The secret to better security behavior and awareness

Insecure behavior is often blamed on users, when the problem often lies in the measure itself. In IT security research, the focus is often on individual user behavior — for example, on whether secure behavior depends on personality traits. The question of how well security measures actually fit the reality of work — that is, how likely they are to be accepted in everyday practice — is neglected. For every threat, there are usually several available security measures. But differences in effort, acceptance, compatibility, or complexity are often not taken into account in practice. Instead, security or IT departments often make decisions based solely on technical aspects. ... Safety measures and guidelines are often communicated in a way that doesn’t resonate with users’ work reality because they don’t aim to engage employees and motivate them: for example, through instructions, standard online training, or overly playful formats like comics that employees don’t take seriously. ... The limited success of many security measures is not solely due to the users — often it’s unrealistic requirements, a lack of involvement, and inadequate communication. For security leaders, this means: Instead of relying on education and sanctions, a strategic paradigm shift is needed. They should become a kind of empathetic policy architect whose security strategy not only works technically but also resonates on a human level.


Agentic AI is not ‘more AI’—it’s a new way of running the enterprise

Agentic AI marks a shift from simply predicting outcomes or offering recommendations to systems that can plan tasks, take actions and learn from the results within defined guardrails. In practical terms, this means moving beyond isolated, single-task copilots towards coordinated “swarms” of agents that continually monitor signals, trigger workflows across systems, negotiate constraints and complete loops with measurable outcomes. ... A major barrier is trust and control. Leaders remain cautious about allowing software to take autonomous actions. Graduated autonomy provides a path forward: beginning with assistive tools, moving to supervised autonomy with reversible actions and eventually deploying narrow, fully autonomous loops when KPIs and rollback mechanisms have been validated. Lack of clarity on value is another obstacle. Impressive demonstrations do not constitute a strategy. Organisations should use a jobs-to-be-done perspective and tie each agent to a specific financial or risk objective, such as days-sales-outstanding, mean time to resolution, inventory turns or claims leakage. Analysts have warned that many agentic initiatives will be cancelled if value remains vague, so clear scorecards and time-boxed proofs of value are essential. Data readiness is a further challenge. Weak lineage, uncertain ownership and inconsistent quality stop AI scaling efforts in their tracks.


6 strategies for CIOs to effectively manage shadow AI

“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.” ... “The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring. ... “Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. ... “Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories, what’s permitted, what needs approval, and what’s prohibited, and communicating that framework through leadership briefings, onboarding, and internal portals. ... Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.


It’s Time to Rethink Access Control for Modern Development Environments

When faced with the time-consuming complexity of managing granular permissions across dozens of development tools, most VPs of Engineering and CTOs opt for the path of least resistance, granting broad administrative privileges to entire engineering teams. It’s understandable from a productivity standpoint; nobody wants to be a bottleneck when a critical release is imminent, or explain to the CEO why they missed a market window because a developer couldn’t access a repository. However, when everyone has admin privileges, attackers who gain access to just one set of credentials can do tremendous damage. They gain not just access to sensitive code and data, but the ability to manipulate build processes, insert malicious code, or establish persistent backdoors. This problem becomes even more dangerous when combined with the prevalence of shadow IT, non-human identities, and contractor relationships operating outside your security perimeter. ... The answer to stronger security that doesn’t hinder developer productivity lies in implementing just-in-time permissioning within the SDLC, a concept successfully adopted from cloud infrastructure management that can transform how we handle development access controls. The approach is straightforward: instead of granting permanent administrative access to everyone, take 90 days to observe what developers actually need to do their jobs, then right-size their permissions accordingly.