
Daily Tech Digest - January 15, 2026


Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang


AI agents can talk — orchestration is what makes them work together

“Agent-to-agent communications is emerging as a really big deal,” G2’s chief innovation officer Tim Sanders told VentureBeat. “Because if you don't orchestrate it, you get misunderstandings, like people speaking foreign languages to each other. Those misunderstandings reduce the quality of actions and raise the specter of hallucinations, which could be security incidents or data leakage.” ... In another critical evolution in the agentic era, human evaluators will become designers, moving from human-in-the-loop to human-on-the-loop, according to Sanders. That is: They will begin designing agents to automate workflows. Agent builder platforms continue to innovate their no-code solutions, Sanders said, meaning nearly anyone can now stand up an agent using natural language. “This will democratize agentic AI, and the super skill will be the ability to express a goal, provide context and envision pitfalls, very similar to a good people manager today.” ... Organizations should begin “expeditious programs” to infuse agents across workflows, especially with highly repetitive work that poses bottlenecks. Likely at first, there will be a strong human-in-the-loop element to ensure quality and promote change management. “Serving as an evaluator will strengthen the understanding of how these systems work,” Sanders said, “and eventually enable all of us to operate upstream in agentic workflows instead of downstream.”


Integrating AI-Enhanced Microservices in SAFe 5.0 Framework

AI-driven microservices can be a game-changer for Lean Portfolio Management within SAFe. By optimizing decision analytics and enhancing value stream performance, AI simplifies, rather than complicates. I know what you’re thinking: AI tools can add complexity. One client put this to the test, and we found AI helped reduce the noise. It sliced through the data smog to identify hidden value streams and automate mundane tasks like financial forecasting and risk management. ... Integrating decentralized AI models into SAFe’s ARTs can significantly enhance their autonomy. During a high-stakes project, we shifted from a centralized to a decentralized model, which allowed ARTs to self-optimize and adapt to shifting priorities seamlessly. It was like giving ARTs a brain of their own. Decentralized AI models reduce the bottlenecks you'd typically encounter in centralized systems. Think of the ARTs as small startups within the larger enterprise ecosystem, each capable of making swift, informed decisions. ... This isn’t just a tech enthusiast's dream—it's an emerging reality. The maturity of AI technologies spells a future where enterprises aren’t just keeping up; they’re setting the pace. So, if there’s a single, actionable insight to glean from my journey, it’s this: enterprises need to actively pursue cross-industry collaborations, invest in AI-powered microservices, and hone their Agile professionals’ skill sets.


Incorporating Geopolitical Risk Into Your IT Strategy

IT organizations know how to plan for unexpected outages, but even the most rigorously designed strategy is vulnerable to the shifting winds of geopolitics. CIOs and technology leaders need to know how their organizations will respond to geopolitical disruptions, and scenario planning needs to be a priority. ... "The IT department can treat geopolitical disruption as an expected operational variable rather than an unforeseen catastrophe. Good and tested enterprise risk management frameworks, investment in government affairs partnerships and ongoing board engagement should start to manage and prepare for this," Dixon said. CIOs need to do scenario modeling around the risks facing their enterprise, and evaluate how IT is teaming with business units, security teams and the CISO on a cohesive tech strategy that builds security, including artificial intelligence security, in from the ground up, said Sean Joyce ... "You're as strong as your weakest link," Joyce said. "As geopolitical risk becomes more prominent, you're going to see tools like cyber being leveraged by countries, particularly those that don't have stronger military or other capabilities. For some, it may be the only tool they can leverage." Physical infrastructure, geography and power supplies are also now areas of risk CIOs need to consider, and infrastructure strategy must align with sustainability, energy realities and geopolitical stability. 


Six Architecture Challenges for Startups

The risk is not that the first version is imperfect; that is inevitable. The risk is that the team keeps layering new functionality on top of an accidental architecture. At some point, the cost of change becomes so high that every small modification feels dangerous. The architectural challenge is to intentionally decide where to accept debt and where to invest in structure. Startups need a minimal set of principles – for example, clear domain boundaries, basic API hygiene, and a simple deployment model – that allow speed without locking the product into a dead end. ... If the product team is still validating pricing models, redefining the customer journey, or experimenting with different verticals, any rigid decomposition can turn into friction. Yet avoiding boundaries altogether leads to a “big ball of mud” that is equally hard to evolve. A practical approach is to use provisional boundaries based on current value streams – onboarding, transaction processing, analytics, etc. – and treat them as hypotheses. The challenge is not to find the perfect structure from day one, but to keep those boundaries explicit and adjustable as the business model evolves. ... Startups must make conscious decisions about where they are comfortable being tightly coupled to a provider and where they need portability. That requires viewing cloud services through a business lens: What is strategic IP, what is replaceable, and what is pure commodity? Aligning these categories with architectural choices is a non-trivial design challenge, not just a procurement decision. 


Platform-as-a-Product: Declarative Infrastructure for Developer Velocity

Without centralized guardrails, teams often compensate by over-allocating resources "to be safe", leading to inconsistent environments and unnecessary cloud spend that is only discovered after deployment. ... What is missing is a developer-friendly abstraction that brings these related concerns together. Developers need a way to express intent (not only what infrastructure is required, but also how the application should be built, deployed, configured across environments, secured, and sized) without having to implement the mechanics of each underlying system. From a platform engineering perspective, this abstraction represents the core of an internal developer platform and can be implemented as a lightweight Python-based platform framework. ... The platform comprises several interconnected components. GitLab pipelines coordinate everything, pulling code from repositories, building and unit testing applications (with tests written by developers), checking security, creating cloud infrastructure with Terraform/IaC, and deploying to Kubernetes clusters with Puppet configuration management. The configuration YAML file controls all of this, telling each component what to do. The architecture clearly separates concerns: the CI pipeline handles code building, testing, and vulnerability scanning. The CD pipeline handles deployment: creating cloud resources, updating Kubernetes, and configuring environments.
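The intent-driven abstraction described above can be pictured with a minimal Python sketch. Everything here is hypothetical illustration, not the actual framework: the config fields, stage names, and the `plan_pipeline` helper are invented to show how a declarative config might be translated into pipeline stages.

```python
# Hypothetical declarative app config, as a Python dict standing in for the
# YAML file the article describes. All field names are invented.
APP_CONFIG = {
    "name": "orders-service",
    "build": {"dockerfile": "Dockerfile", "run_tests": True},
    "deploy": {"replicas": 2, "memory": "512Mi", "environments": ["staging", "prod"]},
    "security": {"scan": True},
}

def plan_pipeline(config):
    """Translate developer intent into an ordered list of pipeline stages."""
    stages = ["build"]                       # CI: always build the artifact
    if config["build"].get("run_tests"):
        stages.append("unit-test")           # CI: developer-written tests
    if config.get("security", {}).get("scan"):
        stages.append("vulnerability-scan")  # CI: security check
    # CD: one deploy stage per declared environment
    stages += [f"deploy:{env}" for env in config["deploy"]["environments"]]
    return stages

print(plan_pipeline(APP_CONFIG))
# → ['build', 'unit-test', 'vulnerability-scan', 'deploy:staging', 'deploy:prod']
```

In a real platform, each planned stage would be handed off to the underlying systems (GitLab CI, Terraform, Kubernetes); the point of the sketch is only that the developer declares intent and the framework owns the mechanics.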


(Re)introducing Adaptive Business Continuity

Adaptive BC is designed to provide a framework that delivers better outcomes when organizations deal with losses. The result may be a reduction in documentation (something I greatly favor) but that is not a stated goal. ... My experience over the years has led me to conclude that trying to define priorities for the resumption of services is wasted effort. Many activities can take place in parallel, and priorities will change when disasters occur. A perfect example is the governmental lockdowns and health authority mandates that followed the emergence of COVID. The result is that demand for products and services changed drastically, upending previous priorities. Priorities may be defined following adaptive principles, but it is not at all a stated component of the Adaptive framework. ... For a number of reasons, I would like to see the word “plan” used a lot less within our profession. Seeing the word “strategy” in its place would be a step in the right direction. Strategy improvement is not, however, a key outcome of Adaptive BC efforts. There is some benefit to having clearly defined recovery strategies, but strategies only provide benefit to competent and empowered teams armed with the resources they need to carry out the mission. For this reason, I always emphasize the importance of focusing efforts on capabilities and consider plans and strategies as little more than supporting tools for any business continuity program. The improvement of strategies and/or plans is simply not an expected outcome of Adaptive BC work.


Exactly What To Automate With AI In 2026 For Faster Business Growth

Most founders automate the wrong things. They start with the flashy stuff, the complicated tools and fancy dashboards, while ignoring the repetitive tasks quietly draining their hours. But faster, cleaner growth comes from removing friction in the activities that actually grow your business. ... You shouldn't embark on a day's worth of admin tasks every time a new client says yes. It will only slow you down. Make it easy for them to pay, get a receipt, complete an onboarding form, and submit the required information. On your end, have the Google Drive folders, follow-up emails, and team briefings set up without you lifting a finger. Question everything you currently do manually. There is no reason it couldn't be an AI agent handling the sequence. All the tools you pay for already have integrations with each other; you're just not using them. The goal is that you could sign client after client because onboarding takes minutes, not hours. ... AI-generated content is awful when you use it wrong. But that doesn't mean you shouldn't involve AI in your content production process. Content still matters in marketing, whether long-form articles, videos, or social media visuals. You need to be part of the conversation, but only with relevant, authentic material. You cannot outproduce everyone manually, so use automations and retain your human genius for the finishing touches. ... The more your life admin runs on autopilot, the more you free up time and energy for your business.


What is AI fuzzing? And what tools, threats and challenges generative AI brings

The way traditional fuzzing works is you generate a lot of different inputs to an application in an attempt to crash it. Since every application accepts inputs in different ways, that requires a lot of manual setup. Security testers would then run these tests against their companies’ software and systems to see where they might fail. ... Today, generative artificial intelligence has the potential to automate this previously manual process, coming up with more intelligent tests, and allowing more companies to do more testing of their systems. ... But there’s a third angle involved here. What if, instead of trying to break traditional software, the target was an AI-powered system? This creates unique challenges because AI chatbots are not predictable and can respond differently to the same input at different times. ... AI fuzzing can also help speed up the discovery of vulnerabilities, Roy says. “Traditionally, testing was always a function of how many days and weeks you had to test the system, and how many testers you could throw at the testing,” he says. “With AI, we can expand the scale of the testing.” ... Another use of AI in fuzzing is that it takes more than a set of test cases to fully test an application — you also need a mechanism, a harness, to feed the test cases into the app, and into all the nooks and crannies of the application. “If the fuzzing harness does not have good coverage, then you may not uncover vulnerabilities through your fuzzing,” says Dane Sherrets, staff innovations architect for emerging technologies at HackerOne.
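To make the traditional workflow concrete, here is a toy mutation fuzzer in Python. The `parse` target, seed input, and crash condition are invented for illustration; real fuzzers add coverage feedback, corpus management, and much smarter mutation strategies, which is exactly the part AI is being used to improve.

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes in the seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000, rng_seed: int = 0):
    """Run the target on mutated inputs and collect the ones that crash it."""
    rng = random.Random(rng_seed)  # seeded for reproducible runs
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except Exception:          # any uncaught exception counts as a "crash"
            crashes.append(candidate)
    return crashes

# Toy target: "crashes" (raises) on inputs that start with a NUL byte.
def parse(data: bytes):
    if data[:1] == b"\x00":
        raise ValueError("unexpected NUL header")

found = fuzz(parse, seed=b"HELLO", iterations=2000)
print(f"{len(found)} crashing inputs found")
```

The manual-setup pain the article mentions lives in the `mutate` and harness code: every input format (files, network protocols, API payloads) needs its own generator, which is why LLM-assisted test generation is attractive.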


CISOs flag gaps in third-party risk management

CISOs rank third-party cyber risk among their highest-impact threats. Vendor relationships touch nearly every core business function, from cloud infrastructure and software development to data processing and AI services. Each added dependency expands the attack surface and increases the number of organizations involved in protecting sensitive systems and data. ... Only a small portion of organizations report visibility across third-, fourth-, and nth-party relationships. Most operate with partial insight limited to direct vendors or a narrow segment of the extended supply chain. CISOs say limited visibility complicates incident response, risk prioritization, and compliance planning. When a breach emerges several layers removed from a known vendor, security teams may struggle to understand exposure, timelines, and downstream impact. ... CISOs report rising regulatory scrutiny tied to third-party cyber risk. Regulatory frameworks place greater expectations on organizations to demonstrate oversight across vendor ecosystems, including indirect relationships. Only a minority of organizations feel ready to meet upcoming requirements without major changes. Most report progress underway, with further work needed to align processes, tooling, and internal coordination. Third-party risk management involves legal, procurement, compliance, and executive leadership alongside security teams. ... At the same time, AI adoption accelerates within vendor risk management itself. 


Anti-fragility – what is it and why should it be the goal for your organisation?

That ability to thrive in the face of disruption must become the basis for improved resilience. Modern organisations shouldn’t strive for survival, but for continual improvement. In the cyber sphere, that is crucial. Threat actors are constantly changing tack, targeting new CVEs, and executing increasingly complicated supply chain attacks. Resilience must therefore move in tandem as an ongoing process of learning and adapting. That is the crux of anti-fragility. It defines systems that thrive and improve from stress, volatility, disorder and shocks, rather than just resisting them. If a security model is only designed to recover, it remains just as vulnerable as before. But an anti-fragile approach actively benefits from each attack, identifying weaknesses, addressing them, and adapting as needed. ... Increasingly, organisations are recognising the value in anti-fragility as a strategy and more will adopt it next year. However, getting there means going beyond regulatory compliance. Compliance lays the foundations from which successful cybersecurity can be built, yet many currently see it as the finished structure. There are several problems with that. Security legislation frequently lags behind the threat landscape, and so the gap between a new threat emerging and a new law coming in to address it can stretch over the course of years. Organisations must therefore understand that compliance doesn’t equal protection. 

Daily Tech Digest - January 13, 2026


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



When AI Meets DevOps To Build Self-Healing Systems

Self-healing systems do not just react to events and incidents — they analyse historic data, identify early triggers or symptoms of failures, and act. For example, if a service is known to crash when it runs out of memory, a self-healing system can observe metrics like memory consumption, predict when the service may fail with very low memory, and take action to fix the issue—like restarting the service or allocating more memory—without human intervention. In AIOps, self-healing systems are powered by data science in terms of machine learning models, real-time analytics, and automated workflows. ... Self-healing systems don’t just rely on static rules and manual checks; they utilise real-time data streams and apply pattern and anomaly detection through machine learning to ascertain the state of the environment. A self-healing system is trying to gauge its own health all the time — CPU utilisation, latency, memory, throughput, traffic, security anomalies, etc — to preemptively address an impending failure. The key component of every self-healing system is a cycle that reflects the process followed by intelligent agents: Detect → Diagnose → Act. ... The integration of artificial intelligence and DevOps signifies an important change in the way modern IT systems are built, managed, and evolved. As we have discussed here, AIOps is not just an extension of a type of automation — it is changing the way operations are modelled from reactive to intelligent, self-healing ecosystems.
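The Detect → Diagnose → Act cycle for the memory-exhaustion example above can be sketched roughly as follows. The thresholds, metric names, and remediation playbook are illustrative stand-ins, not a real AIOps implementation (which would use learned models and real telemetry rather than fixed rules):

```python
MEMORY_LIMIT_MB = 512  # assumed service memory allocation

def diagnose(metrics):
    """Diagnose: map observed metrics to a known failure mode."""
    if metrics["memory_mb"] > 0.9 * MEMORY_LIMIT_MB:
        return "memory_pressure"      # nearing OOM, crash likely soon
    if metrics["error_rate"] > 0.05:
        return "elevated_errors"
    return None                       # healthy

def act(condition):
    """Act: pick a remediation for the diagnosed condition."""
    playbook = {
        "memory_pressure": "restart_service",   # or allocate more memory
        "elevated_errors": "roll_back_deploy",
    }
    return playbook[condition]

def heal(metrics):
    """One pass of the Detect -> Diagnose -> Act loop."""
    condition = diagnose(metrics)               # Detect + Diagnose
    return act(condition) if condition else None  # Act preemptively

print(heal({"memory_mb": 490, "error_rate": 0.01}))  # → restart_service
```

In production, `heal` would run continuously against streaming metrics, and the static thresholds would be replaced by anomaly-detection models trained on historic failure data.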


Building a product roadmap: From high-level vision to concrete plans

A roadmap provides the anchor to keep everyone aligned amid constant flux. Yet many organizations still treat roadmaps as static artifacts — a one-and-done exercise intended to appease executives or investors. That’s a mistake. The most effective roadmaps are living documents evolving with the product and market realities. ... If strategy defines direction, milestones are the engine that keeps the train moving. Too often, teams treat milestones as arbitrary checkpoints or internal deadlines. Done right, these can become powerful tools for motivation, alignment and storytelling. ... The best roadmaps aren’t written by PMs — they’re co-authored by teams. That’s why I advocate for bottom-up collaboration anchored in executive alignment. Before any roadmap offsite, sync with the CEO or leadership team. Understand what they care about and why. If they disagree with priorities, resolve those conflicts early. Then bring that context into a team workshop. During the session, identify technical leads — those trusted voices who can translate into action. Encourage them to pre-think tradeoffs and dependencies before the group session. ... The perfect roadmap doesn’t exist and that’s the point. Remember, the goal isn’t to build a flawless plan, but a resilient one. As President Dwight D. Eisenhower said, “Plans are useless, but planning is indispensable.” ... Vision without execution is hallucination. But execution without vision is chaos. The magic of product leadership lies in balancing both: crafting a roadmap that’s both inspiring and achievable.


Scattered network data impedes automation efforts

As IT organizations mature their network automation strategies, it’s becoming clear that network intent data is an essential foundation. They need reliable documentation of network inventory, IP address space, topology and connectivity, policies, and more. This requirement often kicks off a network source of truth (NSoT) project, which involves network teams discovering, validating, and consolidating disparate data in a tool that can model network intent and provide programmatic access to data for network automation tools and other systems. ... IT leaders do not understand the value of NSoT solutions. The data is already available, although it’s scattered and of dubious quality. Why should we spend money on a product or even extra engineers to consolidate it? “Part of the issue is that we’ve got leadership that are not infrastructure people,” said a network engineer with a global automobile manufacturer. “It’s kind of a heavy lift to get them to buy into it, because they see that applications are running fine over the network. ‘Why do I need to spend money on this?’ And we tell them that the network is running fine, but there will be failures at some point and it’s worth preventing that.” ... NSoT isn’t a magic bullet for solving the problems IT organizations have with poor network documentation and scattered operational data. Network engineering teams will need to discover, validate, reconcile, and import data from multiple repositories. This process can be challenging and time-consuming. Some of this data will be difficult to find.


What insurers expect from cyber risk in 2026

Cyber insurers are beginning to use LLMs to translate internet scale data into structured inputs for underwriting and portfolio analysis. These applications target specific pain points such as data gaps and processing delays. Broader change across pricing or risk selection remains gradual. ... AI supported workflows begin to reduce repetitive tasks across those stages. Automation supports data entry, document review, and routine verification. Human oversight remains central for judgment based decisions. The research links this shift to measurable operational effects. Fewer manual touches per claim reduce processing time and error rates. Claims teams gain capacity without proportional increases in staffing. ... Age verification and online safety legislation introduce unintended cyber risk. Requirements that reduce online anonymity create high value identity datasets that attract attackers. The research highlights rising exposure to identity based coercion, insider compromise, and extortion. Once personal identity data is leaked, attackers gain leverage that can translate into access to corporate systems. This dynamic supports long term campaigns by organized groups and state aligned actors. ... Data orchestration becomes a core capability. Insurers and reinsurers integrate signals including security posture, threat activity, and loss experience into shared models. Consistent views across teams and regions support portfolio governance. This shift places emphasis on actionability. Data value depends on timing and relevance within workflows rather than volume alone. 


Human + AI Will Define the Future of Work by 2027: Nasscom-Indeed Report

This emerging model of Humans + AI working together is reported as the next phase of transformation, where success depends on how effectively AI will augment human capabilities, empower employees, and align with organizational purpose. The report highlights that the most effective human–AI partnerships are emerging across higher-order activities such as scope definition, system architecture, and data model design. At the same time, more routine and repeatable tasks, including boilerplate code generation and unit test creation, are expected to be increasingly automated by AI over the next two to three years. ... To stay relevant in a Human + AI workplace, the report emphasizes that individuals should build capability, adaptability, and continuous learning. This includes experience with using AI tools (prompting, critical review of output, combining AI speed with human judgment), moving up the value chain (e.g., developers from coding to architecture thinking), building multidisciplinary skills (tech + domain + professional skills), and focusing on outcomes over credentials by creating repositories of work samples showing measurable impact. ... Organizations have already started taking measures to address these challenges. Seven in ten HR leaders are focusing on upskilling, and more than half on modernizing systems. With respect to AI adoption, 79% prioritize internal reskilling as a dominant strategy.


From vulnerability whack-a-mole to strategic risk operations

“Software bills of materials are just an ingredients list,” he notes. “That’s helpful because the idea is that through transparency we will have a shared understanding. The problem is that they don’t deliver a shared understanding because the expectation of anyone in security who reads the SBOM is the first job they’ll do is run those versions against vulnerability databases.” This creates a predictable problem: security teams receive SBOMs, scan them for vulnerabilities, and generate alerts for every CVE match, regardless of whether those vulnerabilities actually affect the product. ... To make SBOMs truly useful, Kreilein introduces VEX (Vulnerability Exploitability Exchange), an open standards framework that addresses the context problem. VEX provides four status messages: affected, not affected, under investigation, and fixed. “What we want to start doing is using a project called VEX that gives four possible status messages,” Kreilein explains. ... Developers aren’t refusing to patch because they don’t care about security. They’re worried that upgrading a component will break the application. “If my application is brittle and can’t take change, I cannot upgrade to the non-vulnerable version,” Kreilein explains. “If I don’t have effective test automation and integration and unit testing, I can’t guarantee that this upgrade won’t break the application.” This reframing shifts the security conversation from compliance and mandates to engineering fundamentals. Better test coverage, better reference architectures, and better secure-by-design practices become security initiatives.
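The triage logic VEX enables can be sketched in a few lines of Python. The CVE identifiers below are made up, but the four status values are the ones Kreilein names; the idea is that a VEX statement lets the consumer suppress alerts for CVE matches the supplier has already determined do not affect the product:

```python
# Supplier-issued VEX verdicts for CVEs matched against an SBOM.
# CVE IDs are invented; the four statuses come from the VEX model.
vex_statements = {
    "CVE-2024-0001": "not_affected",         # vulnerable code not reachable
    "CVE-2024-0002": "affected",
    "CVE-2024-0003": "fixed",
    "CVE-2024-0004": "under_investigation",
}

def triage(cve_matches, vex):
    """Alert only on CVEs that actually affect the product, plus unknowns
    that still need a verdict; suppress the rest instead of paging anyone."""
    alert, suppress = [], []
    for cve in cve_matches:
        status = vex.get(cve, "under_investigation")  # no statement => unknown
        if status in ("affected", "under_investigation"):
            alert.append((cve, status))
        else:  # "not_affected" or "fixed"
            suppress.append((cve, status))
    return alert, suppress

alert, suppress = triage(sorted(vex_statements), vex_statements)
print("alert:", alert)
print("suppress:", suppress)
```

Without the VEX layer, all four CVE matches would generate alerts; with it, half are suppressed with a documented justification, which is exactly the "shared understanding" an SBOM alone fails to deliver.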


AI backlash forces a reality check: humans are as important as ever

Companies are now moving beyond the hype and waking up to the consequences of AI slop, underperforming tools, fragmented systems, and wasted budgets, said Brooke Johnson, chief legal officer at Ivanti. “The early rush to adopt AI prioritized speed over strategy, leaving many organizations with little to show for their investments,” Johnson said. Organizations now need to balance AI, workforce empowerment and cybersecurity at the same time they’re still formulating strategies. That’s where people come in. ... AI is becoming less a tech problem and more of an adoption hurdle, Depa said. “What we’re seeing now more and more is less of a technology challenge, more of a change management, people, and process challenge — and that’s going to continue as those technologies continue to evolve,” he said. DXC Technology is taking a similar approach, designing tools where human insight, judgment, and collaboration create value that AI can’t deliver alone, said Dan Gray, vice president of global technical customer operations at the company. ... Companies might have to accept underutilizing some of the AI gains in the near term. AI could help workers complete their tasks in half the time and enjoy a leisurely pace. Alternately, employees might burn out quickly by getting more work. “If you try to lay them off, you don’t have a good workforce left. If you let them be, why are you paying them? So that’s a paradox,” Seth said.


Physical AI is the next frontier - and it's already all around you

Physical AI can be generally defined as AI implemented in hardware that can perceive the world around it and then reason to perform or orchestrate actions. Popular examples include autonomous vehicles and robots -- but robots that utilize AI to perform tasks have existed for decades. So what's the difference? ... Saxena adds that while humanoid robots will be useful in instances where humans don't want to perform a task, either because it is too tedious or too risky, they will not replace humans. That's where AI wearables, such as smart glasses, play an important role, as they can augment human capabilities. But beyond that, AI wearables might actually be able to feed back into other physical AI devices, such as robots, by providing a high-quality dataset based on real-life perspectives and examples. "Why are LLMs so great? Because there is a ton of data on the internet, for a lot of the contextual information and whatnot, but physical data does not exist," said Saxena. ... Given the privacy concerns that may come from having your everyday data used to train robots, Saxena highlighted that the data from your wearables should always be kept at the highest level of privacy. As a result, the data -- which should already be anonymized by the wearable company -- could be very helpful in training robots. That robot can then create more data, resulting in a healthy ecosystem. "This sharing of context, this sharing of AI between that robot and the wearable AI devices that you have around you is, I think, the benefit that you are going to be able to accrue," added Asghar.


Unlocking the Power of Geospatial Artificial Intelligence (GeoAI)

GeoAI is more than sophisticated map analytics. It is a strategic technology that blends AI with the physical world, allowing tech experts to see, understand, and act on patterns that were previously invisible. From planning sustainable cities to protecting wildlife, it’s helping experts tackle significant challenges with precision and speed. As the world generates more location-based data every day, GeoAI is becoming a must-have tool. It’s not just tech – it’s a way to make the world work better. ... To make it simpler: machine learning spots trends, computer vision interprets images, GIS organizes it all, and knowledge graphs tie it together. The result? GeoAI can take a chaotic pile of data and deliver clear answers, like telling a city where to build a new park or warning about a wildfire risk. It’s a powerhouse that’s making location-based decisions faster and smarter. In all, GeoAI is transforming the speed at which we extract meaning from complex datasets, thereby enabling us to address the Earth’s most pressing challenges. ... Though powerful, GeoAI is not without challenges. Effective implementation requires careful attention to data privacy, technical infrastructure, and organizational change management. ... Leaders who take GeoAI seriously stand to gain more than just incremental improvements. With the right systems in place, they can respond faster, make smarter decisions, and get better results from every field team in the network.


For application security: SCA, SAST, DAST and MAST. What next?

If you think SAST and SCA are enough, you’re already behind. The future of app security is posture, provenance and proof, not alerts. ... Posture is the ‘what.’ Provenance is the ‘how.’ The SLSA framework gives us a shared vocabulary and verifiable controls to prove that artifacts were built by hardened, tamper‑resistant pipelines with signed attestations that downstream consumers can trust. When I insist on SLSA Level 2 for most services and Level 3 for critical paths, I am not chasing compliance theater; I am buying integrity that survives audit and incident. Proof is where SBOMs finally grow up. Binding SBOM generation to the build that emits the deployable bits, signing them and validating at deploy time moves SBOMs from “ingredient lists” to enforceable controls. The CNCF TAG‑Security best practices v2 paper is my practical map: personas, VEX for exploitability, cryptographic verification to ensure tests actually ran, and prescriptive guidance for cloud‑native factories. ... Among the nexts, AI is the most mercurial. NIST’s final 2025 guidance on adversarial ML split threats across PredAI and GenAI and called out prompt injection in direct and indirect form as the dominant exploit in agentic systems where trusted instructions commingle with untrusted data. The U.S. AI Safety Institute published work on agent hijacking evaluations, which I treat as required red‑team reading for anyone delegating actions to tools.
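The core of "binding SBOM generation to the build" can be illustrated with a stripped-down digest check at deploy time. This is a sketch under heavy assumptions: the field names are invented, not from any real SBOM format, and a real pipeline would verify a cryptographic signature over the SBOM as well (e.g. a Sigstore attestation), not just a hash match.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# At build time: the SBOM records the digest of the exact artifact it
# describes, so it is bound to those bits rather than floating free.
artifact = b"\x7fELF...fake-binary-bytes"   # stand-in for a real build output
sbom = {"component": "orders-service", "artifact_sha256": sha256(artifact)}

def admit(artifact_bytes: bytes, sbom_doc: dict) -> bool:
    """Deploy-time gate: refuse any artifact the SBOM does not describe."""
    return sha256(artifact_bytes) == sbom_doc["artifact_sha256"]

assert admit(artifact, sbom)             # untampered artifact: admitted
assert not admit(artifact + b"X", sbom)  # modified artifact: rejected
```

The hash binding is what turns the SBOM from documentation into an enforceable control: a tampered or substituted artifact no longer matches the document that vouches for it.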

Daily Tech Digest - November 30, 2025


Quote for the day:

"The real leader has no need to lead - he is content to point the way." -- Henry Miller



Four important lessons about context engineering

Modern LLMs operate with context windows ranging from 8K to 200K+ tokens, with some models claiming even larger windows. However, several technical realities shape how we should think about context. ... Research has consistently shown that LLMs experience attention degradation in the middle portions of long contexts. Models perform best with information placed at the beginning or end of the context window. This isn’t a bug. It’s an artifact of how transformer architectures process sequences. ... Context length impacts latency and cost quadratically in many architectures. A 100K token context doesn’t cost 10x a 10K context, it can cost 100x in compute terms, even if providers don’t pass all costs to users. ... The most important insight: more context isn’t better context. In production systems, we’ve seen dramatic improvements by reducing context size and increasing relevance. ... LLMs respond better to structured context than unstructured dumps. XML tags, markdown headers, and clear delimiters help models parse and attend to the right information. ... Organize context by importance and relevance, not chronologically or alphabetically. Place critical information early and late in the context window. ... Each LLM call is stateless. This isn’t a limitation to overcome, but an architectural choice to embrace. Rather than trying to maintain massive conversation histories, implement smart context management.
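The placement advice above (critical information at the beginning and end of the window, where attention is strongest) can be sketched as a simple reordering function. The relevance scores are assumed to come from an upstream retrieval or ranking step; this is one possible heuristic, not a canonical algorithm:

```python
def arrange_context(chunks):
    """chunks: list of (relevance_score, text). Returns texts ordered so the
    highest-scoring chunks sit at the edges of the context window and the
    lowest-scoring ones land in the attention-degraded middle."""
    ranked = sorted(chunks, key=lambda c: c[0], reverse=True)
    front, back = [], []
    # Alternate placements: best chunk first, second-best last, and so on,
    # pushing progressively weaker chunks toward the middle.
    for i, (_, text) in enumerate(ranked):
        (front if i % 2 == 0 else back).append(text)
    return front + back[::-1]

chunks = [(0.9, "A"), (0.2, "D"), (0.7, "B"), (0.4, "C")]
print(arrange_context(chunks))  # → ['A', 'C', 'D', 'B']
```

Here the two most relevant chunks (A and B) occupy the start and end positions, while the weakest (D) is buried in the middle, matching the "lost in the middle" attention pattern the research describes.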


What Fuels AI Code Risks and How DevSecOps Can Secure Pipelines

AI-generated code refers to code snippets or entire functions produced by Machine Learning models trained on vast datasets. While these models can enhance developer productivity by providing quick solutions, they often lack the nuanced understanding of security implications inherent in manual coding practices. ... Establishing secure pipelines is the backbone of any resilient development strategy. When code flows rapidly from development to production, every step becomes a potential entry point for vulnerabilities. Without careful controls, even well-intentioned automation can allow flawed or insecure code to slip through, creating risks that may only surface once the application is live. A secure pipeline ensures that every commit, every integration, and every deployment undergoes consistent security scrutiny, reducing the likelihood of breaches and protecting both organizational assets and user trust. Security in the pipeline begins at the earliest stages of development. By embedding continuous testing, teams can catch vulnerabilities before they propagate, surfacing issues that traditional post-development checks often miss. This proactive approach allows security to move in tandem with development rather than trailing behind it, ensuring that speed does not come at the expense of safety.
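A pipeline security gate of the kind described can be as simple as a script that parses a scanner’s report and fails the stage above a severity threshold. The JSON schema below is made up for the example; adapt the parsing to whatever your SAST/SCA tool actually emits.

```python
import json

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_json: str, fail_at: str = "high") -> list:
    """Return the finding IDs that should block the deploy.

    `report_json` uses a hypothetical schema: {"findings": [{"id", "severity"}]}.
    """
    threshold = SEVERITY_RANK[fail_at]
    findings = json.loads(report_json).get("findings", [])
    return [f["id"] for f in findings
            if SEVERITY_RANK.get(f.get("severity", "low"), 1) >= threshold]

report = ('{"findings": [{"id": "CVE-2024-0001", "severity": "critical"},'
          ' {"id": "LINT-7", "severity": "low"}]}')
blockers = gate(report)
if blockers:
    print(f"BLOCKED: {blockers}")  # a real CI step would exit non-zero here
```

The value of a gate like this is that it runs on every commit, not only before release, so insecure code is stopped at the step that introduced it.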


The New Role of Enterprise Architecture in the AI Era

Traditional architecture assumes predictability: once the code has shipped, systems behave in a standard way. AI breaks that assumption completely, since machine learning models change continuously as data evolves and model performance fluctuates with every new dataset added. ... Architecture isn’t just a phase in the AI era; rather, it’s a continuous cycle that must operate across interconnected, well-defined stages. This process starts with discovery, where teams assess and identify AI opportunities directly linked to business objectives. Engage early with business leadership to define clear outcomes. Next comes design, where architects create modular blueprints for data pipelines and model deployment by reusing proven patterns. In the delivery phase, teams execute iteratively with governance built in from the outset. Ethics, compliance and observability should be baked into the workflows, not added later as afterthoughts. Finally, adaptation keeps the system learning. Models are monitored, retrained and optimized continuously, with feedback loops connecting system behavior back to business metrics and KPIs (key performance indicators). When architecture operates this way, it becomes a living ecosystem that learns, adapts and improves with every iteration.


Quenching Data Center Thirst for Power Now Is Solvable Problem

“Slowing data center growth or prohibiting grid connection is a short-sighted approach that embraces a scarcity mentality,” argued Wannie Park, CEO and founder of Pado AI, an energy management and AI orchestration company, in Malibu, Calif. “The explosive growth of AI and digital infrastructure is a massive engine for economic, scientific, and industrial progress,” he told TechNewsWorld. “The focus should not be on stifling this essential innovation, but on making data centers active, supportive participants in the energy ecosystem.” ... Planning for the full lifecycle of a data center’s power needs — from construction through long-term operations — is essential, he continued. This approach includes having solutions in place that can keep facilities operational during periods of limited grid availability, major weather events, or unexpected demand pressures, he said. ... The ITIF report also called for the United States to squeeze more power from the existing grid without negatively impacting customers, while also building new capacity. New technology can increase supply from existing transmission lines and generators, the report explained, which can bridge the transition to an expanded physical grid. On the demand side, it added, there is spare capacity, but not at peak times. It suggested that large users, such as data centers, be encouraged to shift their demand to off-peak periods without degrading service for their customers. Grids do some of that already, it noted, but much more is needed.


A Waste(d) Opportunity: How can the UK utilize data center waste heat?

Walking into the data hall, you are struck by the heat radiating from the numerous server racks, each capable of handling up to 20kW of compute. Rather than allowing this heat to dissipate into the atmosphere, however, the team at QMUL had another plan: in partnership with Schneider Electric, the university deployed a novel heat reuse system. ... Large water cylinders across campus act like thermal batteries, storing hot water overnight when compute needs are constant but demand is low, then releasing it in the morning rush. As one project lead put it, there is “no mechanical rejection. All the heat we generate here is used. The gas boilers are off or dialed down - the computing heat takes over completely.” At full capacity, the data center could supply the equivalent of nearly 4 million ten-minute showers per year. ... Walking out, it’s easy to see why Queen Mary’s project is being held up as a model for others. In the UK, however, the project is somewhat of an oddity, but through the lens of QMUL you can see a glimpse of the future, where compute is not only solving the mysteries of our universe but heating our morning showers. The question remains, though, why data center waste heat utilization projects in the UK are few and far between, and how the country can catch up to regions such as the Nordics, which have embedded waste heat utilization into the planning and construction of their data center sectors.
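The “4 million ten-minute showers” figure is easy to sanity-check. The inputs below (flow rate, temperature rise) are generic assumptions, not values from the article; under them, the claim works out to roughly 1.3 MW of continuous recovered heat, or on the order of 64 of the 20kW racks mentioned above.

```python
# Back-of-the-envelope check of the "4 million ten-minute showers" figure.
# All inputs below are assumptions for illustration, not numbers from the article.
FLOW_L_PER_MIN = 8        # typical shower flow rate
MINUTES = 10
TEMP_RISE_K = 30          # cold mains ~10 C heated to ~40 C
SPECIFIC_HEAT = 4186      # J per kg per K, water
SHOWERS_PER_YEAR = 4_000_000
HOURS_PER_YEAR = 8760
RACK_KW = 20              # per-rack load quoted in the article

litres = FLOW_L_PER_MIN * MINUTES                  # 80 L, ~80 kg of water
joules_per_shower = litres * SPECIFIC_HEAT * TEMP_RISE_K
kwh_per_shower = joules_per_shower / 3.6e6         # ~2.8 kWh
avg_kw = kwh_per_shower * SHOWERS_PER_YEAR / HOURS_PER_YEAR
print(f"{kwh_per_shower:.2f} kWh per shower")
print(f"~{avg_kw/1000:.2f} MW continuous, i.e. ~{avg_kw/RACK_KW:.0f} racks at {RACK_KW} kW")
```

Under these assumptions the figure is plausible for a campus-scale facility: a low-single-digit-megawatt data hall running around the clock could indeed cover that many showers.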


Redefining cyber-resilience for a new era

The biggest vulnerability is still the human factor, not the technology. Many companies invest in expensive tools but overlook the behaviour and mindset of their teams. In regions experiencing rapid digital growth, that gap becomes even more visible. Phishing, credential theft and shadow IT remain common ways attackers gain access. What’s needed is a shift in culture. Cybersecurity should be seen as a shared responsibility, embedded in daily routines, not as a one-time technical solution. True resilience begins with awareness, leadership and clarity at all levels of the organisation. ... Leaders play a crucial role in shaping that future. They need to understand that cybersecurity is not about fear, but about clarity and long-term thinking. It is part of strategic leadership. The leaders who make the biggest impact will be the ones who see cybersecurity as cultural, not just technical. They will prioritise transparency, invest in ethical and explainable technology, and build teams that carry these values forward. ... Artificial Intelligence is already transforming how we detect and respond to threats, but the more important shift is about ownership. Who controls the infrastructure, the models and the data? Centralised AI, controlled by a few major companies, creates dependence and limits transparency. It becomes harder to know what drives decisions, how data is used and where vulnerabilities might exist.


Building Your Geopolitical Firewall Before You Need One

In today’s world, where regulators are rolling out data sovereignty and localization initiatives that turn every cross-border workflow into a compliance nightmare, this is no theoretical exercise. Service disruption has shifted from possibility to inevitability, and geopolitical moves can shut down operations overnight. For storage engineers and data infrastructure leaders, the challenge goes beyond mere compliance – it’s about building genuine operational independence before circumstances force your hand. ... The reality is messier than any compliance framework suggests. Data sprawls everywhere, from edge, cloud and core to laptops and mobile devices. Building walls around everything does not offer true operational independence. Instead, it’s really about having the data infrastructure flexibility to move workloads when regulations shift, when geopolitical tensions escalate, or when a foreign government’s legislative reach suddenly extends into your data center. ... When evaluating sovereign solutions, storage engineers typically focus on SLAs and certifications. However, Oostveen argues that the critical question is simpler and more fundamental: who actually owns the solution or the service provider? “If you’re truly sovereign, my view is that you (the solution provider) are a company that is owned and operated exclusively within the borders of that particular jurisdiction,” he explains.


The 5 elements of a good cybersecurity risk assessment

Companies can use a cybersecurity risk assessment to evaluate how effective their security measures are. This provides a foundation for deciding which security measures are important — and which are not — and for deciding when a product or system is secure enough and additional measures would be excessive; in short, when they’ve done enough cybersecurity. However, not every risk assessment fulfills this promise. ... Too often, cybersecurity risk assessments take place solely in cyberspace — but this doesn’t allow meaningful prioritization of requirements. “Server down” is annoying, but cyber systems never exist for their own sake. That’s why risk assessments need a connection to real processes that are mission critical for the organization — or perhaps not. ... Without system understanding, there is no basis for attack modeling. Without attack modeling, there is no basis for identifying the most important requirements. It shouldn’t really be cybersecurity’s job to create system understanding. But since there is often a lack of documentation in IT, OT, or for cyber systems in general, cybersecurity is often left to provide it. And if cybersecurity is the first team to finally create an overview of all cyber systems, then it’s a result that is useful far beyond security risk assessment. ... Attack scenarios are a necessary stepping stone to move your thinking from systems and real-world impacts to meaningful security requirements — no more and no less.
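The traceability chain the article argues for (mission-critical process, the systems behind it, the attack scenario, the requirements it justifies) can be made concrete as a small data model. The classes and field values below are illustrative inventions, not from the article.

```python
from dataclasses import dataclass, field

@dataclass
class AttackScenario:
    """One scenario linking cyber systems to a real-world business impact."""
    description: str
    targets: list                                  # cyber systems involved
    business_impact: str                           # the real process that is hit
    requirements: list = field(default_factory=list)  # measures this scenario justifies

scenario = AttackScenario(
    description="Phished engineering account used to alter PLC setpoints",
    targets=["engineering workstation", "PLC network"],
    business_impact="Production line halts; safety interlocks at risk",
    requirements=["MFA on engineering accounts", "network segmentation of OT"],
)
```

The discipline the structure enforces is the article’s point: a requirement with no scenario behind it, or a scenario with no business impact, has no claim to priority.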


Finding Strength in Code, Part 2: Lessons from Loss and the Power of Reflection

Every problem usually has more than one solution. The engineers who grow the fastest are the ones who can look at their own mistakes without ego, list what they’re good at and what they’re not, and then actually see multiple ways forward. Same with life. A loss (a pet, a breakup, whatever) is a bug that breaks your personal system. ... Solo debugging has limits. On sprawling systems, we rally the squad—frontend, backend, QA—to converge faster. Similarly, grief isn’t meant for isolation. I’ve leaned on my network: a quick Slack thread with empathetic colleagues or a vulnerability share in my dev community. It distributes the load and uncovers blind spots you might miss on your own. ... Once a problem is solved, it is essential to communicate the solution and list the lessons from it: some companies solve problems but never put the effort into documenting the process in a way that prevents them from happening again. I know it is impossible to avoid problems, as it is impossible not to make mistakes in our lives. The true inefficiency? Skipping the “why” and “how next time.” ... Borrowed from incident response, a postmortem is a structured debrief that prevents recurrence without finger-pointing. In engineering, it ensures resilience; in life, it builds emotional antifragility. There are endless flavours of postmortems—simple Markdown outlines to full-blown docs—but the gold standard is “blameless,” focusing on systems over scapegoats.


Cyber resilience is a business imperative: skills and strategy must evolve

Cyber upskilling must be built into daily work for both technical and non-technical employees. It’s not a one-off training exercise; it’s part of how people perform their roles confidently and securely. For technical teams, staying current on certifications and practicing hands-on defense is essential. Labs and sandboxes that simulate real-world attacks give them the experience needed to respond effectively when incidents happen. For everyone else, the focus should be on clarity and relevance. Employees need to understand exactly what’s expected of them and how their individual decisions contribute to the organization’s resilience. Role-specific training makes this real: finance teams need to recognize invoice fraud attempts; HR should know how to handle sensitive data securely; customer service needs to spot social engineering in live interactions. ... Resilience should now sit alongside financial performance and sustainability as a core board KPI. That means directors receiving regular updates not only on threat trends and audit findings, but also on recovery readiness, incident transparency, and the cultural maturity of the organization’s response. Re-engaging boards on this agenda isn’t about assigning blame—it’s about enabling smarter oversight. When leaders understand how resilience protects trust, continuity, and brand, cybersecurity stops being a technical issue and becomes what it truly is: a measure of business strength.