
Daily Tech Digest - December 27, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



Leading In The Age Of AI: Five Human Competencies Every Modern Leader Needs

Leaders are surrounded by data, metrics and algorithmic recommendations, but decision quality depends on interpretation rather than volume. Insight is the ability to turn information and diverse perspectives into clarity. It requires curiosity, patience and the humility to question assumptions. Leaders who demonstrate this capability articulate complex issues clearly, invite dissent before deciding and translate analysis into meaningful direction. ... Integration is the capability to design environments where human creativity and machine intelligence reinforce one another. Leaders strong in this capability align technology with purpose and culture, encourage experimentation and ensure that tools enhance human capability rather than replacing reflection and judgment. The aim is capability at scale, not efficiency at any cost. ... Inspiration is the ability to energize people by helping them see what is possible and how their work contributes to a larger purpose. It is grounded optimism rather than polished enthusiasm. Leaders who inspire use story, clarity and authenticity to create shared commitment rather than simple compliance. When purpose becomes personal, contribution follows. ... It is not only about speed or quarterly numbers. It is about sustainable value for people, organizations and society. Leaders strong in this capability balance performance with well-being and growth, adapt strategy based on real feedback and design systems that strengthen capacity over time instead of exhausting it.


Big shifts that will reshape work in 2026

We’re moving into a new chapter where real skills and what people can actually do matter more than degrees or job titles. In 2026, this shift will become the standard across organisations in APAC. Instead of just looking for certificates, employers are now keen to find people who can show adaptability, pick up new things quickly, and prove their expertise through action. ... as helpful as AI can be, there’s a catch. Technology can make things faster and smarter, but it’s not a substitute for the human touch—creativity, empathy, and making the right call when it matters. The real test for leaders will be making sure AI helps people do their best work, not strip away what makes us human. That means setting clear rules for how AI is used, helping employees build digital skills, and keeping trust at the centre of it all. Organisations that succeed will strike a balance: leveraging AI’s analytical power to unlock efficiencies, while empowering people to focus on the relational, imaginative, and moral dimensions of work. ... Employee wellbeing is set to become the foundation of the future of work. No longer a peripheral benefit or a box to check, wellbeing will be woven into organisational culture, shaping every aspect of the employee experience. ... Purpose is emerging as the new currency of talent attraction and retention, particularly for Gen Z and millennials, who are steadfast in their desire to work for organisations that reflect their personal values. 


How AI could close the education inequality gap - or widen it

On one side are those who say that AI tools will never be able to replace the teaching offered by humans. On the other side are those who insist that access to AI-powered tutoring is better than no access to tutoring at all. The one thing that can be agreed on across the board is that students can benefit from tutoring, and fair access remains a major challenge -- one that AI may be able to smooth over. "The best human tutors will remain ahead of AI for a long time yet to come, but do most people have access to tutors outside of class?" said Mollick. To evaluate educational tools, Mollick uses what he calls the "BAH" test, which measures whether a tool is better than the best available human a student can realistically access. ... AI tools that function like a tutor could also help students who don't have the resources to access a human tutor. A recent Brookings Institution report found that the largest barrier to scaling effective tutoring programs is cost, estimating a requirement of $1,000 to $3,000 per student annually for high-impact models. Because private tutoring often requires financial investment, it can drive disparities in educational achievement. Aly Murray experienced those disparities firsthand. Raised by a single mother who immigrated to the US from Cuba, Murray grew up as a low-income student and later recognized how transformative access to a human tutor could have been.


Shift-Left Strategies for Cloud-Native and Serverless Architectures

The whole architectural framework of shift-left security depends on moving critical security practices earlier in the development lifecycle. Incorporating security in the development lifecycle should not be an afterthought. Within this context, teams are empowered to identify and eliminate risks at design time, build time, and during CI/CD — not after. These modern workloads are highly dynamic and interconnected, and a single mishap can trickle down across the entire environment. ... Serverless Functions can introduce issues if they run with excessive privileges. This can be addressed by simply embedding permissions checks early in the development lifecycle. A baseline of minimum required identity and access management (IAM) privileges should be enforced to keep development tight. Wildcards or broad permissions should be avoided in this context. Also, it makes sense to use runtime permission boundary generation — otherwise, functions can be compromised without appropriate safeguards. ... In modern-day cloud environments, it is crucial that observability is considered a major priority. Shifting left within the context of observability means logs, metrics, traces, and alerts are integrated directly into the application from day one. AWS CloudWatch or DataDog metrics can be integrated into the application code so that developers can keep an eye on the critical behaviors of the application.
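To make the day-one observability point concrete, here is a minimal sketch of emitting a custom metric directly from a Lambda handler with boto3 and CloudWatch. The OrderService namespace, the process_order stub, and the metric names are illustrative assumptions, not taken from the article.

```python
import time

import boto3

cloudwatch = boto3.client("cloudwatch")


def process_order(event):
    # Placeholder for real business logic.
    return {"statusCode": 200}


def lambda_handler(event, context):
    start = time.time()
    status = "Error"
    try:
        result = process_order(event)
        status = "Success"
        return result
    finally:
        # Emit a custom latency metric so the function's behavior is
        # observable from the very first deployment.
        cloudwatch.put_metric_data(
            Namespace="OrderService",
            MetricData=[{
                "MetricName": "HandlerLatencyMs",
                "Value": (time.time() - start) * 1000.0,
                "Unit": "Milliseconds",
                "Dimensions": [{"Name": "Status", "Value": status}],
            }],
        )
```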


Agentic AI and Autonomous Agents: The Dawn of Smarter Machines

At their core, agentic AI and autonomous agents rely on a few powerhouse components: planning, reasoning, acting, and tool integration. Planning is the blueprint phase: the AI breaks a goal into subtasks, like mapping out a road trip with stops for gas and sights. Reasoning kicks in next, where it evaluates options using logic, past data, or even ethical guidelines (more on that later). Acting is the execution: interfacing with the real world via APIs, databases, or even physical robots. And tool integration?  ... Diving deeper, it’s worth comparing agentic AI to other paradigms to see why it’s a game-changer. Standalone LLMs, like basic GPT models, are fantastic for generating text but falter on execution — they can’t “do” things without external help. Agentic systems bridge that by embedding action loops. Multi-agent setups take it further: Imagine a team of specialized agents collaborating, one for research, another for analysis, like a virtual task force. ... Looking ahead, the future of agentic AI feels electric yet cautious. By 2030, I predict multi-agent collaborations becoming standard, with advancements in human-in-the-loop designs to mitigate ethics pitfalls — like ensuring transparency in decision-making or preventing job displacement. OpenAI’s push for standardized frameworks addresses this, but we must grapple with questions: Who owns the data agents learn from? How do we audit autonomous actions?


Operationalizing Data Strategy with OKRs: From Vision to Execution

For any business, some of the most critical data-driven initiatives and priorities include risk mitigation, revenue growth, and customer experience. To drive more effectiveness and accuracy in such business functions, finding ways to blend the technical output and performance data with tangible business outcomes is important. You must also proactively assess the shortcomings and errors in your data strategy to identify and correct any misaligned priorities. ... OKRs can empower data teams to leverage analytics and data sources to deliver highly actionable, timely insights. Set measurable and time-bound objectives to ensure focus and drive tangible progress toward your goals by leveraging an OKR platform, creating visually appealing dashboards, and assigning accountability to employees. ... If your high-level vision is “to become a data-driven organization,” the most effective way to work toward it is to break it into specific and measurable objectives. More importantly, consider segmenting your core strategy into multiple use cases, like operations optimization, customer analytics, and regulatory compliance. With these easily trackable segments, improve your focus and enable your teams to deliver incremental value. ... By tying OKRs to processes like governance and quality, you can ensure that they become measurable and visible priorities, reducing incidents and building confidence in analytics-based projects and processes.


This tiny chip could change the future of quantum computing

At the heart of the technology are microwave-frequency vibrations that oscillate billions of times per second. These vibrations allow the chip to manipulate laser light with remarkable precision. By directly controlling the phase of a laser beam, the device can generate new laser frequencies that are both stable and efficient. This level of control is a key requirement not only for quantum computing, but also for emerging fields such as quantum sensing and quantum networking. ... The new device generates laser frequency shifts through efficient phase modulation while using about 80 times less microwave power than many existing commercial modulators. Lower power consumption means less heat, which allows more channels to be packed closely together, even onto a single chip. Taken together, these advantages transform the chip into a scalable system capable of coordinating the precise interactions atoms need to perform quantum calculations. ... The researchers are now working on fully integrated photonic circuits that combine frequency generation, filtering, and pulse shaping on a single chip. This effort moves the field closer to a complete, operational quantum photonic platform. Next, the team plans to partner with quantum computing companies to test these chips inside advanced trapped-ion and trapped-neutral-atom quantum computers.


The 5-Step Framework to Ensure AI Actually Frees Your Time Instead of Creating More Work

Success with AI isn’t measured by the number of automations you have deployed. True AI leverage is measured by the number of high-value tasks that can be executed without oversight from the business owner. ... Map what matters most — It’s critical to focus your energy on where it matters the most. Look through your processes to identify bottlenecks and repetitive decisions or tasks that don’t need your input. ... Design roles before rules — Figure out where you need human ownership in your processes. These will be activities that require traits like empathy, creative thinking and high-level strategy. Once the roles are established, you can build automation that supports those roles. ... Document before you delegate — Both humans and machines need clear direction. Be sure to document any processes, procedures, and SOPs before delegating or automating them. ... Automate boring and elevate brilliant — Your primary goal with automation is to free up your time for creating, strategizing and building relationships. Of course, the reality is that not everything should be automated. ... Measure output, not inputs — Too many entrepreneurs spend their time focused on what their team and AI agents are doing and not what they are achieving. Intentional automation requires placing your focus on outputs to ensure the processes you have in place are working effectively, or where they can be improved.


The next big IT security battle is all about privileged access

As the space matures, privileged access workflows will increasingly depend on adaptive authentication policies that validate identity and device posture in real time. Vendors that offer flexible passwordless frameworks and integrations with existing IAM and PAM systems will see increased market traction. This will mark a shift toward the promised end of passwords, eliminating one of the most exploited attack vectors in privilege abuse and account takeovers. ... Instead of relying solely on human auditors or predefined rules, IAM/PAM solutions will use generative AI to summarize risky session activities, detect lateral movement indicators, and suggest remediations in real time. AI-assisted security will make privileged access oversight continuous and contextual, helping enterprises detect insider threats and compromised accounts faster than ever before. This will also move the industry toward autonomous access governance. ... Compromised privileged credentials will remain the single most direct path to data loss, and a sharp rise in targeted breaches, ransomware campaigns, and supply-chain intrusions involving administrative accounts will elevate IAM/PAM to a board-level concern in 2026. Enterprises will accelerate investments in vendor privileged access tools to mitigate risk from contractors, managed service providers, and external support staff.


Mentorship and Diversity: Shaping the Next Generation of Cyber Experts

For those considering a career in cybersecurity, Voight's advice is both practical and inspiring: follow your passion and embrace the industry's constant evolution. Whether you're starting in security operations or exploring niche areas like architecture and engineering, the key is to stay curious and committed to learning. As artificial intelligence and automation reshape the field, Voight remains optimistic, assuring that human expertise will always be essential, encouraging aspiring professionals to dive into a field brimming with opportunity, innovation, and the chance to make a meaningful impact. ... Cybersecurity is fascinating and offers many paths of entry. You don't necessarily need a specific academic program to get involved. The biggest piece is having a passion for it. The more you love learning about this industry, the better it will be for you in the long run. It's something you do because you love it. ... Sometimes, it's the people and teams you work with that make the job exciting. You want to be doing something new and exciting, something you can embrace and contribute to. Keep an open mind to all the different paths. There isn't one direct path, and not everyone will become a Chief Information Security Officer (CISO). Being a CISO may not be the role everyone imagines it to be when considering the responsibilities involved.

Daily Tech Digest - October 10, 2025


Quote for the day:

“Whether you think you can or you think you can’t, you’re right.” -- Henry Ford



Has the value of data increased?

“We’ve seen that AI’s true potential is unlocked by connecting trusted, governed data – structured and unstructured – with real-time analytics and decision intelligence. With the rise of agentic AI, the next wave of value creation will come from intelligent systems that don’t just interpret data, but continuously and autonomously act on it at scale. Put simply, AI isn’t a shortcut to insight – it’s a multiplier of value, if the data is ready. Enterprises that treat data as an afterthought will fall behind, while those that treat it as a strategic asset will lead,” added the Qlik CSO. ... “In this AI economy, compute power may set the pace, but data sets the ceiling. MinIO raises that ceiling, transforming scattered, hard-to-reach datasets into a living, high-performance fabric that fuels every AI prompt and initiative. With MinIO AIStor, organizations gain the ability to store and understand. Data that is secure, fluid, and always ready for action is a competitive weapon,” added Kapoor. ... “Data that is fresh, well described and policy aware beats bigger but blind datasets because it can be safely composed, reused and measured for impact, with the lineage to show teams what to trust and what to fix so they can ship faster,” said Neat. ... While there is no question, really, of whether the value of data has increased and, further, whether the proliferation of AI has been fundamental to that value escalation, the mechanics as variously described here should point us towards the new wave of emerging truths in this space.


Whose Ops is it Anyway? How IDPs, AI and Security are Evolving Developer Culture

For many teams, the problem is not a lack of enthusiasm or ambition but a shortage of resources and skills. They want to automate more, streamline workflows, and adopt new practices, yet often find themselves already operating at full capacity just keeping existing systems running. In that environment, even the slightest step toward more advanced automation strategies can feel like a big leap forward. ... On the security side, the logic behind DevSecOps is compelling. More companies are realising that security has to be baked in from day one, not bolted on later. The difficulty lies in making that shift a practical reality, as integrating security checks early in the pipeline often requires new tooling, changes to established workflows, and in some cases, rethinking the roles and responsibilities within the team. ... In many organisations, it is the existing DevOps or platform teams that are best positioned to take on this responsibility, extending their remit into what is often referred to as MLOps. These teams already have experience building and maintaining shared infrastructure, managing pipelines, and ensuring operational stability at scale, so expanding those capabilities to handle data science and machine learning workflows can feel like a natural evolution. ... That said, as adoption grows, we can also expect to see more specialised MLOps roles appearing, particularly in larger enterprises or in organisations where AI is a major strategic focus.


The ultimate business resiliency test: Inside Kantsu’s ransomware response

Kantsu then began collaborating with the police, the cyberattack response teams of the company’s insurers, and security specialists to confirm the scope of cyber insurance coverage and estimate the amount of damage. ... when they began the actual recovery work, they encountered an unexpected pitfall. “We considered how to restore operations as quickly as possible. We did a variety of things, including asking other companies in the same industry to send packages, even ignoring our own profits,” Tatsujo says. ... To prevent reinfection with ransomware, the company prohibited use of old networks and PCs. Tethering was used, with smartphones as Wi-Fi routers. Where possible, this was used to facilitate shipping. New PCs were purchased to create an on-premises environment. ... “In times of emergency like this, the most important thing is cash to recover as quickly as possible, rather than cost reduction. However, insurance companies do not pay claims immediately. ... “In the end, many customers cooperated, which made me really happy. Rakuten Ichiba, in particular, offers a service called ‘Strongest Delivery,’ which allows for next-day delivery and delivery time specification, but they were considerate enough to allow us a grace period in consideration of the delay in delivery,” says President Tatsujo.


Stablecoins: The New Currency of Online Criminals

Practitioners say a cluster of market and technical factors are making stablecoins the payment of choice for cybercriminals and fraudsters. "It's not just the dollar peg that makes stablecoins attractive," said Ari Redbord, vice president and global head of policy and government affairs at TRM Labs. "Liquidity is critical. There are deep pools of stablecoin liquidity on both centralized and decentralized platforms. Settlement speed and irreversibility are also appealing for criminals trying to move large sums quickly," he told Information Security Media Group. The perception of stability - knowing $1 today will likely be $1 tomorrow - often suffices for illicit actors, regardless of an issuer's exact collateral model, he said. This stability and on-chain plumbing create both opportunity and exposure. Redbord said the spike in stablecoin usage is partly because law enforcement agencies around the world have become "exceptionally effective at tracing and seizing bitcoin," and criminals "go where the liquidity and usability are." There is no technical attribute of stablecoins that makes them more appealing to criminals or harder to trace, compared to other cryptocurrencies, Koven said. In practice, public ledgers keep transfers visible; the question then becomes whether investigators have the right tools and the cooperation of the ecosystem's gatekeepers to follow value across chains.


Zero Trust cuts incidents but firms slow to adopt AI security

Zero Trust is increasingly viewed as the standard going forward. As AI-driven threats accelerate, organisations must evaluate security holistically across identity, devices, networks, applications, and data. At DXC, we're helping customers embed Zero Trust into their culture and technology to safeguard operations. Our end-to-end expertise makes it possible to both defend against AI threats and harness secure AI in the same decisive motion. ... New cybersecurity threats are the primary driver for updating Zero Trust frameworks, with 72% of respondents indicating that the evolving threat landscape pushes them to continuously upgrade policies and practices. In addition, more than half of responding organisations recognised improvements in user experience as a secondary benefit of adopting Zero Trust approaches, beyond the gains in security posture. ... Most enterprises already rely on Microsoft Entra ID and Microsoft 365 as the backbone of their IT environments. Building Zero Trust solutions alongside DXC extends that value, enabling tighter integration, simplified operations, and greater visibility and control. By consolidating around the Microsoft stack, organisations can reduce complexity, cut costs, and accelerate their Zero Trust journey. ... Participants in the study agreed that Zero Trust is not a project with a defined end point. Instead, it is an ongoing process that requires continuous monitoring, regular updates, and cultural adaptation.


Overcome Connectivity Challenges for Edge AI

The challenges of AI at the Edge are as large as the advantages, however. One of the biggest challenges and key enablement technologies is connectivity. Edge processing and AI at the Edge require reliability, low latency, and resiliency in the harshest of environments. Without good connections to the network, many of the advantages of Edge AI are diminished, or lost entirely. A truly rugged Edge AI system requires a dual focus on connectivity, according to the experts at ATTEND. It needs both robust external I/O to interface with the outside world, and high-speed, resilient internal interconnects to manage data flow within the computing module. ... The transition to Edge AI is not just a software challenge; it is a hardware and systems engineering challenge. The key to overcoming this dual challenge is to engage with a partner like ATTEND, who will understand that the reliability of an advanced AI model is ultimately dependent on the physical-layer components that capture and transmit its data. By offering a comprehensive portfolio that addresses connectivity from the external sensor to the internal processor module, ATTEND can help you to build end-to-end systems that are both powerful and resilient. To meet with ATTEND and see all that they are doing to advance and enable true intelligence at the Edge, meet with them at embedded world North America in November at the Anaheim Convention Center.


AI Security Goes Mainstream as Vendors Spend Heavily on M&A

One of the most significant operational gaps in AI adoption is the lack of runtime observability, with organizations struggling to know what data a model is ingesting or what it's producing. Observability answers these questions by providing a live view of AI behavior across prompts, responses and system interactions, and it is a precursor to regulating or securing AI systems. ... One of the biggest risks of GenAI in the enterprise is data leakage, with workers inadvertently pasting confidential information into a chatbot, models regurgitating sensitive data it was exposed to during training, or adversaries crafting prompts to extract private information through jailbreaking. Allowing AI access without control is equivalent to opening an unsecured API to your crown jewels. ... Output is just as risky as input with GenAI since an LLM could generate sensitive content, malicious code or incorrect results that are trusted by downstream systems or users. Palo Alto Networks' Arora noted the need for bi-directional inspection to watch not only what goes into large language models, but also what comes out. ... Another key challenge is defining identity in a non-human context, raising questions around how AI agents should be authenticated, what permissions AI agents should have and how to prevent escalation or impersonation. Enterprises must treat bots, copilots, model endpoints and LLM-backed workflows as identity-bearing entities that log in, take action, make decisions and access sensitive data.
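As a rough illustration of the bi-directional inspection idea (not Palo Alto Networks' or any vendor's actual implementation), a thin wrapper can screen both the prompt going into a model and the response coming out. The patterns and the injected call_model function below are simplified assumptions standing in for a real DLP or secret-scanning engine.

```python
import re

# Naive patterns standing in for a real DLP / secret-scanning engine.
BLOCKLIST = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),      # AWS access key IDs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-like strings
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
]


def violates_policy(text: str) -> bool:
    return any(pattern.search(text) for pattern in BLOCKLIST)


def guarded_completion(prompt: str, call_model) -> str:
    # Inspect what goes INTO the model ...
    if violates_policy(prompt):
        raise ValueError("prompt blocked: possible sensitive data")
    response = call_model(prompt)
    # ... and what comes OUT of it.
    if violates_policy(response):
        return "[response withheld: sensitive content detected]"
    return response
```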


Navigating the Techno-Future: Between Promise and Prudence

On one side are the techno-optimists: the believers in inexorable progress, the proponents of markets and innovation as self-correcting forces. They see every challenge as a technical problem and every failure as a design flaw waiting to be solved. On the other side are techno-pessimists: the prophets of collapse who warn that every new tool will inevitably accelerate inequality, erode democracy, or catalyze ecological catastrophe. They see history as a cautionary tale, and the present as a fragile prelude to systemic failure. Both perspectives share a common flaw: they treat the future as preordained. Optimists assume that progress will automatically yield good outcomes; pessimists assume that progress will inevitably lead to harm. Reality, however, is far less deterministic. Technology, in itself, is neutral. It amplifies human choices but does not dictate them. ... Just as a hammer can build a home or inflict injury, a powerful technology like artificial intelligence, gene editing, or blockchain can be used to improve lives or to exacerbate inequalities. The technology does not prescribe its use; humans do. This neutrality is both liberating and daunting. On the one hand, it affirms that progress is not predestined. The future is not a straight line determined by the mere existence of certain tools. 


CISOs prioritise real-time visibility as AI reshapes cloud security

The top priority for CISOs is real-time threat monitoring and comprehensive visibility into all data in motion across their organisations, supporting a defence-in-depth strategy. However, 97 percent of CISOs acknowledged making compromises in areas such as visibility gaps, tool integration and data quality, which they say limit their ability to fully secure and manage hybrid cloud environments. ... The reliance on AI is also causing a revision of how SOCs (security operations centres) function. Almost one in five CISOs reported lacking the appropriate tools to manage the increased network data volumes created by AI, underscoring that legacy log-based tools may not be fit for purpose against AI-powered threats. ... Rising data breaches, with a 17 percent increase year on year, are translating into greater pressure on CISOs, 45 percent of whom said they are now the main person held accountable in the event of a breach. There is also concern about stress and burnout within cybersecurity teams, which is driving a greater embrace of AI-based security tools. ... The adoption of AI is expected to have practical impacts, such as enabling junior analysts to perform at the same level as more experienced team members, reducing training costs, speeding up analysis while investigating threats, and improving overall visibility for the security function.


Serverless Security Risks Are Real, and Hackers Know It

Many believe, “No servers, no security risks.” That’s a myth. Nowadays, attackers take advantage of the specific security weaknesses found in serverless platforms. ... Serverless applications depend on third-party libraries to operate, and each function that relies on a compromised component becomes vulnerable to attack. In one hijack of an npm package, hackers slipped malicious code into the library; once the compromised package was bundled into AWS Lambda functions, it silently extracted all of their environment variables: API keys, credentials, and other sensitive data. The whole process finished in milliseconds, too brief for any security system to catch. ... As more companies adopt serverless technologies, security risks become more widespread, so it’s fundamental to validate that serverless environments are secure. Research indicates that serverless computing is expected to grow rapidly. According to Gartner’s July 2025 forecast, global IT spending will climb to $5.43 trillion, with enterprises investing billions into AI-driven cloud and data center infrastructure, making serverless platforms an increasingly critical, but often overlooked, security target.
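One way to shrink the blast radius of the environment-variable theft described above is to keep credentials out of the environment entirely and resolve them at runtime. A minimal sketch, assuming AWS Secrets Manager via boto3 and a hypothetical secret named payments/api:

```python
import json

import boto3

_secrets = boto3.client("secretsmanager")
_cache: dict = {}


def get_secret(secret_id: str) -> dict:
    """Resolve a credential at runtime instead of baking it into env vars."""
    if secret_id not in _cache:
        resp = _secrets.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(resp["SecretString"])
    return _cache[secret_id]


def lambda_handler(event, context):
    # Hypothetical secret name; nothing sensitive ever sits in os.environ,
    # so a hijacked dependency dumping the environment finds no credentials.
    api_key = get_secret("payments/api")["key"]
    return {"statusCode": 200}
```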

Daily Tech Digest - May 07, 2025


Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad


Real-world use cases for agentic AI

There’s a wealth of public code bases on which models can be trained. And larger companies typically have their own code repositories, with detailed change logs, bug fixes, and other information that can be used to train or fine-tune an AI system on a company’s internal coding methods. As AI model context windows get larger, these tools can look through more and more code at once to identify problems or suggest fixes. And the usefulness of AI coding tools is only increasing as developers adopt agentic AI. According to Gartner, AI agents enable developers to fully automate and offload more tasks, transforming how software development is done — a change that will force 80% of the engineering workforce to upskill by 2027. Today, there are several very popular agentic AI systems and coding assistants built right into integrated development environments, as well as several startups trying to break into the market with an AI focus out of the gate. ... Not every use case requires a full agentic system, he notes. For example, the company uses ChatGPT and reasoning models for architecture and design. “I’m consistently impressed by these models,” Shiebler says. For software development, however, using ChatGPT or Claude and cutting-and-pasting the code is an inefficient option, he says.


Rethinking AppSec: How DevOps, containers, and serverless are changing the rules

Application security and developers have not always been on friendly terms, but in practice, innovative security solutions are bridging the gaps, bringing developers and security closer together in a seamless fashion, with security no longer being a hurdle in developers’ daily work. Quite the contrary – security is nested in CI/CD pipelines, it’s accessible, non-obstructive, and it’s gone beyond scanning for waves and waves of false-positive vulnerabilities. It’s become, and is poised to remain, about empowering developers to fix issues early, in context, and without affecting delivery and its velocity. ... Another considerable battleground is identity. With reliance on distributed microservices, each component acts as both client and server, so misconfigured identity providers or weak token validation logic make room for lateral movement and exponentially increased attack opportunities. Without naming names, there are plenty of cases illustrating how breaches can occur from token forgery or authorization header manipulations. Additional headaches are exposed APIs and shadow services. Developers create new endpoints, and due to the fast pace of the process, they can easily escape scrutiny, further emphasizing the importance of continuous discovery and dynamic testing that will “catch” those endpoints and ensure they’re covered in securing the development process.
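To ground the "weak token validation" point, here is a minimal sketch of strict JWT verification in a microservice, using the PyJWT library. The issuer, audience, and JWKS URL are hypothetical placeholders, not taken from the article.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Hypothetical identity provider and audience, for illustration only.
ISSUER = "https://idp.example.com/"
AUDIENCE = "orders-api"
jwks = PyJWKClient(ISSUER + ".well-known/jwks.json")


def validate_token(token: str) -> dict:
    """Verify signature, issuer, audience, and expiry before trusting any claim."""
    signing_key = jwks.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],                 # never accept unexpected algorithms
        audience=AUDIENCE,                    # token must be minted for this service
        issuer=ISSUER,                        # and issued by the IdP we trust
        options={"require": ["exp", "iat"]},  # reject tokens missing expiry claims
    )
```

Skipping any one of these checks (for example, accepting tokens without verifying the audience) is exactly the kind of gap that enables the lateral movement the article warns about.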


The Hidden Cost of Complexity: Managing Technical Debt Without Losing Momentum

Outdated, fragmented, or overly complex systems become the digital equivalent of cognitive noise. They consume bandwidth, blur clarity, and slow down both decision-making and delivery. What should be a smooth flow from idea to outcome becomes a slog. ... In short, technical debt introduces a constant low-grade drag on agility. It limits responsiveness. It multiplies cost. And like visual clutter, it contributes to fatigue—especially for architects, engineers, and teams tasked with keeping transformation moving. So what can we do? Assess System Health: Inventory your landscape and identify outdated systems, high-maintenance assets, and unnecessary complexity. Use KPIs like total cost of ownership, incident rates, and integration overhead. Prioritize for Renewal or Retirement: Not everything needs to be modernized. Some systems need replacement. Others, thoughtful containment. The key is intentionality. ... Technical debt is a measure of how much operational risk and complexity is lurking beneath the surface. It’s not just code that’s held together by duct tape or documentation gaps—it’s how those issues accumulate and impact business outcomes. But not all technical debt is created equal. In fact, some debt is strategic. It enables agility, unlocks short-term wins, and helps organizations experiment quickly.


The Cost Conundrum of Cloud Computing

When exploring cloud pricing structures, the initial costs may seem quite attractive but after delving deeper to examine the details, certain aspects may become cloudy. The pricing tiers add a layer of complexity which means there isn’t a single recurring cost to add to the balance sheet. Rather, cloud fees vary depending on the provider, features, and several usage factors such as on-demand use, data transfer volumes, technical support, bandwidth, disk performance, and other core metrics, which can influence the overall solution’s price. However, the good news is there are ways to gain control of and manage these costs. ... Whilst understanding the costs associated with using a public cloud solution is critical, it is important to emphasise that modern cloud platforms provide robust, comprehensive and cutting-edge technologies and solutions to help drive businesses forward. Cloud platforms provide a strong foundation of physical infrastructure, robust platform-level services, and a wide array of resilient connectivity and data solutions. In addition, cloud providers continually invest in the security of their solutions to physically and logically secure the hardware and software layers with access control, monitoring tools, and stringent data security measures to keep the data safe.



Operating in the light, and in the dark (net)

While the takedown of sites hosting CSA cannot be directly described in the same light, the issue is ramping up. The Internet continues to expand - like the universe - and attempting to monitor it is a never-ending challenge. As IWF’s Sexton puts it: “Right now, the Internet is so big that it’s sort of anonymity with obscurity.” While some emerging (and already emerged) technologies such as AI can play a role in assisting those working on the side of the light - for example, the IWF has tested using AI for triage when assessing websites with thousands of images, and AI can be trained for content moderation by industry and others - the proliferation of AI has also added to the problem. AI-generated content has now also entered the scene. From a legality standpoint, it remains the same as CSA content. Just because an AI created it does not mean that it’s permitted - at least in the UK where IWF primarily operates. “The legislation in the UK is robust enough to cover both real material, photo-realistic synthetic content, or sheerly synthetic content. The problem it does create is one of quantity. Previously, to create CSA, it would require someone to have access to a child and conduct abuse. “Then with the rise of the Internet we also saw an increase in self-generated content. Now, AI has the ability to create it without any contact with a child at all. People now have effectively an infinite ability to generate this content.”


Why LLM applications need better memory management

Developers assume generative AI-powered tools are improving dynamically—learning from mistakes, refining their knowledge, adapting. But that’s not how it works. Large language models (LLMs) are stateless by design. Each request is processed in isolation unless an external system supplies prior context. That means “memory” isn’t actually built into the model—it’s layered on top, often imperfectly. ... Some LLM applications have the opposite problem—not forgetting too much, but remembering the wrong things. Have you ever told ChatGPT to “ignore that last part,” only for it to bring it up later anyway? That’s what I call “traumatic memory”—when an LLM stubbornly holds onto outdated or irrelevant details, actively degrading its usefulness. ... To build better LLM memory, applications need: Contextual working memory: Actively managed session context with message summarization and selective recall to prevent token overflow. Persistent memory systems: Long-term storage that retrieves based on relevance, not raw transcripts. Many teams use vector-based search (e.g., semantic similarity on past messages), but relevance filtering is still weak. Attentional memory controls: A system that prioritizes useful information while fading outdated details. Without this, models will either cling to old data or forget essential corrections.
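As a sketch of the "contextual working memory" idea above (summarization plus selective recall to avoid token overflow), the snippet below assumes a hypothetical summarize function backed by whatever model the application already uses; it is an illustration of the pattern, not a specific product's memory system.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class SessionMemory:
    """Working memory: keep recent turns verbatim, fold older ones into a summary."""

    # Hypothetical LLM-backed summarizer, injected by the caller:
    # summarize(previous_summary, old_turns) -> new_summary
    summarize: Callable[[str, List[Tuple[str, str]]], str]
    max_recent: int = 10
    summary: str = ""
    recent: List[Tuple[str, str]] = field(default_factory=list)

    def add_turn(self, role: str, text: str) -> None:
        self.recent.append((role, text))
        if len(self.recent) > self.max_recent:
            overflow = self.recent[: -self.max_recent]
            self.recent = self.recent[-self.max_recent:]
            # Selective recall: compress old turns instead of silently dropping them.
            self.summary = self.summarize(self.summary, overflow)

    def build_context(self) -> str:
        turns = "\n".join(f"{role}: {text}" for role, text in self.recent)
        return (
            f"Conversation summary so far:\n{self.summary}\n\n"
            f"Recent turns:\n{turns}"
        )
```

Persistent memory and attentional controls would layer on top of this, retrieving long-term facts by relevance and letting corrections override older, outdated details.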


DARPA’s Quantum Benchmarking Initiative: A Make-or-Break for Quantum Computing

While the hype around quantum computing is certainly warranted, it is often blown out of proportion. This arises occasionally due to a lack of fundamental understanding of the field. However, more often, this is a consequence of corporations obfuscating or misrepresenting facts to influence the stock market and raise capital. ... If it becomes practically applicable, quantum computing will bring a seismic shift in society, completely transforming areas such as medicine, finance, agriculture, energy, and the military, to name a few. Nonetheless, this enormous potential has resulted in rampant hype around it, while concomitantly resulting in the proliferation of bad actors seeking to take advantage of a technology not necessarily well understood by the general public. On the other hand, negativity around the technology can also cause the pendulum to swing in the other direction. ... Quantum computing is at a critical juncture. Whether it reaches its promised potential or disappears into the annals of history, much like its many preceding technologies, will be decided in the coming years. As such, a transparent and sincere approach in quantum computing research leading to practically useful applications will inspire confidence among the masses, while false and half-baked claims will deter investments in the field, eventually leading to its inevitable demise.


The reality check every CIO needs before seeking a board seat

“CIOs think technology will get them to the boardroom,” says Shurts, who has served on multiple public- and private-company boards. “Yes, more boards want tech expertise, but you have to provide the right knowledge, breadth, and depth on topics that matter to their businesses.” ... Herein lies another conundrum for CIOs seeking spots on boards. Many see those findings and think they can help with that. But the context is more important. “In your operational role as a CIO, you’re very much involved in the details, solving problems every day,” Zarmi says. “On the board, you don’t solve the problems. You help, coach, mentor, ask questions, make suggestions, and impart wisdom, but you’re not responsible for execution.” That’s another change IT leaders need to make to position themselves for board seats. Luckily, there are tools that can help them make the leap. Quinlan, for example, got a certification from the National Association of Corporate Directors (NACD), which offers a variety of resources for aspiring board members. And he took it a few steps further by attaining a financial certification. Sure, he’d been involved in P&L management, but the certification helped him understand finance at the board’s altitude. He also added a cybersecurity certification even though he runs multi-hundred-million-dollar cyber programs. “Right, but I haven’t run it at the board, and I wanted to do that,” he says.


Applying the OODA Loop to Solve the Shadow AI Problem

Organizations should have complete visibility of their AI model inventory. Inconsistent network visibility arising from siloed networks, a lack of communication between security and IT teams, and point solutions encourages shadow AI. Complete network visibility must therefore become the priority for organizations to clearly see the extent and nature of shadow AI in their systems, thus promoting compliance, reducing risk, and promoting responsible AI use without hindering innovation. ... Organizations need to identify the effect of shadow AI once it has been discovered. This includes identifying the risks and advantages of such shadow software. ... Organizations must set clearly defined yet flexible policies regarding the acceptable use of AI to enable employees to use AI responsibly. Such policies need to allow granular control from binary approval to more sophisticated levels like providing access based on users’ role and responsibility, limiting or enabling certain functionalities within an AI tool, or specifying data-level approvals where sensitive data can be processed only in approved environments. ... Organizations must evaluate and formally incorporate shadow AI tools offering substantial value to ensure their use in secure and compliant environments. Access controls need to be tightened to avoid unapproved installations; zero trust and privilege management policies can assist in this regard. 


Cisco Pulls Together A Quantum Network Architecture

It will take a quantum network infrastructure to make a distributed quantum computing environment possible and allow it to scale more quickly beyond the relatively small number of qubits found in current and near-future systems, Cisco scientists wrote in a research paper. Such quantum datacenters involve “multiple QPUs [quantum processing units] … networked together, enabling a distributed architecture that can scale to meet the demands of large-scale quantum computing,” they wrote. “Ultimately, these quantum data centers will form the backbone of a global quantum network, or quantum internet, facilitating seamless interconnectivity on a planetary scale.” ... The entanglement chip will be central to an entire quantum datacenter the vendor is working toward, with new versions of what is found in current classical networks, including switches and NICs. “A quantum network requires fundamentally new components that work at the quantum mechanics level,” they wrote. “When building a quantum network, we can’t digitize information as in classical networks – we must preserve quantum properties throughout the entire transmission path. This requires specialized hardware, software, and protocols unlike anything in classical networking.”

Daily Tech Digest - April 04, 2025


Quote for the day:

“Going into business for yourself, becoming an entrepreneur, is the modern-day equivalent of pioneering on the old frontier.” -- Paula Nelson



Hyperlight Wasm points to the future of serverless

WebAssembly support significantly expands the range of supported languages for Hyperlight, ensuring that compiled languages as well as interpreted ones like JavaScript can be run on a micro VM. Your image does get more complex here, as you need to bundle an additional runtime in the Hyperlight image, along with writing code that loads both runtime and application as part of the launch process. ... There’s a lot of work going on in the WebAssembly community to define a specification for a component model. This is intended to be a way to share binaries and libraries, allowing code to interoperate easily. The Hyperlight Wasm tool offers the option of compiling a development branch with support for WebAssembly Components, though it’s not quite ready for prime time. In practice, this will likely be the basis for any final build of the platform, as the specification is being driven by the main WebAssembly platforms. One point that Microsoft makes is that Wasm isn’t only language-independent, it’s architecture-independent, working against a minimal virtual machine. So, code written and developed on an x64 architecture system will run on Arm64 and vice versa, ensuring portability and allowing service providers to move applications to any spare capacity, no matter the host virtual machine.


Beyond SIEM: Embracing unified XDR for smarter security

Implementing SIEM solutions can be challenging and has to be managed proactively. Configuring the SIEM system can be very complex, and any error can lead to false positives or missed threats. Integrating SIEM tools with existing security tools and systems is not easy. The implementation and maintenance processes are also resource-intensive and require significant time and manpower. Alert fatigue can set in with traditional SIEM platforms, where the sheer number of alerts generated makes it difficult to identify the genuine ones. ... For industries with stringent compliance requirements, such as finance and healthcare, SIEM remains a necessity due to its log retention, compliance reporting, and event correlation capabilities. Microsoft Sentinel’s AI-driven analytics help security teams fine-tune alerts, reducing false positives and increasing threat detection accuracy. The Microsoft Defender XDR platform offers unified visibility across attack surfaces, CTEM exposure management, CIS framework assessment, Zero Trust, EASM, AI-driven automated response to threats, integrated security across Microsoft 365 and third-party platforms (Office, email, data, CASB, endpoint, identity), and reduced complexity by eliminating the need for custom configurations.


Compliance Without Chaos: Build Resilient Digital Operations

A unified platform makes service ownership a no-brainer by directly connecting critical services to the right responders so there’s no scrambling when things go sideways. Teams can set up services quickly and at scale, making it easier to get a real-time pulse on system health and see just how far the damage spreads when something breaks. Instead of chasing down data across a dozen monitoring tools, everything is centralized in one place for easy analysis. ... With all data centralized in a unified platform, the classification and reporting of incidents is far easier with accessible and detailed incident logs that provide a clear audit trail. Sophisticated platforms also integrate with IT service management (ITSM) and IT operations (ITOps) tools to simplify the reporting of incidents based on predefined criteria. ... Every incident, both real and simulated, should be viewed as a learning opportunity. Aggregating data from disparate tools into a single location gives teams a full picture of how their organization’s operations have been affected and supplies a narrative for reporting. Teams can then uncover patterns across tools, teams and time to drive continuous learning in post-incident reviews. Coupled with regular, automated testing of disaster recovery runbooks, teams can build greater confidence in their system’s resilience.


How Organizations Can Benefit From Intelligent Data Infra

The first is getting your enterprise data AI-ready. Predictive AI has been around for a long time. But teams still spend a significant amount of time identifying and cleaning data, which involves handling ETL pipelines, transformations and loading data into data lakes. This is the most expensive step. The same process applies to unstructured data in generative AI. But organizations still need to identify the files and object streams that need to be a part of the training datasets. Organizations need to securely bring them together and load them into feature stores. That's our approach to data management. ... There's a lot of intelligence tied to files and objects. Without that, they will continue to be seen as simple storage entities. With embedded intelligence, you get detection capabilities that let you see what's inside a file and when it was last modified. For instance, if you create embeddings from a PDF file and vectorize them, imagine doing the same for millions of files, which is typical in AI training. This consumes significant computing resources. You don't want to spend compute resources while recreating embeddings on a million files every time there is a modification to the files. Metadata allows us to track changes and only reprocess the files that have been modified. This differential approach optimizes compute cycles.
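The differential reprocessing described here can be sketched with a simple content-hash check. This is an illustrative assumption of the idea, not the vendor's implementation: a production system would keep this metadata in the storage layer itself rather than in a local JSON file, but the principle of re-embedding only what changed is the same.

```python
import hashlib
import json
from pathlib import Path
from typing import Iterable, List

STATE_FILE = Path("embedding_state.json")  # stand-in for a real metadata store


def fingerprint(path: Path) -> str:
    """Content hash acts as the change-detection metadata for a file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def files_needing_reembedding(paths: Iterable[Path],
                              state_file: Path = STATE_FILE) -> List[Path]:
    """Return only files whose content changed since the last embedding run."""
    state = json.loads(state_file.read_text()) if state_file.exists() else {}
    changed = []
    for path in paths:
        digest = fingerprint(path)
        if state.get(str(path)) != digest:
            changed.append(path)
            state[str(path)] = digest
    state_file.write_text(json.dumps(state, indent=2))
    return changed
```

Across millions of files, skipping the unchanged ones is where the compute savings the interviewee describes actually come from.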


Tariff war throws building of data centers into disarray

The potentially biggest variable affecting data center strategy is timing. Depending on the size of an enterprise data center and its purpose, it could take as little as six months to build, or as much as three years. Planning for a location is daunting when ever-changing tariffs and retaliatory tariffs could send costs soaring. Another critical element is knowing when those tariffs will take effect, a data point that has also been changing. Some enterprises are trying to sidestep the tariff issues by purchasing components in bulk, in enough quantities to potentially last a few years. ... “It’s not only space, available energy, cooling, and water resources, but it’s also a question of proximity to where the services are going to be used,” Nguyen said. Finding data center personnel, Nguyen said, is becoming less of an issue, thanks to the efficiencies gained through automation. “The level of automation available means that although personnel costs can be a bit more [in different countries], the efficiencies used means that [hiring people] won’t be the drag that it used to be,” he said. Given the vast amount of uncertainty, enterprise IT leaders wrestling with data center plans have some difficult decisions to make, mostly because they will have to guess where the tariff wars will be many months or years in the future, a virtually impossible task.


The Modern Data Architecture: Unlocking Your Data's Full Potential

If the Data Cloud is your engine, the CDP is your steering wheel—directing that power where it needs to go, precisely when it needs to get there. True real-time CDPs have the ability to transform raw data into immediate action across your entire technology ecosystem, with an event-based architecture that responds to customer signals in milliseconds rather than minutes. This ensures you can dynamically personalize experiences as they unfold—whether during a website visit, mobile app session, or contact center interaction–all while honoring consent. ... As AI capabilities evolve, this Intelligence Layer becomes increasingly autonomous—not just providing recommendations but taking appropriate actions based on pre-defined business rules and learning from outcomes to continuously improve its performance. ... The Modern Data Architecture serves as the foundation for truly intelligent customer experiences by making AI implementations both powerful and practical. By providing clean, unified data at scale, these architectures enable AI systems to generate more accurate predictions, more relevant recommendations, and more natural conversational experiences. Rather than creating isolated AI use cases, forward-thinking organizations are embedding intelligence throughout the customer journey. 


Why AI therapists could further isolate vulnerable patients instead of easing suffering

While chatbots can be programmed to provide some personalised advice, they may not be able to adapt as effectively as a human therapist can. Human therapists tailor their approach to the unique needs and experiences of each person. Chatbots rely on algorithms to interpret user input, but miscommunication can happen due to nuances in language or context. For example, chatbots may struggle to recognise or appropriately respond to cultural differences, which are an important aspect of therapy. A lack of cultural competence in a chatbot could alienate and even harm users from different backgrounds. So while chatbot therapists can be a helpful supplement to traditional therapy, they are not a complete replacement, especially when it comes to more serious mental health needs. ... The talking cure in psychotherapy is a process of fostering human potential for greater self-awareness and personal growth. These apps will never be able to replace the therapeutic relationship developed as part of human psychotherapy. Rather, there’s a risk that these apps could limit users’ connections with other humans, potentially exacerbating the suffering of those with mental health issues – the opposite of what psychotherapy intends to achieve.


Breaking Barriers in Conversational BI/AI with a Semantic Layer

The push for conversational BI was met with adoption inertia. Two major challenges have hindered its potential—the accuracy of the data insights and the speed at which the interface could provide the answers that were sought. This can be attributed to the inherent complexity of data architecture, which involves fragmented data in disparate systems with varying definitions, formats, and contexts. Without a unified structure, even the most advanced AI models risk delivering contextually irrelevant, inconsistent, or inaccurate results. Moreover, traditional data pipelines are not designed for instantaneous query resolution and resolving data from multiple tables, which delays responses. ... Large language models (LLMs) like GPT excel at interpreting natural language but lack the domain-specific knowledge of a data set. A semantic layer can resolve this challenge by acting as an intermediary between raw data and the conversational interface. It unifies data into a consistent, context-aware model that is comprehensible to both humans and machines. Retrieval-augmented generation (RAG) techniques are employed to combine the generative power of LLMs with the retrieval capabilities of structured data systems. 
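A minimal sketch of the retrieval-augmented pattern described here, treating the semantic layer's governed definitions as the retrieval corpus. The embed and generate callables and the catalog structure are assumptions standing in for whatever models and metadata a team actually uses.

```python
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def answer_with_rag(question, embed, generate, catalog, k=3):
    """
    embed(text) -> vector and generate(prompt) -> text are injected placeholders
    for real model calls; `catalog` is a list of (definition_text, vector) pairs
    exported from the semantic layer.
    """
    q_vec = np.asarray(embed(question))
    ranked = sorted(
        catalog,
        key=lambda item: cosine(q_vec, np.asarray(item[1])),
        reverse=True,
    )
    # Ground the model in the top-k governed definitions, not raw tables.
    context = "\n".join(text for text, _ in ranked[:k])
    prompt = (
        "Answer the question using only the governed definitions below.\n"
        f"Definitions:\n{context}\n\n"
        f"Question: {question}"
    )
    return generate(prompt)
```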


The rise of AI PCs: How businesses are reshaping their tech to keep up

Companies are discovering that if they want to take full advantage of AI and run models locally, they need to upgrade their employees' laptops. This realization has introduced a hardware revolution, with the desire to update tech shifting from an afterthought to a priority and attracting significant investment from companies. ... running models locally gives organizations more control over their information and reduces reliance on third-party services. That setup is crucial for companies in financial services, healthcare, and other industries where privacy is a big concern or a regulatory requirement. "For them, on-device AI computer, it's not a nice to have; it's a need to have for fiduciary and HIPAA reasons, respectively," said Mike Bechtel, managing director and the chief futurist at Deloitte Consulting LLP. Another advantage is that local running reduces lag and creates a smoother user experience, which is especially valuable for optimizing business applications. ... As more companies get in on the action and AI-capable computers become ubiquitous, the premium price of AI PCs will continue to drop. Furthermore, Flower said the potential gains in performance offset any price differences. "In those high-value professions, the productivity gain is so significant that whatever small premium you're paying for that AI-enhanced device, the payback will be nearly immediate," said Flower.


Many CIOs operate within a culture of fear

The culture of fear often stems from a few roots, including a lack of accountability from employees who don’t understand their roles, and mistrust of coworkers and management, says Alex Yarotsky, CTO at Hubstaff, vendor of a time tracking and workforce management tool. In both cases, company leadership is to blame. Good leaders create a positive culture laid out in a set of rules and guidelines for employees to follow, and then model those actions themselves, Yarotsky says. “Any case of misunderstanding or miscommunication is always on the management because the management is the force in the company that sets the rules and drives the culture,” he adds. ... Such a culture often starts at the top, says Jack Allen, CEO and chief Salesforce architect at ITequality, a Salesforce consulting firm. Allen experienced this scenario in the early days of building a career, suggesting the problems may be bigger than the survey respondents indicate. “If the leader is unwilling to admit mistakes or punishes mistakes in an unfair way, then the next layer of leadership will be afraid to admit mistakes as well,” Allen says. ... Cultivating a culture of fear leads to several problems, including an inability to learn from mistakes, Mort says. “Organizations that do the best are those that value learning and highlight incidents as valuable learning events,” he says.

Daily Tech Digest - February 21, 2025


Quote for the day:

“The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things.” -- Ronald Reagan


Rethinking Network Operations For Cloud Repatriation

Repatriation introduces significant network challenges, further amplified by the adoption of disruptive technologies like SDN, SD-WAN, SASE and the rapid integration of AI/ML, especially at the edge. While beneficial, these technologies add complexity to network management, particularly in areas such as traffic routing, policy enforcement, and handling the unpredictable workloads generated by AI. ... Managing a hybrid environment spanning on-premises and public cloud resources introduces inherent complexity. Network teams must navigate diverse technologies, integrate disparate tools and maintain visibility across a distributed infrastructure. On-premises networks often lack the dynamic scalability and flexibility of cloud environments. Absorbing repatriated workloads further complicates existing infrastructure, making monitoring and troubleshooting more challenging. ... Repatriated workloads introduce potential security vulnerabilities if not seamlessly integrated into existing security frameworks. On-premises security stacks not designed for the increased traffic volume previously handled by SASE services can introduce latency and performance bottlenecks. Adjustments to SD-WAN routing and policy enforcement may be necessary to redirect traffic to on-premises security resources.


For the AI era, it’s time for BYOE: Bring Your Own Ecosystem

We can no longer limit user access to one or two devices — we must address the entire ecosystem. Instead of forcing users down a single, constrained path, security teams need to acknowledge that users will inevitably venture into unsafe territory, and focus on strengthening the security of the broader environment. In 2015, we as security practitioners could get by with placing “do not walk on the grass” signs and ushering users down manicured pathways. In 2025, we need to create more resilient grass. ... The risk extends beyond basic access. Forty percent of employees download customer data to personal devices, while 33% alter sensitive data, and 31% approve large financial transactions. And, most alarmingly, 63% use personal accounts on their work laptops — most commonly Google — to share work files and create documents, effectively bypassing email filtering and data loss prevention (DLP) systems. ... Browser-based access exposes users to risks from malicious plugins, extensions and post-authentication compromise, while the increasing reliance on SaaS applications creates opportunities for supply chain attacks. Personal accounts serve as particularly vulnerable entry points, allowing threat actors to leverage compromised credentials or stolen authentication tokens to infiltrate corporate networks.


DARPA continues work on technology to combat deepfakes

The rapid evolution of generative AI presents a formidable challenge in the arms race between deepfake creators and detection technologies. As AI-driven content generation becomes more sophisticated, traditional detection mechanisms risk becoming obsolete quickly. Deepfake detection relies on training machine learning models on large datasets of genuine and manipulated media, but the scarcity of diverse and high-quality datasets can impede progress. Limited access to comprehensive datasets has made it difficult to develop robust detection systems that generalize across various media formats and manipulation techniques. To address this challenge, DARPA places a strong emphasis on interdisciplinary collaboration. By partnering with institutions such as SRI International and PAR Technology, DARPA leverages cutting-edge expertise to enhance the capabilities of its deepfake detection ecosystem. These partnerships facilitate the exchange of knowledge and technical resources that accelerate the refinement of forensic tools. DARPA’s open research model also allows diverse perspectives to converge, fostering rapid innovation and adaptation in response to emerging threats. Deepfake detection also faces significant computational challenges. Training deep neural networks to recognize manipulated media requires extensive processing power and large-scale data storage.
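As a rough illustration of the underlying approach (training a classifier on labeled genuine and manipulated media), the sketch below fine-tunes a small off-the-shelf CNN for a binary real/fake decision. This is not DARPA's tooling; the folder layout, model choice, and hyperparameters are assumptions made purely for illustration.

# Illustrative sketch: fine-tune a small CNN to label images as genuine or
# manipulated. Assumes a folder layout like data/train/real and data/train/fake.

import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder treats each subdirectory as one class: real/ vs fake/.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: genuine / manipulated

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

The dataset scarcity problem described above bites exactly here: a classifier like this only generalizes as well as the variety of manipulation techniques represented in its training folders.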


AI Agents: Future of Automation or Overhyped Buzzword?

AI agents are not just an evolution of AI; they are a fundamental shift in IT operations and decision-making. These agents are being increasingly integrated into Predictive AIOps, where they autonomously manage, optimize, and troubleshoot systems without human intervention. Unlike traditional automation, which follows pre-defined scripts, AI agents dynamically predict, adapt, and respond to system conditions in real time. ... AI agents are transforming IT management and operational resilience. Instead of just replacing workflows, they now optimize and predict system health, automatically mitigating risks and reducing downtime. Whether it's self-repairing IT infrastructure, real-time cybersecurity monitoring, or orchestrating distributed cloud environments, AI Agents are pushing technology toward self-governing, intelligent automation. ... The future of AI agents is both thrilling and terrifying. Companies are investing in large action models — next-gen AI that doesn’t just generate text but actually does things. We’re talking about AI that can manage entire business processes or run a company’s operations without human intervention. ... AI agents aren’t just another tech buzzword — they represent a fundamental shift in how AI interacts with the world. Sure, we’re still in the early days, and there’s a lot of fluff in the market, but make no mistake: AI agents will change the way we work, live, and do business.
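To make the contrast with scripted automation concrete, here is a toy sense-predict-act loop of the kind an AIOps-style agent might run. The metric source, the naive trend forecast, the thresholds, and the remediation actions are all invented for illustration; a real agent would plug in a monitoring API and a trained model.

# Toy sketch of the sense -> predict -> act loop behind an AIOps-style agent,
# as opposed to a fixed script that only reacts after a threshold is breached.

import random
from collections import deque

history = deque(maxlen=12)  # rolling window of recent latency samples (ms)

def read_latency() -> float:
    """Stand-in for a real metrics API (e.g. a monitoring system)."""
    return random.gauss(200, 40)

def predict_next(window) -> float:
    """Naive forecast: extrapolate the recent trend one step ahead.
    A real agent would use a trained model instead of a linear guess."""
    if len(window) < 2:
        return window[-1] if window else 0.0
    trend = (window[-1] - window[0]) / (len(window) - 1)
    return window[-1] + trend

def act(predicted: float) -> str:
    """Choose a remediation before the problem materializes."""
    if predicted > 400:
        return "scale out: add one replica"
    if predicted > 300:
        return "warn: open a low-priority incident"
    return "no action"

for step in range(20):
    history.append(read_latency())
    forecast = predict_next(history)
    print(f"step {step:02d} forecast={forecast:6.1f}ms -> {act(forecast)}")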


Optimizing Cloud Security: Managing Sprawl, Technical Debt, and Right-Sizing Challenges

Technical debt is the implied cost of future IT infrastructure rework caused by choosing expedient IT solutions like shortcuts, software patches or deferred IT upgrades over long-term, sustainable designs. It is easily accrued under pressure to innovate quickly, but it leads to waste, security gaps, and vulnerabilities that compromise an organization’s integrity, making systems more susceptible to cyber threats. Technical debt can also be costly to eradicate, with companies spending an average of 20-40% of their IT budgets on addressing it. ... Cloud sprawl refers to the uncontrolled proliferation of cloud services, instances, and resources within an organization. It often results from rapid growth, lack of visibility, and decentralized decision-making. At Surveil, we have over 2.5 billion data points to lean on to identify trends, and we know that organizations with unmanaged cloud environments can see up to 30% higher cloud costs due to redundant and idle resources. Unchecked cloud sprawl can lead to increased security vulnerabilities due to unmanaged and unmonitored resources. ... Right-sizing involves aligning IT resources precisely with the demands of applications or workloads to optimize performance and cost. Our data shows that organizations that effectively right-size their IT estate can reduce cloud costs by up to 40%, unlocking business value to invest in other business priorities.
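A minimal sketch of the right-sizing idea follows: compare provisioned capacity with observed utilization and flag idle or over-provisioned resources. The inventory records and thresholds below are made up for illustration and are not any vendor's actual recommendation logic.

# Hedged sketch of right-sizing: flag idle or over-provisioned instances
# based on utilization data, so capacity tracks actual workload demand.

instances = [
    {"name": "web-01",   "vcpus": 8,  "avg_cpu_pct": 11, "days_idle": 0},
    {"name": "batch-07", "vcpus": 16, "avg_cpu_pct": 3,  "days_idle": 45},
    {"name": "db-02",    "vcpus": 4,  "avg_cpu_pct": 62, "days_idle": 0},
]

def recommend(inst: dict) -> str:
    if inst["days_idle"] > 30:
        return "retire (idle for more than 30 days)"
    if inst["avg_cpu_pct"] < 15 and inst["vcpus"] > 2:
        return f"downsize to {max(2, inst['vcpus'] // 2)} vCPUs"
    return "keep as is"

for inst in instances:
    print(f"{inst['name']:8s} -> {recommend(inst)}")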


How businesses can avoid a major software outage

Software bugs and bad code releases are common culprits behind tech outages. These issues can arise from errors in the code, insufficient testing, or unforeseen interactions among software components. Moreover, the complexity of modern software systems exacerbates the risk of outages. As applications become more interconnected, the potential for failures increases. A seemingly minor bug in one component can have far-reaching consequences, potentially bringing down entire systems or services. ... The impact of backup failures can be particularly devastating as they often come to light during already critical situations. For instance, a healthcare provider might lose access to patient records during a primary system failure, only to find that their backup data is incomplete or corrupted. Such scenarios underscore the importance of not just having backup systems, but ensuring they are fully functional, up-to-date, and capable of meeting the organization's recovery needs. ... Human error remains one of the leading causes of tech outages. This can include mistakes made during routine maintenance, misconfigurations, or accidental deletions. In high-pressure environments, even experienced professionals can make errors, especially when dealing with complex systems or tight deadlines.


Serverless was never a cure-all

Serverless architectures were originally promoted as a way for developers to rapidly deploy applications without the hassle of server management. The allure was compelling: no more server patching, automatic scalability, and the ability to focus solely on business logic while lowering costs. This promise resonated with many organizations eager to accelerate their digital transformation efforts. Yet many organizations adopted serverless solutions without fully understanding the implications or trade-offs. It became evident that while server management may have been alleviated, developers faced numerous complexities. ... The pay-as-you-go model appears attractive for intermittent workloads, but it can quickly spiral out of control if an application operates under unpredictable traffic patterns or contains many small components. The requirement for scalability, while beneficial, also necessitates careful budget management—this is a challenge if teams are unprepared to closely monitor usage. ... Locating the root cause of issues across multiple asynchronous components becomes more challenging than in traditional, monolithic architectures. Developers often spent the time they saved from server management struggling to troubleshoot these complex interactions, undermining the operational efficiencies serverless was meant to provide.
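To see why the pay-as-you-go model can surprise teams, here is a back-of-the-envelope cost sketch: spend scales with invocations, duration, and memory, so spiky traffic across many small functions adds up quickly. The rates below are illustrative only, not any provider's actual price list.

# Rough cost model for a pay-per-use function: invocations x duration x memory.
# Rates are assumptions for illustration, not a real provider's pricing.

PRICE_PER_GB_SECOND = 0.0000167    # assumed compute rate
PRICE_PER_MILLION_REQUESTS = 0.20  # assumed request rate

def monthly_cost(invocations: int, avg_ms: float, memory_gb: float) -> float:
    gb_seconds = invocations * (avg_ms / 1000) * memory_gb
    return (gb_seconds * PRICE_PER_GB_SECOND
            + invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS)

# Steady, modest traffic versus a spike spread across many small functions.
print(f"steady: ${monthly_cost(2_000_000, 120, 0.5):,.2f}")
print(f"spiky : ${monthly_cost(60_000_000, 300, 1.0):,.2f}")

The same shape of calculation, repeated over dozens of small components with unpredictable traffic, is what makes serverless budgeting harder than a fixed server bill.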


AI Is Improving Medical Monitoring and Follow-Up

Artificial intelligence technologies have shown promise in managing some of the worst inefficiencies in patient follow-up and monitoring. From automated scheduling and chatbots that answer simple questions to review of imaging and test results, a range of AI technologies promise to streamline unwieldy processes for both patients and providers. ... Adherence to medication regimens is essential for many health conditions, both in the wake of acute health events and over time for chronic conditions. AI programs can both monitor whether patients are taking their medication as prescribed and urge them to do so with programmed notifications. Feedback gathered by these programs can indicate the reasons for non-adherence and help practitioners to devise means of addressing those problems. ... Using AI to monitor the vital signs of patients suffering from chronic conditions may help to detect anomalies -- and indicate adjustments that will stabilize them. Regularly tracking key health indicators such as blood pressure, blood sugar, and respiration can establish a baseline and flag fluctuations that require follow-up treatment, drawing on a patient's personal and demographic data, such as age and sex, and comparing it with available data on similar patients.
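A simplified sketch of the baseline-and-flag idea behind such monitoring: establish a patient's typical range from recent readings, then flag values that drift outside it. The readings and the three-sigma threshold below are invented; a real system would rely on clinically validated limits and demographic reference data rather than a toy statistical rule.

# Toy baseline-and-flag check for a single vital sign (systolic blood pressure).

from statistics import mean, stdev

systolic = [118, 121, 119, 123, 120, 117, 122, 119, 121, 120]  # baseline readings

baseline = mean(systolic)
spread = stdev(systolic)

def flag(reading: float, k: float = 3.0) -> bool:
    """Flag a reading more than k standard deviations from the baseline."""
    return abs(reading - baseline) > k * spread

for value in (122, 138, 96):
    status = "follow up" if flag(value) else "within baseline"
    print(f"systolic {value}: {status}")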


IT infrastructure complexity hindering cyber resilience

Given the rapid evolution of cyber threats and continuous changes in corporate IT environments, failing to update and test resilience plans can leave businesses exposed when attacks or major outages occur. The importance of integrating cyber resilience into a broader organizational resilience strategy cannot be overstated. With cybersecurity now fundamental to business operations, it must be considered alongside financial, operational, and reputational risk planning to ensure continuity in the face of disruptions. ... Leaders also expect to face adversity in the near future with 60% anticipating a significant cybersecurity failure within the next six months, which reflects the sheer volume of cyber attacks as well as a growing recognition that cloud services are not immune to disruptions and outages. ... First and most importantly, it removes IT and cybersecurity complexity–the key impediment to enhancing cyber resilience. Eliminating traditional security dependencies such as firewalls and VPNs not only reduces the organization’s attack surface, but also streamlines operations, cuts infrastructure costs, and improves IT agility. ... The second big win is the inability of attackers to move laterally should a compromise at an endpoint occur. Users are verified and given the lowest privileges necessary each time they access a corporate resource, meaning ransomware and other data-stealing threats are far less of a concern.


Is subscription-based networking the future?

There are several factors making NaaS an attractive proposition. One of the most significant is the growing demand for flexibility. Traditional networking models often require upfront investments and long-term commitments, which are restrictive for organisations that need to scale their infrastructure quickly or adapt to changing needs. In contrast, a subscription model allows businesses to pay only for what they use, making it easier to adjust capacity and features as needed. Cost efficiency is another big driver. With networking delivered as a service, organisations can move away from large capital expenditures toward predictable, operational costs. This helps IT teams manage budgets more effectively while reducing the need to maintain and upgrade hardware. It also enables companies to access new technologies without costly refresh cycles. Security and compliance are becoming increasingly complex, especially for companies handling sensitive data. NaaS solutions often come with built-in security updates, compliance tools, and proactive monitoring, helping businesses stay ahead of emerging threats. Instead of managing security in-house, IT teams can rely on service providers to ensure their networks remain protected and up to date. Additionally, the rise of cloud computing and hybrid work has accelerated the need for more agile and scalable networking solutions.