Daily Tech Digest - February 26, 2025


Quote for the day:

“Happiness is a butterfly, which when pursued, is always beyond your grasp, but which, if you will sit down quietly, may alight upon you.” -- Nathaniel Hawthorne


Deep dive into Agentic AI stack

The Tool / Retrieval Layer forms the backbone of an intelligent agent’s ability to gather, process, and apply knowledge. It enables the agent to retrieve relevant information from diverse data sources, ensuring it has the necessary context to make informed decisions and execute tasks effectively. By integrating various databases, APIs, and knowledge structures, this layer acts as a bridge between raw data and actionable intelligence, equipping the agent with a robust understanding of its environment. ... The Action / Orchestration Layer is a critical component in an intelligent agent’s architecture, responsible for transforming insights and understanding into concrete, executable actions. It serves as the bridge between perception and execution, ensuring that workflows are effectively managed, tasks are executed efficiently, and system interactions remain seamless. This layer must handle the complexity of decision-making, automation, and resource coordination while maintaining adaptability to dynamic conditions. ... The Reasoning Layer is where the agent’s cognitive processes take place, enabling it to analyse data, understand context, draw inferences, and make informed decisions. This layer bridges raw data retrieval and actionable execution by leveraging advanced AI models and structured reasoning techniques. 
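To make the three layers concrete, here is a minimal Python sketch of how they might fit together; the class and method names, and the llm_complete stand-in for a model call, are illustrative assumptions rather than anything prescribed by the article.

# Minimal sketch of the three layers described above. All names are
# illustrative; `llm_complete` stands in for whatever model API an agent uses.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ToolRetrievalLayer:
    """Bridges raw data sources (databases, APIs, vector stores) and the agent."""
    sources: Dict[str, Callable[[str], List[str]]] = field(default_factory=dict)

    def retrieve(self, query: str) -> List[str]:
        # Gather context snippets from every registered source.
        return [snippet for fetch in self.sources.values() for snippet in fetch(query)]


@dataclass
class ReasoningLayer:
    """Turns retrieved context into a decision (here, a single model call)."""
    llm_complete: Callable[[str], str]

    def decide(self, task: str, context: List[str]) -> str:
        prompt = f"Task: {task}\nContext:\n" + "\n".join(context) + "\nNext action:"
        return self.llm_complete(prompt)


@dataclass
class OrchestrationLayer:
    """Maps decisions onto concrete, executable actions (tools, workflows)."""
    actions: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def execute(self, decision: str) -> str:
        name, _, argument = decision.partition(":")
        handler = self.actions.get(name.strip(), lambda arg: f"no handler for {name}")
        return handler(argument.strip())


def run_agent(task: str, retrieval: ToolRetrievalLayer,
              reasoning: ReasoningLayer, orchestration: OrchestrationLayer) -> str:
    context = retrieval.retrieve(task)           # Tool / Retrieval Layer
    decision = reasoning.decide(task, context)   # Reasoning Layer
    return orchestration.execute(decision)       # Action / Orchestration Layer


if __name__ == "__main__":
    retrieval = ToolRetrievalLayer({"kb": lambda q: [f"note about {q}"]})
    reasoning = ReasoningLayer(llm_complete=lambda prompt: "lookup: order status")
    orchestration = OrchestrationLayer({"lookup": lambda arg: f"looked up {arg}"})
    print(run_agent("check order status", retrieval, reasoning, orchestration))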


AI Hijacked: New Jailbreak Exploits Chain-of-Thought

Several current AI models use chain-of-thought reasoning, an AI technique that helps large language models solve problems by breaking them down into a series of logical steps. The process aims to improve performance and safety by enabling the AI to verify its outputs. But "reasoning" also exposes a new attack surface, allowing adversaries to manipulate the AI's safety mechanisms. A research team comprising experts from Duke University, Accenture and Taiwan's National Tsing Hua University, found a vulnerability in how the models processed and displayed their reasoning. They developed a dataset called Malicious-Educator to test the vulnerability, designing prompts that tricked the models into overriding their built-in safety checks. These adversarial prompts exploited the AI's intermediate reasoning process, which is often displayed in user interfaces. ... The researchers acknowledged that they could be facilitating further jailbreaking attacks by publishing the Malicious-Educator dataset but argued that studying these vulnerabilities openly is necessary to develop stronger AI safety measures. A key distinction in this research is its focus on cloud-based models. AI models running in the cloud often include hidden safety filters that block harmful input prompts and moderate output in real-time. Local models lack these automatic safeguards unless users implement them manually. 


What CISOs need from the board: Mutual respect on expectations

The CISO requires specific and sustained support from the board to effectively protect the organization from cyber threats. A strong partnership between the CISO and board is essential for establishing and maintaining robust cybersecurity practices. My favourite saying is one that CISO Robert Veres relayed to me: The board should support the “Red” and challenge the “Green.” This support is exactly what the CISO requires as a foundation. The board must help set the overall strategic direction that aligns with the organization’s risk appetite. This high-level guidance provides the framework within which the CISO can develop and implement security programs. While the CISO establishes the cyber risk culture, they need the board to reinforce this by setting the appropriate tone from the top and ensuring cybersecurity compliance is prioritized across all levels of management and business units. This is a difficult task for some boards, as they may lack a good understanding of the business and of how the technology strategy integrates with it. A critical requirement is for the CISO to have a strong mandate to operate with clear accountability. They need the authority to act and defend the enterprise without excessive interference, allowing them to respond quickly and effectively to emerging threats.


AI-Powered Ransomware Attacks

Integrating artificial intelligence (AI) into cyberattacks is fundamentally changing the threat landscape, creating difficulties for individuals and organizations alike. Historically, cyber threats have been largely manual, depending on the ingenuity and persistence of the attacker. The nature of these threats has evolved as AI has made attacks more automated, adaptable, and practical. AI-based attacks can analyze vast amounts of data to identify weaknesses and launch highly targeted phishing campaigns that spread the latest malware with minimal human intervention. The speed and scale of AI-powered attacks mean that threats can emerge more suddenly than ever before. For instance, AI can automate the reconnaissance and surveillance stages and map targets quickly and precisely. This rapid vulnerability identification allows attackers to exploit weaknesses before they are patched, giving organizations less time to respond. Additionally, AI can create customized malware that constantly evolves to evade detection by traditional security frameworks, making it more difficult to defend against.


AI Factories: Separating Hype From Reality

While the concept is compelling, will we see this wave of AI factories that Jensen is promising? Probably not at scale. AI hardware is not only costly to acquire and operate, but it also doesn’t run continuously like a database server. Once a model is trained, it may not need updates for months, leaving this expensive infrastructure sitting idle. For that reason, Alan Howard, senior analyst at Omdia specializing in infrastructure and data centers, believes most AI hardware deployments will occur in multipurpose data centers. ... AI tech advances rapidly, and keeping up with the competition is prohibitively expensive, Palaniappan added. “When you start looking at how much each of these GPUs cost, and it gets outdated pretty quickly, that becomes a bottleneck,” he said. “If you are trying to leverage a data center, you’re always looking for the latest chip in the facility, so many of these data centers are losing money because of these efforts.” ... In addition to the cost of the GPUs, significant investment is required for networking hardware, as all the GPUs need to communicate with each other efficiently. Tom Traugott, senior vice president of strategy at EdgeCore Digital Infrastructure, explains that in a typical eight-GPU Nvidia DGX system, the GPUs communicate via NVLink.


Overcoming Challenges of IT Integration in Cross-Border M&As

When companies agree to combine, things get complicated, particularly when blending their IT and digital operations. To that end, organizations must carefully outline how they plan to merge their IT departments to overcome associated challenges and avoid expensive disruptions. ... IT is the cornerstone of most multinational corporations. Determining how each merger participant will mesh its systems with the other is significant, particularly because 47% of M&A deals fail because of IT problems. IT due diligence is paramount. Not only does the process help identify priorities and risks beforehand, but it also lets the acquiring company properly evaluate the technical capabilities of the firm it intends to purchase. ... Cross-border M&As are subject to data privacy and compliance regulations that vary significantly across jurisdictions. When assessing an international merger, ensure there aren't any non-compliance risks and that the firm being acquired operates legitimately. Be aware of complex international data and privacy laws. Address any irregularities with a strong compliance strategy and retain expert legal counsel before signing the deal. ... In fact, cultural mismatch is one of the top reasons why M&As fail. 


10 machine learning mistakes and how to avoid them

Addressing biases is crucial to success in the modern AI landscape, Swita says. “Best practices include implementing continuous surveillance, alerting mechanisms, and content filtering to help proactively identify and rectify biased content,” he says. “Through these methodologies, organizations can develop AI frameworks that prioritize validated content.” To resolve bias, organizations need to embrace a dynamic approach that includes continually refining systems to keep pace with rapidly evolving models, Swita says. “Strategies need to be meticulously tailored for combating bias,” he says. ... Machine learning comes with certain legal and ethical risks. Legal risks include discrimination due to model bias, data privacy violations, security leaks, and intellectual property violations. These and other risks can have repercussions for developers and users of machine learning systems. Ethical risks include the potential for harm or exploitation, misuse of data, lack of transparency, and lack of accountability. Decisions based on machine learning algorithms can negatively affect individuals, even if that was not the intent. Swita reiterates the need to anchor models and output on trusted, validated, and regulated data. “By adhering to regulations and standards governing data usage and privacy, organizations can reduce the legal and ethical risks associated with machine learning,” he said.


Beyond the Buzz: What 2025's Tech Trends Mean for CIOs

Stemming from the large-scale deployment of AI is the issue of governance. Organizations need to use AI securely, responsibly and with accountability. A DLA Piper survey showed that 96% of firms using AI find governing AI systems a challenge. Some companies are already at the forefront of providing AI governance solutions. For instance, IBM Watsonx provides AI life cycle governance, risk management and regulatory compliance. Cisco AI Defense offers AI visibility, automated vulnerability scanning and real-time protections for AI assets. ... The rise of deepfakes and countless AI-generated misinformation campaigns have made disinformation security a crucial, non-negotiable imperative for enterprises. Although AI-based detection systems and blockchain-backed verification systems are evolving, they still lag behind the sophistication of adversarial tactics, pushing organizations toward adopting robust detection mechanisms and resilience strategies. ... Application of ambient intelligence in healthcare monitoring and for improving customer experience is already in the works. For instance, in early 2024, Texas-based Houston Methodist forged a partnership with Apella, a startup that uses ambient sensor technology and AI to improve surgical processes in operating rooms. 


AI, automation spur efforts to upskill network pros

By developing skills in network monitoring, performance management, and cost optimization through automation and AI-powered tools, networking pros can become more adept at troubleshooting while offloading repetitive tasks such as copy-pasting configurations. Over time, they can gain the skills to better understand which behaviors and patterns to automate. According to Skillsoft’s Stanger, networking professionals can be challenged in finding the appropriate tasks and workflows to automate. ... “The continuous growth in cloud technologies ensures that cloud computing skills remain in high demand. This includes a thorough understanding of cloud infrastructure and services, which is becoming crucial,” Randstad’s Heins says. “Particularly challenging for companies to find are skills related to cloud service management, especially when combined with AI competencies.” Designing the appropriate network infrastructure, especially for cloud-first and hybrid environments, will be critical for networking pros looking to support sophisticated cloud environments. According to Greg Fuller, chief evangelist and vice president of Skillsoft/Codecademy, cloud computing, in some cases, can lead to complacency in networking as it allows more flexibility to spin up networks quickly.


The future of data security and governance: Why organizations must rethink their strategy

AI is transforming industries, but it’s also introducing new risks. Businesses are racing to integrate AI-powered applications, often without fully understanding the implications for data security and compliance. AI models require vast amounts of data, much of it sensitive, and without proper governance, they can become a significant liability. ... Regulatory bodies worldwide are tightening their grip on data privacy and security. From GDPR and CCPA to emerging AI regulations, the compliance landscape is becoming increasingly complex. Businesses can no longer afford to treat compliance as an afterthought — it must be embedded into every aspect of data management. ... Enterprises now manage data across multiple cloud environments, SaaS applications and third-party vendors. The result? A complex web of data assets, many of which are unprotected and difficult to track. Security teams struggle with: Lack of visibility into where sensitive data resides; Data access governance challenges; Increased vulnerability to cyber threats and insider risks ... The time to act is now. Security, AI, risk and governance leaders must take a proactive approach to data security and governance, ensuring that their organizations are not just reacting to threats but staying ahead of them.

Daily Tech Digest - February 25, 2025


Quote for the day:

"Empowerment is the magic wand that turns a frog into a prince. Never estimate the power of the people, through true empowerment great leaders are born." -- Lama S. Bowen


Service as Software Changes Everything

Service as software, also referred to as SaaS 2.0, goes beyond layering AI atop existing applications. It centers on the concept of automating business processes through intelligent APIs and autonomous services. The framework aims to eliminate human input and involvement through AI agents that act and react to conditions based on events, behavioral changes, and feedback. The result is autonomous software. “Traditional SaaS provides cloud-based tools where staff still do the work. Service as software flips that script. Instead of having staff do the work, you're making calls to an API or using software that does the work for you,” says Mark Strefford, founder of TimelapseAI, a UK-based consulting firm. ... CIOs and IT leaders should start small and iterate, experts say. As an organization gains confidence and trust, it can expand the autonomy of a SaaS 2.0 component. “More AI initiatives have failed from starting too big than too small,” Strefford notes. Consequently, it’s critical to understand the entire workflow, build in oversight and protections, establish measurement and validation tools, and stay focused on outcomes. A few factors can make or break an initiative, Giron says. Data quality and the ability to integrate across systems is crucial. A framework for standardization is critical. This includes cleaning, standardizing, and preparing legacy data. 
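As a rough illustration of the "flipped script" Strefford describes, the Python sketch below shows a tiny autonomous service that reacts to events and calls a handler itself instead of waiting for a person to work a queue; the event types, handler, and polling function are all hypothetical.

# Illustrative sketch of "service as software": software that does the work,
# triggered by events, rather than a tool a person operates. Names are made up.
import time
from typing import Callable, Dict, List


def handle_refund_request(event: Dict) -> str:
    # In a real system this would call the billing API; here we just simulate it.
    return f"refund issued for order {event['order_id']}"


HANDLERS: Dict[str, Callable[[Dict], str]] = {
    "refund_requested": handle_refund_request,
}


def poll_events() -> List[Dict]:
    # Placeholder for an event source (queue, webhook inbox, change feed).
    return [{"type": "refund_requested", "order_id": "A-1001"}]


def run_autonomous_service(iterations: int = 1) -> None:
    for _ in range(iterations):
        for event in poll_events():
            handler = HANDLERS.get(event["type"])
            if handler:
                print(handler(event))  # act without a human in the loop
        time.sleep(1)


if __name__ == "__main__":
    run_autonomous_service()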


The Missing Sustainability Perspective in Cloud Architecture

The Well-Architected Framework provides a structured approach to making architectural decisions. While it originally focused on operational, security, and financial trade-offs, the Sustainability Pillar introduces specific guidance for designing cloud solutions with minimal environmental impact. One key architectural trade-off is between performance efficiency and sustainability. While performance efficiency emphasizes speed and low latency, these benefits often come at the cost of over-provisioning resources. A more sustainable approach involves optimizing compute resources to ensure they are only consumed when necessary. Serverless computing solutions, such as AWS Lambda or Azure Functions, help minimize idle capacity by executing workloads only when triggered. Similarly, auto-scaling for containerized applications, such as Kubernetes Horizontal Pod Autoscaler (HPA) or AWS Fargate, ensures that resources are dynamically adjusted based on demand, preventing unnecessary energy consumption. Another critical balance is between cost optimization and sustainability. Traditional cost optimization strategies focus on reducing expenses, but without considering sustainability, businesses might make short-term cost-saving decisions that lead to long-term environmental inefficiencies. For example, many organizations store large volumes of data without assessing its relevance, leading to excessive storage-related energy use.
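One concrete example of the storage point above, sketched with boto3 under assumed names: attach an S3 lifecycle rule that moves stale objects to an archive tier and eventually expires them, so data that is never assessed for relevance does not sit on hot storage indefinitely. The bucket, prefix, and retention windows are placeholders.

# Hedged sketch: lifecycle rule that archives and then expires stale objects.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-stale-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw-events/"},
                "Transitions": [
                    # After 90 days, move to an archive tier (lower cost and energy).
                    {"Days": 90, "StorageClass": "GLACIER"}
                ],
                # After two years, delete objects never promoted to a curated dataset.
                "Expiration": {"Days": 730},
            }
        ]
    },
)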


Quantum Computing Has Arrived; We Need To Prepare For Its Impact

Many now believe that the power and speed of quantum computing will enable us to address some of the biggest and most difficult problems our civilization faces. Problem-solving will be made possible by quantum computing’s unprecedented processing speed and predictive analytics. That is a remarkable near-term potential. McKinsey & Company forecasts that quantum technologies could create up to $2 trillion in economic value by 2035. Quantum measuring and sensing is one field where quantum technologies have already made their appearance. Navigational devices and magnetic resonance imaging already employ them. Quantum sensors detect and quantify minute changes in time, gravity, temperature, pressure, rotation, acceleration, frequency, and magnetic and electric fields using the smallest amounts of matter and energy. Quantum will have a direct impact on many scientific fields, including biology, chemistry, physics, and mathematics. Industry applications will have an impact on a wide range of fields, including healthcare, banking, communications, commerce, cybersecurity, energy, and space exploration. In other words, any sector in which data is a component. More specifically, quantum technology has incredible potential to transform a wide range of fields, including materials science, lasers, biotechnology, communications, genetic sequencing, and real-time data analytics.


Industrial System Cyberattacks Surge as OT Stays Vulnerable

"There's a higher propensity for manufacturing organizations to have cloud connectivity just as a way of doing business, because of the benefits of the public cloud for manufacturing, like for predictive analytics, just-in-time inventory management, and things along those lines," he says, pointing to Transportation Security Administration rules governing pipelines and logistics networks as one reason for the difference. "There is purposeful regulation to separate the IT-OT boundary — you tend to see multiple kinds of ring-fence layers of controls. ... There's a more conservative approach to outside-the-plant connectivity within logistics and transportation and natural resources," Geyer says. ... When it comes to cyber-defense, companies with operational technology should focus on protecting their most important functions, and that can vary by organization. One food-and-beverage company, for example, focuses on the most important production zones in the company, testing for weak and default passwords, checking for the existence of clear-text communications, and scanning for hard-coded credentials, says Claroty's Geyer. "The most important zone in each of their plants is milk receiving — if milk receiving fails, everything else is critical path and nothing can work throughout the plant," he says. 


How to create an effective incident response plan

“When you talk about BIA and RTOs [recovery time objective], you shouldn’t be just checking boxes,” Ennamli says. “You’re creating a map that shows you, and your decision-makers, exactly where to focus efforts when things go wrong. Basically, the nervous system of your business.” ... “And when the rubber hits the road during an actual incident, precious time is wasted on less important assets while critical business functions remain offline and not bringing in revenue,” he says. ... It’s vital to have robust communication protocols, says Jason Wingate, CEO at Emerald Ocean, a provider of brand development services. “You’re going to want a clear chain of command and communication,” he says. “Without established protocols, you’re about as effective as trying to coordinate a fire response with smoke signals.” The severity of the incident should inform the communications strategy, says David Taylor, a managing director at global consulting firm Protiviti. While cybersecurity team members actively responding to an incident will be in close contact and collaborating during an event, he says, others are likely not as plugged in or consistently informed. “Based on the assigned severity, stemming from the initial triage or a change to the level of severity based on new information during the response, governance should dictate the type, audience, and cadence of communications,” Taylor says.
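A minimal sketch of that severity-driven approach, in Python: governance expressed as a lookup from assigned severity to audience, cadence, and channel. The tiers and values below are illustrative assumptions, not a recommended policy.

# Severity-to-communications mapping, re-evaluated whenever triage changes the
# severity during the response. All tiers and cadences are placeholders.
SEVERITY_COMMS_PLAN = {
    "critical": {"audience": ["executives", "legal", "responders", "PR"],
                 "cadence_minutes": 30,
                 "channel": "bridge call + status page"},
    "high":     {"audience": ["executives", "responders"],
                 "cadence_minutes": 60,
                 "channel": "incident channel"},
    "medium":   {"audience": ["responders", "service owners"],
                 "cadence_minutes": 240,
                 "channel": "ticket updates"},
    "low":      {"audience": ["responders"],
                 "cadence_minutes": 1440,
                 "channel": "daily summary"},
}


def communication_plan(severity: str) -> dict:
    return SEVERITY_COMMS_PLAN.get(severity, SEVERITY_COMMS_PLAN["low"])


if __name__ == "__main__":
    print(communication_plan("high"))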


AI-Powered DevOps: Transforming CI/CD Pipelines for Intelligent Automation

Traditional software testing faces challenges because organizations must assess code changes to ensure they do not degrade system performance or introduce bugs. Applications with extensive functionality are time-consuming to test because they demand many test cases, each of which must be managed, specified, and tracked against the results that matter in every scope. Smoke and regression testing, meanwhile, repeat the same test cases over and over, adding further time-consuming work. These difficulties make it hard for the traditional approach to achieve adequate coverage of what actually needs testing, and hard to ensure that effort is channeled toward the most valuable test selections. ... Using ML-driven test automation increases efficiency in managing repetitive tasks. Automation accelerates testing and frees teams to focus on higher-value work. ML also integrates quality assessment into the delivery process, helping ensure every release is assessed for high-risk areas, potential failures, and critical functions, which leads to better post-deployment results. Additionally, ML automation leads to cost savings: testing cycles run with minimal operational cost because they are automated, and defects are caught before they are deployed within the software.


Prompt Engineering: Challenges, Strengths, and Its Place in Software Development's Future

Prompt engineering and programming share the goal of instructing machines but differ fundamentally in their methodologies. While programming relies on formalized syntax, deterministic execution, and precision to ensure consistency and reliability, prompt engineering leverages the adaptability of natural language. This flexibility, however, introduces certain challenges, such as ambiguity, variability, and unpredictability. ... Mastering prompt engineering requires a level of knowledge and expertise comparable to programming. While it leverages natural language, its effective use demands a deep understanding of AI model behavior, the application of specific techniques, and a commitment to continuous learning. Similar to programming, prompt engineering involves continual learning to stay proficient with a variety of evolving techniques. A recent literature review by OpenAI and Microsoft analyzed over 1,500 prompt engineering-related papers, categorizing the various strategies into a formal taxonomy. This literature review is indicative of the continuous evolution of prompt engineering, requiring practitioners to stay informed and refine their approaches to remain effective.
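As a small, hedged example of the kind of techniques that taxonomy catalogues, the Python snippet below combines two common ones: few-shot examples and an explicit instruction to reason step by step. It assumes a generic chat-message format and is not tied to any particular model or SDK.

# Builds a few-shot, step-by-step classification prompt as a list of chat
# messages. The task, labels, and examples are illustrative.
FEW_SHOT_EXAMPLES = [
    {"input": "The app crashes when I upload a 2 GB file.", "label": "bug"},
    {"input": "Could you add dark mode to the settings page?", "label": "feature request"},
]


def build_messages(user_text: str) -> list[dict]:
    system = ("You are a support-ticket classifier. "
              "Think step by step, then answer with exactly one label: "
              "'bug' or 'feature request'.")
    messages = [{"role": "system", "content": system}]
    for example in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example["input"]})
        messages.append({"role": "assistant", "content": example["label"]})
    messages.append({"role": "user", "content": user_text})
    return messages


if __name__ == "__main__":
    for m in build_messages("Exports fail with a timeout after 30 seconds."):
        print(m["role"], ":", m["content"])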


Avoiding vendor lock-in when using managed cloud security services

An ideal managed cloud security provider should take an agnostic approach. Their solution should be compatible with whatever CNAPP or CSPM solution you use. This gives you maximum flexibility to find the right provider without locking yourself into a specific solution. Advanced services may even enable you to take open source tooling and get to a good place before expanding to a full cloud security solution. You could also partner with a managed cloud security service that leverages open standards and protocols. This approach will allow you to integrate new or additional vendors while reducing your dependency on proprietary technology. Training and building in-house knowledge also helps. A confident service won’t keep their knowledge to themselves and helps enable and provide training to your team along the way. ... And there’s IAM—a more complex but equally concerning component of cloud security. In recent news, a few breaches started with low-level credentials being obtained before the attackers escalated their own privileges to gain access to sensitive information. This is often due to overly permissive access given to humans and machines. It’s also one of the least understood components of the cloud. Still, if your managed cloud security service truly understands the cloud, it won’t ignore IAM, the foundation of cloud security.


Observability Can Get Expensive. Here’s How to Trim Costs

“At its core, the ‘store it all’ approach is meant to ensure that when something goes wrong, teams have access to everything so they can pinpoint the exact location of the failure in their infrastructure,” she said. “However, this has become increasingly infeasible as infrastructure becomes more complex and ephemeral; there is now just too much to collect without massive expense.” ... “Something that would otherwise take developers weeks to do — take an inventory of all telemetry collected and eliminate the lower value parts — can be available at the click of a button,” she said. A proper observability platform can continually analyze telemetry data in order to have the most up-to-date picture of what is useful rather than a one-time, manual audit “that’s essentially stale as soon as it gets done,” Villa said. “It’s less about organizations wanting to pay less for observability tools, but they’re thinking more long-term about their investment and choosing platforms that will save them down the line,” she said. “The more they save on data collection, the more they can reinvest into other areas of observability, including new signals like profiling that they might not have explored yet.” Moving from a “store it all” to a “store intelligently” strategy is not only the future of cost optimization, Villa said, but can also help make the haystack of data smaller.
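A toy Python version of moving from "store it all" to "store intelligently": keep every failure and slow request at full fidelity, sample a small fraction of routine successes, and drop the rest before it ever reaches paid storage. The thresholds and event shape are assumptions for illustration only.

# Value-based telemetry filtering sketch. Thresholds are placeholders.
import random
from typing import Dict, Iterable, Iterator

SUCCESS_SAMPLE_RATE = 0.05   # keep 5% of healthy, fast requests
SLOW_THRESHOLD_MS = 500      # always keep anything slower than this


def store_intelligently(events: Iterable[Dict]) -> Iterator[Dict]:
    for event in events:
        if event["status"] >= 500 or event["duration_ms"] > SLOW_THRESHOLD_MS:
            yield event                      # full fidelity for failures and outliers
        elif random.random() < SUCCESS_SAMPLE_RATE:
            yield event                      # small sample of normal traffic
        # everything else is dropped before it hits storage


if __name__ == "__main__":
    sample = [
        {"status": 200, "duration_ms": 42},
        {"status": 500, "duration_ms": 1200},
        {"status": 200, "duration_ms": 900},
    ]
    for kept in store_intelligently(sample):
        print(kept)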


The Aftermath of a Data Breach

For organizations, the aftermath of a data breach can be highly devastating. In an interconnected world, a single data vulnerability can cascade into decades of irreversible loss – intellectual, monetary, and reputational. The consequences paralyze even the most established businesses, uprooting them from their foundation. ... The severity of a data breach often depends on how long it goes undetected; however, identifying the breach is where the story actually begins. From containing the destruction and informing authorities to answering customers and paying for their damages, the road to business recovery is long and grueling. ... Organizations must create, implement, and integrate a data management policy in their organizational setup that provides a robust framework for managing data throughout its entire lifecycle, from creation to disposal. This policy should also include a data destruction policy that specifies data destruction methods, data wiping tools, type of erasure verification, and records of data destruction. It should further cover media control and sanitization, incident reporting, and the roles and responsibilities of the CIO, CISO, and privacy officer. Using a professional software-based data destruction tool erases data permanently from IT assets including laptops, PCs, and Mac devices.

Daily Tech Digest - February 24, 2025


Quote for the day:

"A tough hide with a tender heart is a goal that all leaders must have." -- Wayde Goodall


A smarter approach to training AI models

AI models are beginning to hit the limits of compute. Model size is far outpacing Moore’s Law and the advances in AI training chips. Training runs for large models can cost tens of millions of dollars due to the cost of chips. This issue has been acknowledged by prominent AI engineers including Ilya Sutskever. The costs have become so high that Anthropic has estimated that it could cost as much to update Claude as it did to develop it in the first place. Companies like Amazon are spending billions to erect new AI data centers in an effort to keep up with the demands of building new frontier models. ... With a better foundational understanding of how AI works, we can approach AI model training and deployment in new ways that require a fraction of the energy and compute, bringing the rigor of other sciences to AI with a principles-first approach. ... By eschewing the inefficiencies and less theoretically justified parts of deep learning, we create a path forward to the next generation of truly intelligent AI, that we’ve seen surpasses the wall deep learning has hit. We have to understand how learning works and build models with interpretability and efficiency in mind from the ground up, especially as high-risk applications of AI in sectors like finance and healthcare demand more than the nondeterministic behavior we’ve become accustomed to. 


Strategic? Functional? Tactical? Which type of CISO are you?

Various factors influence what type of CISO a company may need, says Patton, a former CISO now working as a cybersecurity executive advisor at Cisco. A large, older company with a big, complicated tech stack will need someone with different skills, experience, and leadership qualities than a cloud-native startup that’s rapidly growing and changing. A heavily regulated industry such as financial services, healthcare, or utilities needs someone steeped in how to navigate all the compliance requirements. ... The path professionals take to the CISO seat also influences what type or types of CISOs they tend to be, adds Matt Stamper, CEO, CISO, and executive advisor with Executive Advisors Group as well as a board member with the ISACA San Diego chapter. Different career paths forge different types of executives, he says. Those who advanced through technical roles typically retain a technology bent, while those who came up through governance and risk functions usually gravitate toward compliance-focused roles. ... “CISOs should and tend to lean into where they’re gifted,” says Jenai Marinkovic, vCISO and CTO with Tiro Security and a member of the Emerging Trends Working Group with the IT governance association ISACA.


Becoming Ransomware Ready: Why Continuous Validation Is Your Best Defense

With the nature of IOCs being subtle and intentionally difficult to detect, how do you know that your XDR is effectively nipping them all in the bud? You hope that it is, but security leaders are using continuous ransomware validation to get a lot more certainty than that. By safely emulating the full ransomware kill chain - from initial access and privilege escalation to encryption attempts - tools like Pentera validate whether security controls, including EDR and XDR solutions, trigger the necessary alerts and responses. If key IOCs like shadow copy deletion and process injection go undetected, then that's a crucial flag to prompt security teams to fine-tune detection rules and response workflows. ... Here's the reality: testing your defenses once a year leaves you exposed the other 364 days. Ransomware is constantly evolving, and so are the Indicators of Compromise (IOCs) used in attacks. Can you say with certainty that your EDR is detecting every IOC it should? The last thing you need to stress about is how threats are constantly changing into something your security tools will fail to recognize and aren't prepared to handle. That's why continuous ransomware validation is essential. With an automated process, you can continuously test your defenses to ensure they stand up against the latest threats.
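For a sense of what "validating that an IOC gets flagged" can look like, here is a deliberately simple Python check for one of the IOCs named above, shadow copy deletion; real EDR/XDR detection logic is far richer, and the patterns below are only the textbook command lines.

# Toy detection rule for shadow copy deletion command lines.
import re

SHADOW_COPY_DELETION_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.IGNORECASE),
    re.compile(r"Get-WmiObject\s+Win32_ShadowCopy", re.IGNORECASE),
]


def is_shadow_copy_deletion(command_line: str) -> bool:
    return any(p.search(command_line) for p in SHADOW_COPY_DELETION_PATTERNS)


if __name__ == "__main__":
    # A safe emulation, as a validation tool would run it, should trip the rule.
    print(is_shadow_copy_deletion("vssadmin.exe Delete Shadows /All /Quiet"))  # True
    print(is_shadow_copy_deletion("notepad.exe report.txt"))                   # False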


US intensifies scrutiny of the EU’s Digital Markets Act

The DMA introduced unprecedented restrictions and requirements for companies designated as “gatekeepers” in the digital market. These companies must comply with a strict set of rules designed to prevent unfair business practices and ensure market accessibility for smaller competitors. The Act mandates various requirements including interoperability for core platform services, restrictions on personal data combination across services, and prohibition of self-preferencing practices in rankings and search results. “Big tech’s designated platforms can no longer unfairly promote their own products or services above yours (EU-based companies) in search results or ads,” says one of the clauses of the DMA pertaining to ensuring a level playing field. ... Meanwhile, the European Commission — where Ribera serves as the second-highest ranking official under President Ursula von der Leyen — maintains that these regulations are not targeted at US companies, according to the report. The Commission argued that the DMA is designed to ensure fair competition and consumer choice in digital markets, regardless of companies’ national origin. However, the predominance of US firms among those affected has intensified transatlantic tensions over digital policy.


The Technology Blueprint for CIOs: Expectations and Concerns

"Security sits at the front and center of business innovations, especially in sectors like banking and finance, where protecting user data and privacy is paramount. Every sector has its own unique challenges and opportunities, making a sector-driven approach essential," said Sachin Tayal, managing director at Protiviti member firm for India. AI-powered fraud detection systems are now integral, using behavior biometrics and facial recognition to detect and mitigate threats such as UPI frauds. Decentralized finance is also gaining traction, with blockchain-based solutions modernizing core banking functions and facilitating secure, transparent digital transactions, the report found. ... The industrial manufacturing sector is embracing Industry 4.0, characterized by the convergence of AI, IoT and cloud technologies. The market is seeing a shift toward digital twins and real-time analytics to optimize production processes. The integration of autonomous mobile robots and collaborative robots, cobots, is enhancing efficiency and safety on the production floor, the report said. ... CIOs have their work cut out - innovate or risk getting redundant. "Technology is driving businesses today, and the transformative role of the CIO amid the rapid rise of AI and digital innovations has never been more critical. The CIO now wears many hats - CTO, CISO and even CEO - as roles evolve to meet the demands of a digital-first world," Gupta said.


Man vs. machine: Striking the perfect balance in threat intelligence

One of the key things you must be aware of is your unconscious biases. Because we all have them. But being able to understand that and implement practices that challenge your assumptions, analysis and hypotheses is key to providing the best intelligence product. I think it’s a fascinating problem, particularly as it’s not necessarily something a SOC analyst or a vulnerability manager may consider, because it’s not really a part of their job to think that way, right? Fortunately, when it comes to working with the AI data, we can apply things like system prompts, we can be explicit in what we want to see as the output, and we can ask it to demonstrate where and why findings are identified, and their possible impact. Alongside that, I think the question also demonstrates the importance on why we as humans can’t forego things like training or maintaining skills. ... It’s also important that security continues to be a business enabler. There are times we interact with websites in countries that may have questionable points of view or human rights records. Does the AI block those countries because the training data indicates it shouldn’t support or provide access? Now some organisations will do domain blocking to an extreme level and require processes and approvals to access a website, it’s archaic and ridiculous in my opinion. Can AI help in that space? Almost certainly. 


AI and the Future of Software Testing: Will Human Testers Become Obsolete?

With generative AI tools, it has become possible to produce software testing code automatically. QA engineers can simply describe what they want to test and specify a testing framework, tool, or language, then let generative AI do the tedious work of writing out the code. Test engineers often need to validate and tweak the AI-generated code, just as software developers most often rework some parts of application code produced by AI. But by writing unit tests and other software tests automatically, AI can dramatically reduce the time that QA engineers spend creating tests. ... AI tools can also assist in evaluating test results. This is important because, in the past, a test failure typically meant that a QA engineer had to sit down with developers, figure out why the test failed, and formulate a plan for fixing whichever flaw triggered the issue. AI can automate this process in many cases by evaluating test results and corresponding application code and then making recommendations about how to fix an issue. Although it's not realistic to expect AI to be capable of entirely automating all software test assessments, it can do much of the tedious work. ... At the same time, though, AI will almost certainly reduce the need for human software testers, which could lead to some job losses in this area. 
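A small example of that workflow in Python: given a simple function, a generative model might draft the first three pytest cases below, and a reviewing QA engineer typically adds or tweaks cases like the last one after reading the code. The function and tests are illustrative.

# Function under test plus tests in the "AI drafts, human reviews" style.
import pytest


def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# --- tests as a generative model might draft them ---
def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0


def test_zero_discount_returns_original_price():
    assert apply_discount(99.99, 0) == 99.99


def test_invalid_percent_raises():
    with pytest.raises(ValueError):
        apply_discount(50.0, 120)


# --- the kind of case a human reviewer adds after reading the code ---
def test_rounding_to_two_decimals():
    assert apply_discount(10.0, 33) == 6.70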


From Convenience to Vulnerability: The Dual Role of APIs in Modern Services

Recently, a non-exploited vulnerability was discovered within a popular Travel Service that could have enabled attackers to take over victim accounts with a single click. Such an attack is called an "API Supply Chain Attack," in which an attacker chooses to attack a weaker link in the service's API ecosystem. While the takeover could occur within the integrated service, it likely would have provided attackers full access to the user's personally identifiable information (PII) from the main account, including all mileage and rewards data. Beyond mere data exposure, attackers could perform actions on behalf of the user, such as creating orders or modifying account details. This critical risk highlights the vulnerabilities in third-party integrations and the importance of stringent security protocols to protect users from unauthorized account access and manipulation. Vigilance, governance, and explicit control of APIs are essential for safeguarding against security gaps and vulnerabilities within API ecosystems. Organizations must prioritize investing in comprehensive API tools and software that support the entire API lifecycle. This includes identifying and cataloging all APIs in use to ensure visibility and control, continuously assessing and improving the security posture of APIs to mitigate risks, and implementing robust security measures to detect and respond to potential threats targeting APIs. 


Scientists Tested AI For Cognitive Decline. The Results Were a Shock.

Today, the famous large language model (LLM) is just one of several leading programs that appear convincingly human in their responses to basic queries. That uncanny resemblance may extend further than intended, with researchers from Israel now finding LLMs suffer a form of cognitive impairment similar to decline in humans, one that is more severe among earlier models. The team applied a battery of cognitive assessments to publicly available 'chatbots': versions 4 and 4o of ChatGPT, two versions of Alphabet's Gemini, and version 3.5 of Anthropic's Claude. Were the LLMs truly intelligent, the results would be concerning. In their published paper, neurologists Roy Dayan and Benjamin Uliel from Hadassah Medical Center and Gal Koplewitz, a data scientist at Tel Aviv University, describe a level of "cognitive decline that seems comparable to neurodegenerative processes in the human brain." For all of their personality, LLMs have more in common with the predictive text on your phone than the principles that generate knowledge using the squishy grey matter inside our heads. What this statistical approach to text and image generation gains in speed and personability, it loses in gullibility, building code according to algorithms that struggle to sort meaningful snippets of text from fiction and nonsense.


6 reasons so many IT orgs fail to exceed expectations today

“CIOs at large organizations know what they’ve got to hit. They know what they have to do to exceed expectations. But it’s more common that CIOs at smaller and less mature organizations have unclear objectives,” says Mark Taylor, CEO of the Society for Information Management (SIM). ... Doing all that work around expectation setting may still not be enough, as CIOs frequently find that the expectations set for them and their teams can shift suddenly. “Those moving targets happen all the time, especially when it comes to innovation,” says Peter Kreutter, WHU Otto Beisheim School of Management’s CIO Leadership Excellence Program faculty director and a member of the board of trustees for CIO Stiftung. ... “Fundamental challenges, such as legacy technology infrastructure and rigid operating cost structures, were at the core of failure rates,” the report reads. “These frequently limited the effectiveness of margin improvement initiatives and their impact on the bottom line. Unfortunately, this may only get worse, with uncertainty as a constant and the push for gen AI and data across enterprises.” ... Confusion about accountability — that is, who is really accountable for what results — is another obstacle for CIOs and IT teams as they aim high, according to Swartz.

Daily Tech Digest - February 23, 2025


Quote for the day:

“Success does not consist in never making mistakes but in never making the same one a second time.” -- George Bernard Shaw



Google Adds Quantum-Resistant Digital Signatures to Cloud KMS

After a process that kicked off nearly a decade ago, NIST officially published the first three PQC standards last August. The standards, based on advanced encryption algorithms, are now known as FIPS 203, FIPS 204, and FIPS 205, although additional specifications are still under review by NIST. Google's strategy calls for support for the current and future NIST standards. While Cloud KMS will eventually support all three NIST standards, Google's initial release implements the two digital signature algorithms: FIPS 204, which enables lattice-based digital signatures, and FIPS 205, which is for stateless hash-based digital signatures. Porter says support for FIPS 203, which is for asymmetric cryptography, will come later in the year. ... "Making the open source libraries and Cloud KMS to support those specific signatures with those keys will give the opportunity for our customers to validate those performance implications to their environments when they use those keys for the signing of longer linked environments," Porter explains. Google is not the only major player adding open source libraries that support the NIST standards. In September, Microsoft started releasing support for the NIST standards in SymCrypt, its open source core cryptographic library used in Azure, Microsoft 365, Windows 11, Windows 10, Windows Server, Azure Stack HCI, and Azure Linux.
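For orientation, here is a hedged sketch of creating and using a post-quantum signing key with the google-cloud-kms Python client. The project, location, and key names are placeholders, and the algorithm enum name shown (PQ_SIGN_ML_DSA_65, for the FIPS 204 lattice scheme) is an assumption; check the CryptoKeyVersionAlgorithm values shipped in the client release you have installed.

# Hedged sketch: PQC signing key in Cloud KMS. Names and the PQC enum are assumptions.
from google.cloud import kms_v1

client = kms_v1.KeyManagementServiceClient()

key_ring = client.key_ring_path("my-project", "us-central1", "pqc-ring")  # placeholders
key = client.create_crypto_key(
    request={
        "parent": key_ring,
        "crypto_key_id": "ml-dsa-signing-key",
        "crypto_key": {
            "purpose": kms_v1.CryptoKey.CryptoKeyPurpose.ASYMMETRIC_SIGN,
            "version_template": {
                # Assumed enum name for the ML-DSA-65 (FIPS 204) algorithm.
                "algorithm": kms_v1.CryptoKeyVersion.CryptoKeyVersionAlgorithm.PQ_SIGN_ML_DSA_65,
            },
        },
    }
)

version_name = f"{key.name}/cryptoKeyVersions/1"
response = client.asymmetric_sign(
    request={"name": version_name, "data": b"release-artifact-manifest"}
)
print(len(response.signature), "byte ML-DSA signature")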


The most critical job skill you need to thrive in the AI revolution

A few weeks ago, The World Economic Forum dropped its predictions for the future of jobs and the seismic shift in the workforce over the next five years (2030). ... Half of the employers plan to reorient business strategies in response to the rise of AI. In fact, 2 in 3 plan to hire for AI-specific skills (this is where the new jobs will come from). 40% of those same businesses also think their workforce will shrink due to AI automating tasks. On the surface, this might seem like doom and gloom, but remember, we are talking about 78 million new jobs by 2030. It is safe to assume some of that workforce will find employment in companies that don't exist yet. Another insight that stood out to me but deserves its own article is that an aging population will drive the demand for more healthcare jobs. This could be a huge opportunity. Let me know in the comments if you want me to discuss the possibilities. ... As for your big opportunity, I feel like everyone is so focused on the shiny objects, like what are the best prompts or the best tool? Those are fine, but not enough focus is placed on the soft skills. It's as if we're forgetting that even though we use AI to create, our creations are still intended for humans. If I had to say it another way, it is almost like some businesses are using AI and becoming sloppy. Not caring about the customer, and so on.


MDR, EDR Markets See Wave of M&A as Competition Intensifies

Organizations traditionally relied on managed security services for log monitoring and basic alerting. MDR took this a step further by offering real-time threat detection, investigation and response. At the same time, vendors came to realize that endpoint visibility alone through EDR was insufficient, leading to XDR, which integrates signals from multiple layers, including cloud, network and identity systems. "It's complicated to learn the skills to be able to operate these kinds of platforms really efficiently, and it's even more challenging to be able to do it 24/7/365," Levy said. "Most organizations simply aren't equipped to be able to run a global SOC with multiple shifts." While XDR expanded detection capabilities, Levy said it also introduced operational complexities, with most companies lacking the expertise and resources to manage a sophisticated security platform 24/7, leading to the rise of MDR as a fully managed security service. True MDR should go beyond the endpoint and include threat detection across cloud environments, networks and identity systems, Schneider said. "Once partners get engaged and really see the value in managed EDR, the conversation immediately goes to, 'Can you do the same thing for my firewalls? Can you do the same thing for my NDR solution? Can you do the same thing for my identity solution?'" 


We need to talk about the F word (‘friction’ in enterprise, that is)

By striking the right balance, companies can use friction to their advantage. Friction, after all, is another word for feedback — so products that become completely frictionless stop responding to users’ needs. The pursuit of frictionlessness can launch you skywards, but over time you’ll struggle to course-correct. Eventually, gravity will drag you back to earth. This isn’t hypothetical: Research shows that friction makes many systems — including businesses — smarter and more resilient. A bit of strategic inconvenience can improve market performance, with investors making smarter decisions when they’re forced to slow down and think about trades. ... For technologists, that means asking: What problems are you solving by eliminating friction — and what problems might you create, now or in future, by doing so? Every design choice brings tradeoffs, but balancing risks and rewards to design for the right level of friction enables both rapid growth and long-term sustainability. Such an approach could also make it easier to have grown-up conversations about the need to regulate AI and other emerging technologies. Regulations always add friction — but once we accept that some friction can be valuable, we can work collaboratively with policymakers to find the right level of friction to support innovation while protecting and respecting consumers.


Struggling to Become Truly Data-Driven? Focus on Access and Culture, Not Tech

Success in data strategy requires strong leadership commitment and cultural transformation. The playbook emphasizes the role of leaders in advancing data literacy and encouraging data-driven decision-making. This includes identifying and empowering "data champions" across the organization and creating communities of practice to share knowledge and best practices. Training and development play crucial roles in building data capabilities. The report recommends targeted training programs for employees central to data usage, utilizing both online and in-person resources. Investment in training yields significant returns through improved efficiency, better decision-making, and enhanced customer service. However, training should not be a one-size-fits-all approach; it should be tailored to different roles and skill levels within the organization. The report emphasizes that becoming a data-driven organization is an ongoing journey rather than a destination. Financial institutions must continuously evolve their data strategies to keep pace with changing technology and customer expectations. This includes exploring emerging technologies like artificial intelligence and machine learning, while ensuring they maintain a strong foundation in data quality and governance.


Introduction to Service Mesh

A service mesh acts as a layer encompassing services running within a distributed application that facilitates dependable and visible communication among microservices. It oversees how services interact with one another, handling tasks such as discovering services, distributing workloads evenly, recovering from failures, collecting metrics and monitoring performance. ... By separating network management duties from the application code, a service mesh makes it easier for developers and operations teams to handle tasks efficiently. Developers can concentrate on creating business logic without the need to deal with integrating service discovery, load balancing or security protocols into their applications. Operations teams can take advantage of the management of policies and configurations provided by the service mesh’s control plane. ... When selecting a service mesh, it’s important to consider scalability. Make sure that the service mesh is capable of accommodating the size of your microservices setup and can adapt as your application grows. Assess how the service mesh affects your system’s performance and the load added by sidecar proxies. A scalable service mesh should deliver performance and minimal delays when adding more services and incurring higher traffic levels.


Why enterprises fail at finops

One of the most significant challenges is the lack of integration between the finops and engineering teams responsible for building and deploying cloud applications. McKinsey’s report showed that many organizations struggle to capture savings beyond the immediate finops team’s mandate because these teams often lack the incentives or access to cloud cost data. Consequently, many well-meaning optimization efforts fall by the wayside as engineers juggle multiple priorities or lack the resources to focus on cost-related improvements. Another issue is the lack of systematic implementation of finops best practices. This is where finops as code (FaC) becomes essential by incorporating finops processes directly into application configurations to make them foolproof. FaC can dramatically reduce costs by integrating financial management principles directly into the infrastructure management life cycle. Organizations can enforce budget constraints by automatically identifying opportunities for cost reduction, supporting more efficient resource scheduling, and employing cloud-native services to decrease operational cloud resource expenses. Many organizations struggle with basic cloud hygiene practices. They’re not effectively identifying and eliminating obvious sources of waste, such as underutilized resources, oversized virtual machines, and redundant storage volumes. 
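One small, concrete piece of that cloud hygiene, sketched with boto3: flag EC2 instances whose average CPU has stayed very low over a lookback window as candidates for rightsizing or shutdown. The 5% threshold and 14-day window are illustrative assumptions, not a recommendation.

# Flags persistently idle instances using CloudWatch CPU averages.
from datetime import datetime, timedelta, timezone
import boto3

CPU_THRESHOLD = 5.0          # percent
LOOKBACK = timedelta(days=14)

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=end - LOOKBACK,
            EndTime=end,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if datapoints and max(dp["Average"] for dp in datapoints) < CPU_THRESHOLD:
            print(f"{instance_id}: candidate for rightsizing or shutdown")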


Building the next-gen creator economy with AI agents

Autonomous agents simplify content distribution and monetization by automating tasks such as pricing, licensing, and revenue sharing, freeing creators to focus on their craft. For instance, these agents can optimize pricing strategies based on market demand or manage revenue splits transparently. Unlike traditional AI tools, decentralized agents can operate trustlessly onchain, ensuring transparency, reducing costs, and eliminating third-party intermediaries. By leveraging programmable rules and onchain verification, autonomous agents also allow creators to explore new revenue streams—such as micro-licensing or fractional ownership of digital assets—giving them control over their intellectual property while tapping into innovative monetization models. Ethical concerns, such as licensing and copyright issues, can be addressed through programmable licensing rights embedded in content metadata. ... The use of trustless, onchain computation means that creators are not reliant on centralized APIs or platforms, which could compromise their data or artistic vision. Unlike many current AI agents that depend on centralized APIs like OpenAI, these decentralized agents operate sustainably and transparently, avoiding vulnerabilities tied to centralized control. 


The Future of Cybersecurity: AI-Driven Threat Detection and Prevention

Artificial intelligence has revolutionized the way organizations respond to threat detection. Contemporary AI systems are capable of examining huge volumes of network traffic, log data, and user activity in real-time, detecting subtle patterns that could represent a security compromise. AI-powered Security Information and Event Management (SIEM) solutions can examine billions of security events per day, correlating seemingly unrelated activity to reveal advanced attack campaigns. ... Machine learning algorithms are now shifting from reactive security to predictive threat prevention. By examining past patterns of attacks and present system activity, AI can detect potential security threats before they become real threats. This is especially effective in insider threat detection, where AI algorithms can detect slight variations in employee behavior that could be a sign of compromise or malicious activity. ... When an incident is detected, AI-based security orchestration platforms can respond automatically, cutting in half the lag time between detection and mitigation. They can isolate infected systems, withdraw misused credentials, and apply countermeasures in seconds – operations that it would take human teams hours or even days to do manually.
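As a minimal illustration of behavioral anomaly detection of this kind, the Python sketch below trains an unsupervised model on features of "normal" activity and flags outliers for analysts; the feature choice, sample data, and contamination rate are assumptions made for the example only.

# Unsupervised outlier detection over simple per-user activity features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, MB_downloaded, distinct_hosts_contacted]
normal_activity = np.array([
    [3, 120, 4], [4, 150, 5], [2, 90, 3], [5, 200, 6], [3, 110, 4],
    [4, 160, 5], [2, 100, 3], [3, 130, 4], [4, 170, 5], [3, 140, 4],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

new_events = np.array([
    [3, 125, 4],      # looks like business as usual
    [40, 9000, 60],   # burst of logins and bulk downloads: likely flagged
])

for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "normal"
    print(event.tolist(), "->", status)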


Generative AI is already being used in journalism – here’s how people feel about it

What if the AI identifies something or someone incorrectly, and these keywords lead to mis-identifications in the photo captions? What if the criteria humans think make “good” images are different to what a computer might think? These criteria may also change over time or in different contexts. Even something as simple as lightening or darkening an image can cause a furore when politics are involved. AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context. Generative AI is also frequently used for writing headlines or summarising articles. These sound like helpful applications for time-poor individuals, but some news outlets are using AI to rip off others’ content. AI-generated news alerts have also gotten the facts wrong. ... Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use. Most of our participants were comfortable with turning to AI to create icons for an infographic.

Daily Tech Digest - February 21, 2025


Quote for the day:

“The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things.” -- Ronald Reagan


Rethinking Network Operations For Cloud Repatriation

Repatriation introduces significant network challenges, further amplified by the adoption of disruptive technologies like SDN, SD-WAN, SASE and the rapid integration of AI/ML, especially at the edge. While beneficial, these technologies add complexity to network management, particularly in areas such as traffic routing, policy enforcement, and handling the unpredictable workloads generated by AI. ... Managing a hybrid environment spanning on-premises and public cloud resources introduces inherent complexity. Network teams must navigate diverse technologies, integrate disparate tools and maintain visibility across a distributed infrastructure. On-premises networks often lack the dynamic scalability and flexibility of cloud environments. Absorbing repatriated workloads further complicates existing infrastructure, making monitoring and troubleshooting more challenging. ... Repatriated workloads introduce potential security vulnerabilities if not seamlessly integrated into existing security frameworks. On-premises security stacks not designed for the increased traffic volume previously handled by SASE services can introduce latency and performance bottlenecks. Adjustments to SD-WAN routing and policy enforcement may be necessary to redirect traffic to on-premises security resources.


For the AI era, it’s time for BYOE: Bring Your Own Ecosystem

We can no longer limit user access to one or two devices — we must address the entire ecosystem. Instead of forcing users down a single, constrained path, security teams need to acknowledge that users will inevitably venture into unsafe territory, and focus on strengthening the security of the broader environment. In 2015, we as security practitioners could get by with placing “do not walk on the grass” signs and ushering users down manicured pathways. In 2025, we need to create more resilient grass. ... The risk extends beyond basic access. Forty percent of employees download customer data to personal devices, while 33% alter sensitive data, and 31% approve large financial transactions. And, most alarming, 63% use personal accounts on their work laptops — most commonly Google — to share work files and create documents, effectively bypassing email filtering and data loss prevention (DLP) systems. ... Browser-based access exposes users to risks from malicious plugins, extensions and post-authentication compromise, while the increasing reliance on SaaS applications creates opportunities for supply chain attacks. Personal accounts serve as particularly vulnerable entry points, allowing threat actors to leverage compromised credentials or stolen authentication tokens to infiltrate corporate networks.


DARPA continues work on technology to combat deepfakes

The rapid evolution of generative AI presents a formidable challenge in the arms race between deepfake creators and detection technologies. As AI-driven content generation becomes more sophisticated, traditional detection mechanisms risk quickly becoming obsolete. Deepfake detection relies on training machine learning models on large datasets of genuine and manipulated media, but the scarcity of diverse and high-quality datasets can impede progress. Limited access to comprehensive datasets has made it difficult to develop robust detection systems that generalize across various media formats and manipulation techniques. To address this challenge, DARPA places a strong emphasis on interdisciplinary collaboration. By partnering with institutions such as SRI International and PAR Technology, DARPA leverages cutting-edge expertise to enhance the capabilities of its deepfake detection ecosystem. These partnerships facilitate the exchange of knowledge and technical resources that accelerate the refinement of forensic tools. DARPA’s open research model also allows diverse perspectives to converge, fostering rapid innovation and adaptation in response to emerging threats. Deepfake detection also faces significant computational challenges. Training deep neural networks to recognize manipulated media requires extensive processing power and large-scale data storage.
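As a rough illustration of the supervised approach described here (training models on labelled genuine and manipulated media), the sketch below fine-tunes a pretrained image classifier with PyTorch and torchvision; the dataset layout, backbone choice, and hyperparameters are assumptions for illustration, and a real detector would still face the generalization and compute problems the article raises.

```python
# Minimal sketch of the supervised approach described above: fine-tune an image
# classifier on labelled "real" vs "manipulated" media. Dataset paths, the
# backbone choice, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a folder layout like data/train/real/*.jpg and data/train/fake/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real / manipulated

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                           # short run for illustration only
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

A model trained this way tends to learn the artifacts of the specific manipulation techniques in its training set, which is exactly why the dataset diversity problem the article describes matters so much.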


AI Agents: Future of Automation or Overhyped Buzzword?

AI agents are not just an evolution of AI; they are a fundamental shift in IT operations and decision-making. These agents are being increasingly integrated into Predictive AIOps, where they autonomously manage, optimize, and troubleshoot systems without human intervention. Unlike traditional automation, which follows pre-defined scripts, AI agents dynamically predict, adapt, and respond to system conditions in real time. ... AI agents are transforming IT management and operational resilience. Instead of just replacing workflows, they now optimize and predict system health, automatically mitigating risks and reducing downtime. Whether it's self-repairing IT infrastructure, real-time cybersecurity monitoring, or orchestration of distributed cloud environments, AI agents are pushing technology toward self-governing, intelligent automation. ... The future of AI agents is both thrilling and terrifying. Companies are investing in large action models — next-gen AI that doesn’t just generate text but actually does things. We’re talking about AI that can manage entire business processes or run a company’s operations without human intervention. ... AI agents aren’t just another tech buzzword — they represent a fundamental shift in how AI interacts with the world. Sure, we’re still in the early days, and there’s a lot of fluff in the market, but make no mistake: AI agents will change the way we work, live, and do business.
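A toy version of the "observe, predict, act" loop such agents run might look like the following; the metric names, the risk model, and the remediation action are invented stand-ins for real monitoring and orchestration APIs.

```python
# Toy "observe -> assess -> act" loop illustrating the Predictive AIOps pattern
# described above. Metric names, thresholds, and the remediation action are
# invented; a real agent would call monitoring and orchestration APIs instead
# of these stubs.
import random
import time

def read_metrics() -> dict:
    # Stand-in for a monitoring query (e.g. a Prometheus-style API call).
    return {
        "cpu_utilisation": random.uniform(0.2, 0.99),
        "error_rate": random.uniform(0.0, 0.05),
        "replica_count": 3,
    }

def predict_risk(metrics: dict) -> float:
    # Placeholder for a trained model that forecasts the chance of an incident.
    return 0.7 * metrics["cpu_utilisation"] + 6.0 * metrics["error_rate"]

def scale_out(replicas: int) -> None:
    print(f"Scaling out to {replicas} replicas")   # would call an orchestrator API

for _ in range(5):                                  # bounded loop for the example
    m = read_metrics()
    if predict_risk(m) > 0.8:
        # Act before the predicted incident materialises rather than after it.
        scale_out(m["replica_count"] + 1)
    time.sleep(1)
```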


Optimizing Cloud Security: Managing Sprawl, Technical Debt, and Right-Sizing Challenges

Technical debt is the implied cost of future IT infrastructure rework caused by choosing expedient IT solutions like shortcuts, software patches or deferred IT upgrades over long-term, sustainable designs. It’s easily accrued when under pressure to innovate quickly but leads to waste and security gaps and vulnerabilities that compromise an organization’s integrity, making systems more susceptible to cyber threats. Technical debt can also be costly to eradicate, with companies spending an average of 20-40% of their IT budgets on addressing it. ... Cloud sprawl refers to the uncontrolled proliferation of cloud services, instances, and resources within an organization. It often results from rapid growth, lack of visibility, and decentralized decision-making. At Surveil, we have over 2.5 billion data points to lean on to identify trends and we know that organizations with unmanaged cloud environments can see up to 30% higher cloud costs due to redundant and idle resources. Unchecked cloud sprawl can lead to increased security vulnerabilities due to unmanaged and unmonitored resources. ... Right-sizing involves aligning IT resources precisely with the demands of applications or workloads to optimize performance and cost. Our data shows that organizations that effectively right-size their IT estate can reduce cloud costs by up to 40%, unlocking business value to invest in other business priorities.
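As a small illustration of what a right-sizing pass looks like in practice, the sketch below flags instances whose observed utilisation is far below the capacity being paid for; the instance data, threshold, and savings estimate are illustrative assumptions, not Surveil's methodology.

```python
# Minimal sketch of a right-sizing analysis: flag instances whose observed
# utilisation is far below the capacity they are paying for. All data and
# thresholds here are illustrative assumptions.
instances = [
    {"name": "web-01",   "vcpus": 8,  "avg_cpu_pct": 6,  "monthly_cost": 280.0},
    {"name": "batch-02", "vcpus": 16, "avg_cpu_pct": 71, "monthly_cost": 560.0},
    {"name": "dev-03",   "vcpus": 4,  "avg_cpu_pct": 2,  "monthly_cost": 140.0},
]

IDLE_THRESHOLD_PCT = 10   # sustained CPU below this suggests an idle or oversized host

savings = 0.0
for inst in instances:
    if inst["avg_cpu_pct"] < IDLE_THRESHOLD_PCT:
        # Assume halving the instance roughly halves its cost; real pricing varies.
        proposed_cost = inst["monthly_cost"] / 2
        savings += inst["monthly_cost"] - proposed_cost
        print(f"{inst['name']}: downsize from {inst['vcpus']} to {inst['vcpus'] // 2} vCPUs")

print(f"Estimated monthly savings: ${savings:.2f}")
```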


How businesses can avoid a major software outage

Software bugs and bad code releases are common culprits behind tech outages. These issues can arise from errors in the code, insufficient testing, or unforeseen interactions among software components. Moreover, the complexity of modern software systems exacerbates the risk of outages. As applications become more interconnected, the potential for failures increases. A seemingly minor bug in one component can have far-reaching consequences, potentially bringing down entire systems or services. ... The impact of backup failures can be particularly devastating as they often come to light during already critical situations. For instance, a healthcare provider might lose access to patient records during a primary system failure, only to find that their backup data is incomplete or corrupted. Such scenarios underscore the importance of not just having backup systems, but ensuring they are fully functional, up-to-date, and capable of meeting the organization's recovery needs. ... Human error remains one of the leading causes of tech outages. This can include mistakes made during routine maintenance, misconfigurations, or accidental deletions. In high-pressure environments, even experienced professionals can make errors, especially when dealing with complex systems or tight deadlines.
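The sketch below shows one way to automate the kind of backup health check the excerpt argues for, verifying that the newest backup exists, is recent, and matches a recorded checksum; the paths, age limit, and manifest format are assumptions made for illustration.

```python
# Minimal sketch of an automated backup health check: verify that the latest
# backup exists, is recent, and matches its recorded checksum. Paths, the age
# limit, and the manifest format are illustrative assumptions.
import hashlib
import json
import time
from pathlib import Path

MAX_AGE_SECONDS = 24 * 3600                 # alert if the newest backup is older than a day
BACKUP_DIR = Path("/var/backups/app")       # hypothetical backup location

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_latest_backup() -> bool:
    backups = sorted(BACKUP_DIR.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
    if not backups:
        print("FAIL: no backups found")
        return False
    latest = backups[-1]
    if time.time() - latest.stat().st_mtime > MAX_AGE_SECONDS:
        print(f"FAIL: {latest.name} is older than the allowed window")
        return False
    manifest = json.loads((BACKUP_DIR / "manifest.json").read_text())
    if sha256(latest) != manifest.get(latest.name):
        print(f"FAIL: checksum mismatch for {latest.name}")
        return False
    print(f"OK: {latest.name} is recent and intact")
    return True
```

Checks like this only confirm that a backup file is intact; periodic restore drills are still needed to prove the data actually meets recovery needs.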


Serverless was never a cure-all

Serverless architectures were originally promoted as a way for developers to rapidly deploy applications without the hassle of server management. The allure was compelling: no more server patching, automatic scalability, and the ability to focus solely on business logic while lowering costs. This promise resonated with many organizations eager to accelerate their digital transformation efforts. Yet many organizations adopted serverless solutions without fully understanding the implications or trade-offs. It became evident that while server management may have been alleviated, developers faced numerous complexities. ... The pay-as-you-go model appears attractive for intermittent workloads, but it can quickly spiral out of control if an application operates under unpredictable traffic patterns or contains many small components. The requirement for scalability, while beneficial, also necessitates careful budget management—this is a challenge if teams are unprepared to closely monitor usage. ... Locating the root cause of issues across multiple asynchronous components becomes more challenging than in traditional, monolithic architectures. Developers often spent the time they saved from server management struggling to troubleshoot these complex interactions, undermining the operational efficiencies serverless was meant to provide.
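A quick back-of-the-envelope model makes the budgeting point concrete: per-request pricing looks negligible at low volume but scales directly with traffic and function count. The unit prices below are illustrative placeholders, not any vendor's current rates.

```python
# Back-of-the-envelope cost model for the pay-as-you-go pattern discussed above.
# The unit prices are illustrative placeholders, not current vendor rates.
PRICE_PER_MILLION_REQUESTS = 0.20      # illustrative request price (USD)
PRICE_PER_GB_SECOND = 0.0000167        # illustrative compute price (USD)

def monthly_cost(requests: int, avg_duration_s: float, memory_gb: float) -> float:
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Steady, low traffic looks cheap...
print(f"{monthly_cost(2_000_000, 0.2, 0.5):.2f} USD")     # a few dollars a month
# ...but a traffic spike or many chatty small functions changes the picture quickly.
print(f"{monthly_cost(300_000_000, 0.4, 1.0):.2f} USD")    # thousands of dollars a month
```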


AI Is Improving Medical Monitoring and Follow-Up

Artificial intelligence technologies have shown promise in managing some of the worst inefficiencies in patient follow-up and monitoring. From automated scheduling and chatbots that answer simple questions to review of imaging and test results, a range of AI technologies promise to streamline unwieldy processes for both patients and providers. ... Adherence to medication regimens is essential for many health conditions, both in the wake of acute health events and over time for chronic conditions. AI programs can both monitor whether patients are taking their medication as prescribed and urge them to do so with programmed notifications. Feedback gathered by these programs can indicate the reasons for non-adherence and help practitioners to devise means of addressing those problems. ... Using AI to monitor the vital signs of patients suffering from chronic conditions may help to detect anomalies -- and indicate adjustments that will stabilize them. Regularly tracking key health indicators such as blood pressure, blood sugar, and respiration can establish a patient's baseline and flag fluctuations that require follow-up treatment, drawing on personal and demographic data such as age and sex and comparing readings with available data on similar patients.
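A minimal sketch of that baseline-and-deviation approach is shown below; the readings and threshold are invented for illustration and are not clinical guidance.

```python
# Minimal sketch of baseline-and-deviation monitoring: establish a patient's own
# baseline, then flag readings that drift outside it. Readings and the threshold
# are made up for illustration, not clinical guidance.
import statistics

baseline_systolic = [118, 122, 119, 121, 120, 117, 123]   # prior readings (mmHg)
mean = statistics.mean(baseline_systolic)
stdev = statistics.stdev(baseline_systolic)

def flag_reading(value: float, z_threshold: float = 3.0) -> bool:
    """Return True if the reading deviates markedly from this patient's baseline."""
    z = abs(value - mean) / stdev
    return z > z_threshold

for reading in [121, 119, 148]:
    if flag_reading(reading):
        print(f"{reading} mmHg: outside baseline, flag for clinician follow-up")
    else:
        print(f"{reading} mmHg: within expected range")
```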


IT infrastructure complexity hindering cyber resilience

Given the rapid evolution of cyber threats and continuous changes in corporate IT environments, failing to update and test resilience plans can leave businesses exposed when attacks or major outages occur. The importance of integrating cyber resilience into a broader organizational resilience strategy cannot be overstated. With cybersecurity now fundamental to business operations, it must be considered alongside financial, operational, and reputational risk planning to ensure continuity in the face of disruptions. ... Leaders also expect to face adversity in the near future, with 60% anticipating a significant cybersecurity failure within the next six months, which reflects the sheer volume of cyber attacks as well as a growing recognition that cloud services are not immune to disruptions and outages. ... First and most importantly, it removes IT and cybersecurity complexity–the key impediment to enhancing cyber resilience. Eliminating traditional security dependencies such as firewalls and VPNs not only reduces the organization’s attack surface, but also streamlines operations, cuts infrastructure costs, and improves IT agility. ... The second big win is the inability of attackers to move laterally should a compromise at an endpoint occur. Users are verified and given the lowest privileges necessary each time they access a corporate resource, meaning ransomware and other data-stealing threats are far less of a concern.
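A toy policy check illustrates why per-request verification with least privilege contains lateral movement; the identities, resources, and policy table below are invented, and a real deployment would rely on an identity provider and a policy engine rather than an in-memory dictionary.

```python
# Toy per-request policy check illustrating the "verify every access, grant the
# lowest privilege needed" model described above. Identities, resources, and the
# policy table are invented for illustration.
POLICY = {
    ("alice", "crm-app"): "read",
    ("alice", "hr-db"): None,        # no access at all
    ("bob",   "hr-db"): "read",
}

def authorize(user: str, resource: str, action: str, token_valid: bool) -> bool:
    if not token_valid:               # identity is re-verified on every request
        return False
    allowed = POLICY.get((user, resource))
    return allowed is not None and action == allowed

# A compromised endpoint holding alice's session still cannot reach hr-db,
# which is what limits lateral movement after an initial foothold.
print(authorize("alice", "crm-app", "read", token_valid=True))   # True
print(authorize("alice", "hr-db",  "read", token_valid=True))    # False
```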


Is subscription-based networking the future?

There are several factors making NaaS an attractive proposition. One of the most significant is the growing demand for flexibility. Traditional networking models often require upfront investments and long-term commitments, which are restrictive for organisations that need to scale their infrastructure quickly or adapt to changing needs. In contrast, a subscription model allows businesses to pay only for what they use, making it easier to adjust capacity and features as needed. Cost efficiency is another big driver. With networking delivered as a service, organisations can move away from large capital expenditures toward predictable, operational costs. This helps IT teams manage budgets more effectively while reducing the need to maintain and upgrade hardware. It also enables companies to access new technologies without costly refresh cycles. Security and compliance are becoming increasingly complex, especially for companies handling sensitive data. NaaS solutions often come with built-in security updates, compliance tools, and proactive monitoring, helping businesses stay ahead of emerging threats. Instead of managing security in-house, IT teams can rely on service providers to ensure their networks remain protected and up to date. Additionally, the rise of cloud computing and hybrid work has accelerated the need for more agile and scalable networking solutions.
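To put the capex-to-opex shift in rough numbers, the sketch below compares an upfront hardware refresh against a usage-based subscription whose cost tracks the number of managed sites; every figure is an illustrative assumption, not vendor pricing, and the point is the cost structure rather than which option comes out cheaper.

```python
# Back-of-the-envelope comparison of an upfront hardware refresh versus a
# usage-based networking subscription. All figures are illustrative assumptions.
YEARS = 5

# Traditional model: buy hardware up front, then pay annual support.
capex_hardware = 400_000
annual_support = 40_000
traditional_total = capex_hardware + annual_support * YEARS

# NaaS model: a monthly fee that scales with the number of managed sites.
sites_per_year = [20, 22, 25, 25, 30]            # capacity follows actual need
monthly_fee_per_site = 350
naas_total = sum(sites * monthly_fee_per_site * 12 for sites in sites_per_year)

print(f"Traditional 5-year cost: ${traditional_total:,}")   # one large outlay plus support
print(f"NaaS 5-year cost:        ${naas_total:,}")           # spread out, scales with usage
```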