Daily Tech Digest - March 27, 2025


Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson


Can AI Fix Digital Banking Service Woes?

For banks in India, an AI-driven system for handling customer complaints can be a game changer by enhancing operational efficiency, boosting customer trust and ensuring strict regulatory compliance. The success of this system hinges on addressing data security, legacy-system integration, and multilingual challenges while fostering a culture of continuous improvement. "By following this detailed road map, banks can build a resilient AI system that not only improves customer service but also supports broader financial risk management and compliance objectives," said Abhay Johorey, managing director, Protiviti Member Firm for India. An AI chatbot could drive operational efficiency, perform enhanced data analytics and risk management, increase customer trust and deliver compliance benefits if designed well. A badly executed one runs the risk of giving customers inaccurate financial information or infringing on their privacy and data. ... "We are entering a transformative era where AI can significantly improve the speed, accuracy and fairness of complaint resolution. AI can categorize complaints based on urgency, complexity or subject matter, ensuring faster escalation to the appropriate teams. AI optimizes complaint routing and assists in decision-making, reducing processing times," the RBI said.
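
The categorize-and-route step the RBI describes can be pictured as a small triage layer in front of human teams. Below is a minimal sketch, assuming hypothetical category labels, urgency levels, and team queues (none of these names come from the RBI's guidance); in practice the category and urgency fields would come from an ML classifier, while a deterministic routing table keeps the escalation logic auditable for compliance.

```python
from dataclasses import dataclass

# Hypothetical (category, urgency) -> team-queue routing table.
ROUTING = {
    ("fraud", "high"): "fraud-response",
    ("fraud", "normal"): "fraud-review",
    ("payments", "high"): "payments-escalation",
    ("payments", "normal"): "payments-support",
}

@dataclass
class Complaint:
    text: str
    category: str  # e.g., output of a text classifier
    urgency: str   # e.g., "high" or "normal"

def route(complaint: Complaint) -> str:
    """Map a classified complaint to a team queue; unknown cases go to humans."""
    return ROUTING.get((complaint.category, complaint.urgency), "manual-triage")

print(route(Complaint("Card charged twice", "payments", "high")))
# -> payments-escalation
```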


Ethernet roadmap: AI drives high-speed, efficient Ethernet networks

The Ethernet Alliance’s 10th anniversary roadmap references the consortium’s 2024 Technology Exploration Forum (TEF), which highlighted the critical need for collaboration across the Ethernet ecosystem: “Industry experts emphasized the importance of uniting different sectors to tackle the engineering challenges posed by the rapid advancement of AI. This collective effort is ensuring that Ethernet will continue to evolve to provide the network functionality required for next-generation AI networks.” Some of those engineering challenges include congestion management, latency, power consumption, signaling, and the ever-increasing speed of the network. ... “One of the outcomes of [the TEF] event was the realization that the development of 400Gb/sec signaling would be an industry-wide problem. It wasn’t solely an application, network, component, or interconnect problem,” stated D’Ambrosia, who is a distinguished engineer with the Datacom Standards Research team at Futurewei Technologies, a U.S. subsidiary of Huawei, and the chair of the IEEE P802.3dj 200Gb/sec, 400Gb/sec, 800Gb/sec and 1.6Tb/sec Task Force. “Overcoming the challenges to support 400 Gb/s signaling will likely require all the tools available for each of the various layers and components.”


Dealing With Data Overload: How to Take Control of Your Security Analytics

Organizations face several challenges when it comes to security analytics. They need to find a better way to optimize high volumes of data, ensure they are getting maximum bang for the buck, and strike a balance between cost and visibility. This allows more of the "right" or optimized data to be brought in for advanced analytics, filtering out the noise or useless data that isn't needed for analytics/machine learning. ... If you're a SOC manager, and your team is triaging alerts all day, perhaps you've got one full-time staffer who does nothing but look at Microsoft O365 alerts, and another person who just looks at Proofpoint alerts. The goal is to think about the bigger operational picture. When searching for a solution, it's easy to focus only on your immediate challenges and overlook future ones. As a result, you invest in a fix that solves today's problems but leaves you unprepared for the next ones that arise. You've shot yourself in the foot. ... Organizations tend to buy different tools to solve different problems, when what they need is a data analytics platform that can apply analytics, machine learning, and data science to their data sets. That will provide the intelligence to make business decisions, whether that's to reduce risk or something else. Look for a tool, regardless of what it's called, that can solve the most problems for the least amount of money.
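
The "optimized data" idea above, filtering noise before it is ingested for analytics, can start as a simple pipeline stage that drops event types no detection or model consumes. A minimal sketch with invented event fields and drop rules:

```python
# Hypothetical noise filter: drop event types that no downstream
# detection or ML model consumes, before paying to index them.
NOISY_EVENT_TYPES = {"heartbeat", "dns-internal", "debug"}

def optimize(events):
    """Yield only the events worth sending to the analytics tier."""
    for event in events:
        if event.get("type") in NOISY_EVENT_TYPES:
            continue  # noise: skip ingestion entirely
        yield event

sample = [
    {"type": "heartbeat", "host": "web-1"},
    {"type": "auth-failure", "host": "web-1", "user": "admin"},
]
print(list(optimize(sample)))  # only the auth-failure survives
```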


Cyber insurance isn’t always what it seems

Still, insurance is no silver bullet. Policies often come with limitations, high premiums, and strict requirements around security posture. “Insurers scrutinize security postures, enforce stringent requirements, and may deny claims if proper controls are not in place,” he said. Many policies also include exclusions and coverage gaps that add complexity to the decision. When used appropriately, cyber insurance plays a supporting role, not a leading one. “They should complement the defensive capabilities that focus on avoiding and minimizing loss,” Rosenquist said, serving as a safety net rather than a frontline defense. “Cyber insurance can provide important financial relief, but it should never be the first or only line of defense.” ... “Many businesses still believe they’re too small to be targeted, that cyber insurance is only for large companies, or that it’s too expensive. However, the reality is that over 60% of small businesses have been victims of cyberattacks, privacy breaches affect organizations of all sizes, and the cyber insurance market offers competitive, tailored options. Working with a skilled broker brings real value. They offer broad expertise and help build tailored solutions. With the proper guidance, organizations can create programs that address their specific risks and needs,” explained Tijana Dusper, a licensed broker for insurance and reinsurance at InterOmnia.


RFID Hacking: Exploring Vulnerabilities, Testing Methods, and Protection Strategies

When an RFID reader scans an object, it emits a radio frequency (RF) signal that interacts with nearby RFID tags, potentially up to 1.14 million tags in a single area. The antenna on each tag absorbs this energy, powering the embedded microchip. The chip then encodes its stored data into a binary format (0s and 1s) and transmits it back to the RFID reader using reverse signal modulation. The collected data is then stored and processed, either for human interpretation or automated system operations. ... As with many wireless technologies, RFID technology adheres to certain standards and communication protocols. ... As RFID technology becomes increasingly embedded in everyday operations, from access control and inventory tracking to cashless payments, the risks associated with RFID hacking cannot be ignored. The same features that make RFID efficient and convenient (wireless communication and automatic identification) also make it vulnerable to cyber threats. RFID hacking techniques, such as cloning, skimming, eavesdropping, and relay attacks, allow cybercriminals to intercept sensitive information, manipulate access controls, or even exploit entire systems. Without proper security measures, businesses and individuals risk unauthorized data breaches, financial fraud, and identity theft.


How Organizational Rewiring Can Capture Value from Your AI Strategy

McKinsey’s research indicates that while AI use is accelerating dramatically (78% of organizations now use AI in at least one function, up from 55% a year ago), most organizations are still in early implementation stages. Only 1% of company executives describe their generative AI rollouts as "mature." For retail banking leaders, this reality check suggests both opportunity and urgency. The potential for competitive advantage remains substantial for early transformation leaders, but the window for gaining this advantage is narrowing as adoption accelerates. As McKinsey senior partner Alex Singla observes: "The organizations that are building a genuine and lasting competitive advantage from their AI efforts are the ones that are thinking in terms of wholesale transformative change that stands to alter their business models, cost structures, and revenue streams — rather than proceeding incrementally." For retail banking executives, this means embracing AI as a strategic imperative that requires rethinking fundamental business models, not merely implementing new technology tools. The most successful banking institutions will be those that undertake comprehensive organizational rewiring, driven by active C-suite leadership, clear strategic roadmaps, and a willingness to fundamentally redesign how they operate.


Securing AI at the Edge: Why Trusted Model Updates Are the Next Big Challenge

Edge AI is no longer experimental. It is running live in environments where failure is not an option. Environmental monitoring systems track air quality in real time across urban areas. Predictive maintenance tools keep industrial equipment running smoothly. Smart traffic networks optimize vehicle flow in congested cities. Autonomous vehicles assist drivers with advanced safety features. Factory automation systems use AI to detect product defects on high-speed production lines. In all these scenarios, AI models must continuously evolve to meet changing demands. But every update carries risks, whether through technical failure, security breaches, or operational disruption. ... These challenges cannot be solved with isolated patches or last-minute fixes. Securing AI updates at the edge requires a fundamental rethink of the entire lifecycle. The update process from cloud-to-edge must be secure from start to finish. Models need protection from the moment they leave development until they are safely deployed. Authenticity must be guaranteed so that no malicious code can slip in. Access control must ensure that only authorized systems handle updates. And because no system is immune to failure, updates need built-in recovery mechanisms that minimize disruption.


Beyond the Black Box: Rethinking Data Centers for Sustainable Growth

To thrive under the growing pressure, the data center sector must rethink its relationship with the communities it enters. Instead of treating public engagement as an afterthought, what if the planning process started with people? Now, reimagine the development timeline, with public-facing engagement prioritized from the very start. Imagine a data center operator purchasing a parcel of land for a new data center campus near a mid-sized city. Instead of presenting a fully formed plan months later, the operator begins the conversation by asking the community: “How can we improve things while becoming your neighbor?” While commercial viability is essential, early engagement and collaboration can deliver positive outcomes without substantially increasing costs. ... For data centers in urban environments where space is limited, the listen-first ethos still holds value. In these cases, the focus might shift to educational initiatives, such as training programs or partnerships with local schools and universities. Early public engagement ensures that urban projects align with the needs and priorities of residents while addressing their concerns. This inclusive approach benefits all stakeholders: for local authorities, it supports broader sustainability and net zero goals, and for communities, it delivers tangible benefits that clarify the data center’s impact and value to the area.


Generative AI In Business: Managing Risks in The Race for Innovation

The issue is that businesses lack the appropriate processes, guidelines, or formal governance structures needed to regulate AI use, which, at the end of the day, makes them prone to accidental security breaches. In many instances, the culprits are employees who introduce GenAI systems on corporate devices with no understanding of the risks involved, or of whether their use is even permitted under the company’s existing data security and privacy guidelines. ... Never underestimate the power of employee education, which is essential in times when new innovations are far ahead of education. Put in place an educational program that delves into the risks of AI systems. Include training sessions that give people the tools they need to recognize red flags, such as suspicious AI-generated outputs or unusual system behaviors. In a world of AI-enabled threats, empowering employees to act as the first line of defense is essential. ... A preemptive approach that leverages tools such as Automated Moving Target Defense (AMTD) can help organizations stay ahead of attackers. By anticipating potential threats and implementing measures to address them before they occur, companies can reduce their vulnerability to AI-enabled exploits. This proactive stance is particularly important given the speed and adaptability of modern cyber threats.


How to Get a Delayed IT Project Back on Track

The best way to launch a project revival is to look backward. "Conduct a thorough project reassessment to identify the root causes of delays, then re-prioritize deliverables using a phased, agile-based approach," suggests Karan Kumar Ratra, an engineering leader at Walmart specializing in e-commerce technology, leadership, and innovation. "Start with high-impact, manageable milestones to restore momentum and stakeholder confidence," he advises in an online interview. "Clear communication, accountability, and aligning leadership with revised goals are critical." ... Recall past team members, yet supplement them with new members with similar skills and project experience, recommends Pundalika Shenoy, automation and modernization project manager at business consulting firm Smartbridge, via email. "Outside perspectives and expertise will help the team." While new team members should be welcomed, fresh ideas and insights may be what the legacy project needs to succeed, but try to retain at least some past contributors to ensure project continuity, Rahming advises. "The new team members may well bring a sense of urgency, enthusiasm and skills ... that weren't present in the previous team at the time of the delay."


Daily Tech Digest - March 26, 2025


Quote for the day:

“The only true wisdom is knowing that you know nothing.” -- Socrates



The secret to using generative AI effectively

It’s a shift from the way we’re accustomed to thinking about these sorts of interactions, but it isn’t without precedent. When Google itself first launched, people often wanted to type questions at it — to spell out long, winding sentences. That wasn’t how to use the search engine most effectively, though. Google search queries needed to be stripped to the minimum number of words. GenAI is exactly the opposite. You need to give the AI as much detail as possible. If you start a new chat and type a single-sentence question, you’re not going to get a very deep or interesting response. To put it simply: You shouldn’t be prompting genAI like it’s still 2023. You aren’t performing a web search. You aren’t asking a question. Instead, you need to be thinking out loud. You need to iterate with a bit of back and forth. You need to provide a lot of detail, see what the system tells you — then pick out something that is interesting to you, drill down on that, and keep going. You are co-discovering things, in a sense. GenAI is best thought of as a brainstorming partner. Did it miss something? Tell it — maybe you’re missing something and it can surface it for you. The more you do this, the better the responses will get. ... Just be prepared for the fact that ChatGPT (or other tools) won’t give you a single streamlined answer. It will riff off what you said and give you something to think about. 


Rising attack exposure, threat sophistication spur interest in detection engineering

Detection engineering is about creating and implementing systems to identify potential security threats within an organization’s specific technology environment without drowning in false alarms. It’s about writing smart rules that can tell when something potentially suspicious or malicious is happening in an organization’s networks or systems and making sure those alerts are useful. The process typically involves threat modeling, understanding attacker TTPs (tactics, techniques, and procedures), writing, testing and validating detection rules, and adapting detections based on new threats and attack techniques. ... Proponents argue that detection engineering differs from traditional threat detection practices in approach, methodology, and integration with the development lifecycle. Threat detection processes are typically more reactive and rely on pre-built rules and signatures from vendors that offer limited customization for the organizations using them. In contrast, detection engineering applies software development principles to create and maintain custom detection logic for an organization’s specific environment and threat landscape. Rather than relying on static, generic rules and known IOCs, the goal with detection engineering is to develop tailored mechanisms for detecting threats as they would actually manifest in an organization’s specific environment.
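
Detection-as-code is easiest to see as a rule shipped with its own test: the rule encodes how a threat would manifest in a specific environment, and the test validates it like any other software. A minimal sketch in Python, with an invented brute-force threshold and event schema:

```python
from collections import Counter

def detect_brute_force(events, threshold=5):
    """Flag source IPs with too many failed logins (illustrative threshold)."""
    failures = Counter(
        e["src_ip"] for e in events if e["action"] == "login_failed"
    )
    return [ip for ip, count in failures.items() if count >= threshold]

def test_detect_brute_force():
    # Validation is part of the engineering loop: rules ship with tests.
    events = [{"src_ip": "10.0.0.9", "action": "login_failed"}] * 5
    assert detect_brute_force(events) == ["10.0.0.9"]

test_detect_brute_force()
```

Writing, testing, and versioning rules this way is what lets detections evolve with new attack techniques instead of remaining frozen vendor signatures.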


Fast and Furiant: Secrets of Effective Software Testing

Testing should always start as early as possible! It can begin as soon as a new functionality idea is proposed or discussed, during the mockup phase, or when requirements are first drafted. Early testing significantly helps me speed up the process. Even if development hasn’t started yet, you can still study the product areas that might be involved and familiarize yourself with new technologies or tools that could be helpful during testing. A good tester will never sit idle waiting for the perfect moment – they will always find something to work on before development begins! ... Effective testing begins with a well thought-out plan. Unfortunately, some testers postpone this stage until the functional testing phase. It’s important to define the priority areas for testing based on business requirements and areas where errors are most likely. The plan should include the types and levels of testing, as well as resource allocation. The plan can be formal or informal and doesn’t necessarily need to be submitted for reporting. ... Automation is the key to speeding up the testing process. It can begin even before or simultaneously with manual testing. If automation is well-implemented in the project with a clear purpose, process, and sufficient automated test coverage — it can significantly accelerate testing, aid in bug detection, provide a better understanding of product quality, and reduce the risk of human error.
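
As a small illustration of the automation point, here is a pytest-style pair of checks; the discount function under test is invented for the example. The tests encode expectations once, so every subsequent code change re-verifies them automatically with no manual effort:

```python
# Hypothetical function under test, invented for this example.
def apply_discount(total: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_invalid_discount_rejected():
    try:
        apply_discount(200.0, 150)
    except ValueError:
        pass  # expected: invalid input is rejected
    else:
        raise AssertionError("expected ValueError")

test_typical_discount()
test_invalid_discount_rejected()
```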


The Core Pillars of Cyber Resiliency

The first pillar of a strong cybersecurity strategy is Offensive Security, which focuses on a proactive approach to tackling vulnerabilities. Organisations must implement advanced monitoring systems that can provide real-time insights into network traffic, user behaviour, and system vulnerabilities. By establishing a comprehensive overview through visibility assessments, organisations can identify anomalies and potential threats before they escalate into full-blown attacks. Cyber hygiene refers to the practices and habits that users and organisations adopt to maintain the security of their digital environments. Passwords are typically the first line of defence against unauthorised access to systems, data and accounts. Attackers often obtain credentials through password reuse or by users inadvertently downloading infected software on corporate devices. ... Data is often regarded as the most valuable asset for any organisation. Effective data protection measures help organisations maintain the integrity and confidentiality of their information, even in the face of cyber threats. This includes implementing encryption for sensitive data, employing access controls to restrict unauthorised access, and deploying data loss prevention (DLP) solutions. Regular backups—both on-site and in the cloud—are critical for ensuring that data can be restored quickly in case of a breach or ransomware attack.


Cyber Risks Drive CISOs to Surf AI Hype Wave

Resilience, once viewed as an abstract concept, has gained practical significance under frameworks like DORA, which links people, processes and technology to tangible business outcomes. "Cybersecurity must align with the organization's goals, emphasizing its indispensable role in ensuring overall business success. While CISOs recognize cybersecurity's importance, many businesses still see it as a single line item in enterprise risk, overlooking its widespread implications," Gopal said. She said cybersecurity leaders must demonstrate to the business how cybersecurity affects areas such as financial risk, brand reputation and operational continuity. This requires CISOs to shift their focus from traditional protective measures to strategies that prioritize rapid response and recovery. This shift, evident in evolving frameworks, underscores the importance of adaptability in cybersecurity strategies. ... Gartner analysts said CISOs play a crucial role in balancing innovation's rewards and risks by guiding intelligent risk-taking. They must foster a culture of intelligent risk-taking by enabling people to make intelligent decisions. "Transformation and resilience themes dominate cybersecurity trends, with a focus on empowering people to make intelligent risk decisions and enabling businesses to address challenges effectively."


How Infrastructure-As-Code Is Revolutionizing Cloud Disaster Recovery

Infrastructure-as-Code allows organizations to manage and provision their cloud infrastructure through programmable code, significantly reducing manual processes and associated risks. Yemini pointed out that IaC's standardization across the industry simplifies recovery efforts because teams already possess the necessary expertise. With IaC, cloud infrastructure recovery becomes quicker, more reliable, and integrated directly into existing codebases, streamlining restoration and minimizing downtime. ... The shift toward automation in disaster recovery empowers organizations to move from reactive recovery to proactive resilience. ControlMonkey launched its Automated Disaster Recovery solution to restore the entire cloud infrastructure as opposed to just the data. Automation substantially reduces recovery times—by as much as 90% in some scenarios—thereby minimizing business downtime and operational disruptions. ... Shifting from data-focused recovery strategies to comprehensive infrastructure automation enhances overall cloud resilience. Twizer highlighted that adopting a holistic approach ensures the entire cloud environment—network configurations, permissions, and compute resources—is recoverable swiftly and accurately. Yet, Yemini identifies visibility and configuration drift as key challenges. 
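
The property that makes IaC recovery fast is that desired state lives in version-controlled code, so restoration becomes a diff-and-reapply rather than a manual rebuild. A toy sketch of that reconciliation loop follows; the resource names are placeholders, and the provisioning call merely stands in for what a real tool such as Terraform would do against a cloud provider:

```python
# Desired state, as it would be declared in version-controlled code.
DESIRED = {"vpc-main", "subnet-a", "db-orders", "queue-events"}

def provision(resource: str) -> None:
    # Placeholder: a real tool would call the cloud provider API here.
    print(f"recreating {resource}")

def recover(actual: set[str]) -> None:
    """Recreate everything the declared state has but the environment lacks."""
    for resource in sorted(DESIRED - actual):
        provision(resource)

# After an outage, only these two resources survived:
recover({"db-orders", "queue-events"})
# -> recreating subnet-a
# -> recreating vpc-main
```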


A CISO’s guide to securing AI models

Unlike traditional IT applications, which rely on predefined rules and static algorithms, ML models are dynamic—they develop their own internal patterns and decision-making processes by analyzing training data. Their behavior can change as they learn from new data. This adaptive nature introduces unique security challenges. Securing these models requires a new approach that not only addresses traditional IT security concerns, like data integrity and access control, but also focuses on protecting the models’ training, inference, and decision-making processes from tampering. To prevent these risks, a robust approach to model deployment and continuous monitoring known as Machine Learning Security Operations (MLSecOps) is required. ... To safeguard ML models from emerging threats, CISOs should implement a comprehensive and proactive approach that integrates security from their release to ongoing operation. ... Implementing security measures at each stage of the ML lifecycle—from development to deployment—requires a comprehensive strategy. MLSecOps makes it possible to integrate security directly into AI/ML pipelines for continuous monitoring, proactive threat detection, and resilient deployment practices. 
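
One concrete MLSecOps control implied by protecting models "from the moment they leave development" is an integrity check before a model is ever loaded for inference. A minimal, self-contained sketch using a pinned SHA-256 digest; the artifact and its contents are stand-ins, and a real pipeline would record the digest in a signed manifest at release time:

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: write a stand-in artifact and pin its digest at "release" time.
with open("model.bin", "wb") as f:  # placeholder artifact
    f.write(b"weights...")
expected = sha256_file("model.bin")

# At deployment, refuse to load anything that differs from the release.
if sha256_file("model.bin") != expected:
    raise RuntimeError("model artifact failed integrity check; refusing to load")
print("model integrity verified")
```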


From Human to Machines: Redefining Identity Security in the Age of Automation

In the past, identity security was concentrated primarily on human users – employees, contingent workers, and collaborators – who could log into the company’s systems. Protection was enforced through password policies, multi-factor authentication, and periodic access reviews. With the faster pace of automation, this approach is increasingly insufficient. Non-human identities are proliferating across cloud workloads, APIs, automation scripts, and IoT devices, creating a security gap so large that these non-human entities are now regarded as the riskiest identity type. Traditional controls built around human characteristics offer little protection for these automated identities. ... In the next 12 months, identity populations are projected to triple, making it more difficult for Indian organisations to depend on manual identity processes. Automation platforms can analyse behavioral patterns and implement privileged access control and mitigation in real time, all of which are essential for modern infrastructure management. An integrated approach that recognises the various forms of identity is more effective than the old, fragmented approach to identity security.


Sustainable Development: Balancing Innovation With Longevity

For platforms, the Twelve-Factor principles provide a blueprint for building scalable, maintainable and portable applications. By adhering to these principles, platforms can ensure that applications deployed on them are well-structured, easy to manage and can be scaled up or down as needed. The principles promote a clear separation of concerns, making it easier to update and maintain the platform and the applications running on it. This translates to increased agility, reduced risk and improved overall sustainability of the platform and the software ecosystem it supports. Adapting Twelve-Factor for modern architectures requires careful consideration of containerization, orchestration and serverless technologies. ... Sustainable software development is not just a technical discipline; it’s a mindset. It requires a commitment to building systems that are not only functional but also maintainable, scalable and adaptable. By embracing these principles and practices, developers and organizations can create software that delivers value over the long term, balancing the need for innovation with the imperative of longevity. Focus on building a culture that values quality and maintainability, and invest in the tools and processes that support sustainable software development. 
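
Factor III (config) is the simplest of the twelve to show in code: deploy-specific configuration lives in the environment rather than in the build, so the same artifact runs unchanged across dev, staging, and production. A minimal sketch with invented variable names and defaults:

```python
import os

# Twelve-Factor style: read deploy-specific config from the environment,
# so one container image works in every environment without rebuilds.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
MAX_WORKERS = int(os.environ.get("MAX_WORKERS", "4"))

print(f"connecting to {DATABASE_URL} with {MAX_WORKERS} workers")
```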


Four Criteria for Creating and Maintaining ‘FLOW’ in Architectures

Vertical alignment is required to transport information within the different layers of the architecture – it needs to move through all areas of the organization and be stored for future reference. The movement of information is usually achieved through API integration or file sharing. The design of seamless data-sharing activities can be complicated where data structure and stature are not formally managed ... The current trends of using SaaS solutions and moving to the cloud have made the technology landscape’s maintenance and risk management extremely difficult. There is no complete control over the performance of the end-to-end landscape. Any of the parties can change their solutions at any point, and those changes can have various impacts – which can be tested if known but which often slip in under the radar. ... Businesses must survive in very competitive environments and, therefore, need to frequently update their business models and operating models (people and process structures). Ideally, updates would be planned according to a well-defined strategy – serving as the focus for transformation. However, in today’s agile world, these change requirements originate mainly from short-term goals with poorly defined requirements, enabled via hot-fix solutions – the long-term impact of such behaviour should be known to all architects.


Daily Tech Digest - March 25, 2025


Quote for the day:

“Only put off until tomorrow what you are willing to die having left undone.” -- Pablo Picasso


Why FinOps Belongs in Your CI/CD Workflow

By codifying FinOps governance policies, teams can put guardrails in place while still granting developers autonomy to create resources. Guardrails don’t stifle innovation — they’re simply there to prevent costly mistakes. Every engineer makes mistakes, but guardrails ensure that those mistakes don’t lead to $10K-per-day cloud bills due to an overlooked database instance in a Terraform template taken off GitHub. Additionally, policy enforcement must be dynamic and flexible, allowing organizations to adjust tagging, cost constraints and security requirements as they evolve. AI-driven governance can scale policy enforcement by identifying repeatable patterns and automating compliance checks across environments. ... Shifting left in FinOps isn’t just about cost visibility — it’s about ensuring cost efficiency is enforced as code and monitored continuously on your production systems. Legacy cost analysis tools provide visibility into cloud spending but rarely offer actionable cleanup recommendations. A modern approach, by contrast, delivers actionable insights for cloud waste reduction, with predefined cost-saving policies that highlight underutilized or orphaned resources and automated cleanup workflows that help reclaim unused infrastructure.
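
A guardrail of this kind often amounts to a policy check in the pipeline: scan the planned resources and fail the build when required cost-allocation tags are missing. A minimal sketch over a pretend plan structure; real setups typically evaluate Terraform plan JSON or use a policy engine such as OPA:

```python
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def check_tags(resources: list[dict]) -> list[str]:
    """Return a violation message for any resource missing required tags."""
    violations = []
    for resource in resources:
        missing = REQUIRED_TAGS - set(resource.get("tags", {}))
        if missing:
            violations.append(f"{resource['name']}: missing {sorted(missing)}")
    return violations

# Pretend output of a plan step in CI:
plan = [
    {"name": "orders-db", "tags": {"owner": "data-team"}},
    {"name": "web-asg", "tags": {"owner": "web", "cost-center": "42",
                                 "environment": "prod"}},
]
problems = check_tags(plan)
if problems:
    # Failing the build here is the guardrail: the mistake never ships.
    raise SystemExit("policy failed:\n" + "\n".join(problems))
```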


How AI is changing cybersecurity for better or worse

“Agentic AI, capable of independently planning and acting to achieve specific goals, will be exploited by threat actors,” Lohrmann says. “These AI agents can automate cyberattacks, reconnaissance and exploitation, increasing attack speed and precision.” Malicious AI agents might adapt in real-time, bypassing traditional defenses and enhancing the complexity of attacks, Lohrmann says. AI-driven scams and social engineering will surge, Lohrmann says. “AI will enhance scams like ‘pig butchering’ — long-term financial fraud — and voice phishing, making social engineering attacks harder to detect,” he says. ... AI can also benefit organizations’ cybersecurity programs. “In general, AI-enabled platforms can provide a more robust, technology-backed line of defense against threat actors,” Cullen says. “Because AI can process huge amounts of data, it can provide faster and less obvious alerts to these threats.” Cybersecurity teams need to “fight fire with fire” by detecting and stopping threats with AI tool sets, Lohrmann says. For example, with new AI-enabled tools employee actions such as inappropriate clicking on links, sending emails to the wrong people, and other policy violations can be detected and stopped before a breach occurs.


Learning AI governance lessons from SaaS and Web2

Autonomous systems are advancing quickly, with agents emerging that can communicate with each other, execute complex tasks, and interact directly with stakeholders. While these autonomous systems introduce exciting new use cases, they also create substantial challenges. For example, an AI agent automating customer refunds might interact with financial systems, log reason codes for trends analysis, monitor transactions for anomalies, and ensure compliance with company and regulatory policies — all while navigating potential risks like fraud or misuse. ... Early SaaS and Web2 companies often relied on reactive strategies to address governance issues as they emerged, adopting a “wait and see” approach. SaaS companies focused on basics like release sign-offs, access controls, and encryption, while Web2 platforms struggled with user privacy, content moderation, and data misuse. This reactive approach was costly and inefficient. SaaS applications scaled with manual processes for user access management and threat detection that strained resources. ... A continuous, automated approach is the key to effective AI governance. By embedding tools that enable these features into their operations, companies can proactively address reputational, financial, and legal risks while adapting to evolving compliance demands.


7 types of tech debt that could cripple your business

For a software developer, writing code often feels easier than reviewing someone else’s code and understanding how to use it. Searching for and integrating open source libraries and components can be easier still, as the weight of long-term support isn’t at the top of many developers’ minds when they are pressured to meet deadlines and deploy frequently. ... “The average app contains 180 components, and failing to update them leads to bloated code, security gaps, and mounting technical debt. Just as no one wants to run mission-critical systems on decade-old hardware, modern SDLC and DevOps practices must treat software dependencies the same way — keep them updated, streamlined, and secure.” ... CIOs with sprawling architectures should consider simplification, and one step is to establish architectural observability practices. These include creating architecture and platform performance indicators by aggregating application-level monitoring, observability, code quality, total costs, DevOps cycle times, and incident metrics as a tool to evaluate where architecture impacts business operations. ... Joe Byrne, field CTO of LaunchDarkly, says, “Cultural debt can have several negative impacts, but specific to AI, a lack of proper engineering practices, resistance to innovation, tribal knowledge gaps, and failure to adopt modern practices all create significant roadblocks to successfully leveraging AI.”


Why people are the key to successful cloud migration

The consequences of overlooking the human element are significant. According to McKinsey’s research, European companies are five times more likely than their US counterparts to pursue an IT-led cloud migration, focusing primarily on ‘lifting and shifting’ existing workloads rather than transforming how people work. This approach might explain why many organisations are seeing limited returns on their investment. Migration creates a good opportunity to review methods and processes while ensuring teams have the tools they need to work efficiently. Without attention to both human impact and technological enablement, even the most technically sound migration can fail to deliver the desired results. ... The true value of cloud transformation extends far beyond technical metrics and cost savings. Organisations need to track employee satisfaction and engagement levels alongside traditional technical key performance indicators (KPIs). This includes monitoring adoption rates of new tools, time saved through improved processes, and skill development achievements. Business impact measures should encompass customer satisfaction, process efficiency improvements, and innovation metrics. Long-term value indicators such as employee retention rates, internal mobility, and team productivity provide a more complete picture of transformation success.


Evolving Technology and Corporate Culture Toward Autonomous IT and Agentic AI

Corporate culture will shape how seamlessly and effectively the modernization effort toward a more autonomous and intelligent enterprise operation will unfold. The best approaches align technology and culture along a structured journey model — assessing both the IT and workforce needs around data maturity, process automation, AI readiness, and success metrics. Such efforts can quickly propel organizations toward the largely self-sustaining capabilities and ecosystem of Agentic AI and autonomic IT. As IT teams become more comfortable relying on AI, machine learning, predictive analytics, and automation, they can begin to turn their attention to unlocking the power of Agentic AI. The term refers to advanced scenarios where machine and human resources blend to create an AI assistant capable of delivering accurate predictions, tailored recommendations, and intelligent automations that drive business efficiency and innovation. Such systems leverage generative AI and unsupervised ML combined with human-in-the-loop automation training models to revolutionize IT operations. Relinquishing the responsibility of mundane, repetitive tasks, IT teams can begin to reap the benefits of autonomic IT — a seamlessly integrated ecosystem of advanced technologies designed to enhance IT operations.


Building a Data Governance Strategy

In implementing a data strategy, a company can face several obstacles, including: Cultural resistance: Cultural resistance emerges throughout the DG journey, from initial strategy discussions through implementation and beyond. Teams and departments may resist changes to their established processes and workflows, requiring sustained change management efforts and clear communication of benefits. Lack of resources: Viewing governance solely through a compliance lens leads to underinvestment, with 54% of data and analytics professionals finding the biggest hurdle is a lack of funding for their data programs. In the meantime, the demands of data governance have increased significantly due to a complex and evolving regulatory landscape and accelerated digital transformation where businesses must rely heavily on data-driven systems. Scalability: Modern enterprises must manage data across an increasingly complex ecosystem of cloud platforms, personal devices, and decentralized systems. This dispersed data environment creates significant challenges for maintaining consistent governance practices and data quality. Demands for unstructured data: The growing demand for AI-driven insights requires organizations to govern increasing volumes of unstructured data, including videos, emails, documents, and images.


How CISOs can meet the demands of new privacy regulations

The responsibility for implementing and documenting privacy controls and policies falls primarily on the shoulders of the CISO, who must ensure that the organization’s procedures for managing information protect privacy data and meet regulatory requirements. Performing risk assessments that identify weaknesses and demonstrate that they are being addressed is a crucial step in the process, even more so now that they must be ready to produce risk assessments whenever regulatory bodies request them. As if CISOs needed an added incentive, regulators at the state and federal levels have been trending toward targeting organization management, particularly CISOs, in the wake of costly breaches. The consequences include hefty fines for organizations and, in worst-case scenarios, even jail sentences for CISOs. Responsibility for privacy protections also extends to third-party risks. Organizations can’t afford to rely solely on promises made by third-party providers because regulators and state attorneys general can hold an organization responsible for a breach, even if the exploited vulnerability belonged to a provider. Organizations need to implement a framework for third-party risk management that includes performing due diligence on the security postures of third parties.


Guess Who’s Hiding in Your Supply Chain

There are plenty of high-profile attacks that demonstrate how hackers use the supply chain to access their target organisation. One of the most notable attacks on a supply chain was on SolarWinds, where hackers deployed malicious code into its IT monitoring and management software, enabling them to reach other companies within the supply chain. Once hackers were inside, they were able to compromise data, networks and systems of thousands of public and private organisations. This included spying on government agencies, in what became a major breach of national security. Government departments noticed that sensitive emails were missing from their systems, and major private companies such as Microsoft, Intel, and Deloitte were also affected. With internal workings exposed, hackers could also gain access to data and networks of customers and partners of those originally affected, allowing the attack to spiral in impact and affect thousands of organisations. Visibility is key to guard against future attacks – without it an organisation can’t effectively or reliably identify suspicious activity. ... When you put this into perspective, the amount of damage a cyber intruder could cause becomes unfathomable. Security teams must deploy a multi-layered arsenal of tools and tactics to cover their bases and should provision identities with only as much access as is absolutely necessary.


11 ways cybercriminals are making phishing more potent than ever

Brand impersonation continues to be a favored method to trick users into opening a malicious file or entering their details on a phishing site. Threat actors typically impersonate major brands, including document sharing platforms such as Microsoft’s OneDrive and SharePoint, and, increasingly frequently, DocuSign. Attackers exploit employees’ inherent trust in commonly used applications by spoofing their branding before tricking recipients into entering credentials or approving fraudulent document requests. ... Another significant phishing evolution involves abusing trusted services and content delivery platforms. Attackers are increasingly using legitimate document-signing and file-hosting services to distribute phishing lures. They first upload malicious content to a reputable provider, then craft phishing emails or messages that reference these trusted services and content delivery platforms. “Since these services host the attacker’s content, vigilant users who check URLs before clicking may still be misled, as the links appear to belong to legitimate and well-known platforms,” warns Greg ... Image-based phishing is becoming more complex. For example, fraudsters are crafting images to look like text-based emails to improve their apparent authenticity, while still bypassing conventional email filters.


Daily Tech Digest - March 24, 2025


Quote for the day:

"To be an enduring, great company, you have to build a mechanism for preventing or solving problems that will long outlast any one individual leader" -- Howard Schultz



Identity Authentication: How Blockchain Puts Users In Control

One key benefit of blockchain is that it's decentralized. Instead of a single database that records user information -- one ripe for data breaches -- blockchain uses something called decentralized identifiers (DIDs). DIDs are cryptographic key pairs that allow users to have more control over their online identities. They are becoming more popular, with Forbes claiming they're the future of online identity. To explain what DIDs are, let's start by explaining what they are not. Today, most people interact online via a centralized identifier, such as an email address, username or password. This allows the database to store your digital information on that platform. But single databases are more vulnerable to data breaches and users have no control over their data. When we use centralized platforms, we really hand over all our trust to whatever platform we use. DIDs provide a new way to access information while allowing users to maintain ownership. ... That said, identity authentication and blockchain technology don't have to be complex topics. They can be easy to use but require intuitive platforms and simple user experiences. The EU's digital policies offer a strong foundation for integrating blockchain. If blockchain becomes part of the initial rulemaking, it could fuel more widespread adoption. There's a long way to go before people feel confident understanding concepts like DIDs. 
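
The cryptographic core of a DID is simply a key pair the user generates and holds: the public half becomes the identifier material, and the private half signs proofs of control that any verifier can check without a central database. A simplified sketch using Python's cryptography package; the did:example prefix and hex encoding are deliberate simplifications, since real methods such as did:key use multicodec and multibase encoding instead:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# The user, not a platform, generates and holds this key pair.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Simplified identifier built from the raw public key bytes.
raw = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
did = "did:example:" + raw.hex()

# Proving control: sign a challenge; any verifier can check it against
# the DID's public key, with no centralized identity database involved.
signature = private_key.sign(b"login-challenge-123")
public_key.verify(signature, b"login-challenge-123")  # raises if invalid
print(did)
```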


Cloud providers aren’t delivering on security promises

With 44% of businesses already spending between £101,000 and £250,000 on cloud migrations in the past 12 months, there is a clear need for organizations to ensure they are working with trusted partners who can meet this security need. Otherwise, companies will run the risk of having to spend more to not only move to new suppliers but also respond to the cost of a data breach. The cost and resources needed for organizations to boost their own security skills and technology is often too prohibitive. ... However, despite the clear advantages to security and job stability, only 22% of CISOs use a channel partner in their cloud migration process. This is leaving many exposed to unnecessary risk from attacks or job loss. “It is clear that many organizations are struggling when it comes to securing cloud environments. A combination of underdelivering cloud providers and a lack of in-house skills is resulting in a dangerous situation which can leave valuable company data exposed to risk. Simply adding more technology will not solve this problem,” said Clare Loveridge, VP and GM EMEA at Arctic Wolf. “Securing the cloud is a shared responsibility between the cloud provider and the organization. While cloud providers offer good security tools, it is important that you have a team of security experts to help you run the operation.”


CISOs are taking on ever more responsibilities and functional roles – has it gone too far?

“The CISO role has expanded significantly over the years as companies realize that information security has a unique picture of what is going on across the organization,” says Doug Kersten, CISO of software company Appfire. “Traditionally, CISOs have focused on fundamental security controls and threat mitigation,” he adds. “However, today they are increasingly expected to play a central role in maintaining business resilience and compliance. Many CISOs are now responsible for risk management, business continuity, and disaster recovery as well as overseeing regulatory compliance across various jurisdictions.” ... “We’re seeing a convergence of roles under head of security because of the background and problem-solving skills of these people. They have become problem-solver in chief,” says Steve Martano, IANS Research faculty and executive cyber recruiter at Artico Search. That, though, comes with challenges. “CISOs are already experiencing high levels of stress, with recent data highlighting that nearly one in four CISOs are considering leaving the profession due to stress,” Kersten says. “Many CISOs only stay in the role for two to three years. With this, the expectations placed on CISOs are undeniably growing, and organizations risk overburdening them without sufficient resources and support. ..."


Fixing the Fixing Process: Why Automation is Key to Cybersecurity Resilience

Cybersecurity environments have seen nonstop evolution, driven by increasingly sophisticated attack techniques, the expansion of complex cloud-native architecture, and the rise of AI-powered threats that outpace traditional defense strategies. At the same time, development timelines have accelerated, pushing security teams to keep pace without becoming a bottleneck. ... It’s a daunting and intimidating task that requires sufficient time and attention. Moreover, adopting automation means ensuring that security and development teams trust the outputs. Many organizations struggle with this transition because automation tools, if not properly configured, can generate inaccuracies or miss critical context. Security teams fear losing control over decision-making, while developers worry about receiving even more noise if automation isn’t fine-tuned. ... Attackers are already leveraging AI to exploit vulnerabilities rapidly, while security teams often rely on static and manual processes that have no chance of keeping up. AI-enabled EAPs help teams proactively identify and mitigate vulnerabilities before adversaries can exploit them. By automating exposure assessments, organizations can shrink the reconnaissance window available to attackers, limiting their ability to target common vulnerabilities and exposures (CVEs), security misconfigurations, software flaws, and other weaknesses. 


Can we make AI less power-hungry? These researchers are working on it.

Two key drivers of that efficiency were the increasing adoption of GPU-based computing and improvements in the energy efficiency of those GPUs. “That was really core to why Nvidia was born. We paired CPUs with accelerators to drive the efficiency onward,” said Dion Harris, head of Data Center Product Marketing at Nvidia. In the 2010–2020 period, Nvidia data center chips became roughly 15 times more efficient, which was enough to keep data center power consumption steady. ... The increasing power consumption has pushed the computer science community to think about how to keep memory and computing requirements down without sacrificing performance too much. “One way to go about it is reducing the amount of computation,” said Jae-Won Chung, a researcher at the University of Michigan and a member of the ML Energy Initiative. One of the first things researchers tried was a technique called pruning, which aimed to reduce the number of parameters. Yann LeCun, now the chief AI scientist at Meta, proposed this approach back in 1989, terming it (somewhat menacingly) “the optimal brain damage.” You take a trained model and remove some of its parameters, usually targeting the ones with a value of zero, which add nothing to the overall performance. 
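
Magnitude-based pruning, the simplest descendant of that idea, is only a few lines: zero out the smallest-magnitude weights and keep the rest. A toy NumPy sketch (real pipelines usually fine-tune the model afterward to recover any lost accuracy):

```python
import numpy as np

def prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = prune(w, sparsity=0.75)
print(f"nonzero before: {np.count_nonzero(w)}, after: {np.count_nonzero(pruned)}")
```

Fewer nonzero parameters means fewer multiply-accumulate operations at inference time, which is where the energy saving comes from.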


Five Years of Cloud Innovation: 2020 to 2025

The FinOps organization and the implementation of FinOps standards across cloud providers has been the most impactful development over the last five years, states Allen Brokken, head of customer engineering at Google, in an online interview. This has fundamentally transformed how organizations understand the business value of their cloud deployments, he states. "Standardization has enabled better comparisons between cloud providers and created a common language for technical teams, business unit owners, and CFOs to discuss cloud operations." ... The public cloud has democratized access to technology and increased accessibility for organizations across industries that have faced intense volatility and change in the past five years, Adams observes via email. "This innovation has facilitated a new level of co-innovation and enabled new business models that allow companies to realize future opportunities with ease." Public cloud platforms offer adopters immense benefits, Adams says. "With the public cloud, businesses can scale IT infrastructure on-demand without significant upfront investment." This flexibility comes with a reduced total cost of ownership, since public cloud solutions often lead to lower costs for hardware, software and maintenance. 


Cloud, colocation or on-premise? Consider all options

Following the rush to the cloud, the cost implications should have prompted some companies to move back to on-premise, but it hasn’t, according to Lamb. “I thought it might happen with AI, because potentially the core per hour rate for AI is going to be far higher, but it hasn’t.” Lamb’s advice for CIOs is to be wary of being tied into particular providers or AI models, noting that Microsoft is creating models and not charging for them, knowing that companies will still be paying for the compute to use them. Lamb also says that, whether we’re talking on-premise, colocation or cloud, the potential for retrofitting existing capacity is limited, at least when it comes to capacity aimed at AI. After all, those GPUs often require liquid cooling to the chip. This changes the infrastructure equation, says Lamb, increasing the footprint for cooling infrastructure in comparison to compute. Quite apart from the real estate impact, this isn’t something most enterprises will want to tackle. Also, cooling and power will only become more complicated. Andrew Bradner, Schneider Electric’s general manager for cooling, is confident that many sectors will continue to operate on-premise datacentre capacity – life sciences, fintech and financial, for example.


How GenAI is Changing Work by Supporting, Not Replacing People

A common misconception is that AI adoption leads to workforce reduction. While automation has historically replaced repetitive, manual labor, the rise of GenAI is fundamentally different. Unlike traditional automation, which replaces human effort, GenAI amplifies human potential by reducing workload friction. The same Science study reinforces this point: AI doesn’t just increase speed; it also improves work quality. Employees using AI-powered tools experienced a 40% reduction in task completion time and an 18% improvement in output quality, demonstrating that AI is an efficiency enabler rather than a job replacer. Consider the historical trend: The Industrial Revolution automated factory work but also created entirely new job categories and industries. Similarly, the digital revolution reduced the need for clerical roles yet generated millions of jobs in software development, cybersecurity, and IT infrastructure. ... Biases in machine learning models are still an issue since AI based on data from the past will perpetuate prevailing biases, and thus human monitoring is critical. GenAI can also generate misleading or inaccurate results, further highlighting the need for oversight. AI can generate reports, but it cannot negotiate deals, understand organizational culture, or make leadership decisions.


Frankenstein Fraud: How to Protect Yourself Against Synthetic Identity Fraud

Synthetic identity fraud is an exercise in patience, at least on the criminal's part, especially if they're using the Social Security number of a child. The identity is constructed by using a real Social Security number in combination with an unassociated name, address, date of birth, phone number or other piece of identifying information to create a new "whole" identity. Criminals can purchase SSNs on the dark web, steal them from data breaches or con them from people through things like phishing attacks and other scams. Synthetic identity theft flourishes because of a simple flaw in the US financial and credit system. When the criminal uses the synthetic identity to apply to borrow from a lender, it's typically denied credit because there's no record of that identity in their system. The thieves are expecting this since children and teens may have no credit or a thin history, and elderly individuals may have poor credit scores. Once an identity applies for an account and is presented to a credit bureau, it's shared with other credit bureaus. That act is enough to allow credit bureaus to recognize the synthetic identity as a real person, even if there's little activity or evidence to support that it's a real person. Once the identity is established, the fraudsters can start borrowing credit from lenders.


Will AI erode IT talent pipelines?

“The pervasive belief that gen AI is an automation technology, that gen AI increases productivity by automation, is a huge fallacy,” says Suda, though he admits it will eliminate the need for certain skills — including IT skills. “Losing skills is fine,” he says, adding that machines have been eliminating the need for certain skills for centuries. “What gen AI is helping us do is learn new skills and learn new things, and that does create an impact on the workforce. “What it is eroding is the opportunity for junior IT staff to have the same experiences that junior staff have today or yesterday,” he says. “Therefore, there’s an erosion of yesterday’s talent pipeline. Yesterday’s talent pipeline is changing, and the steps to get through it are changing from what we have today to what we need [in the future].” Steven Kirz, senior partner for operations excellence at consulting firm West Monroe, shares similar insights. Like Suda, Kirz says AI doesn’t “universally make everybody more productive. It’s unequal across roles and activities.” Kirz also says both research and anecdotal evidence show that AI is replacing lower-level, mundane, and repetitive tasks. In IT, that tends to be reporting, clerical, data entry, and administrative activities. “And routine roles being replaced [by technology] doesn’t feel new to me,” he adds.


Daily Tech Digest - March 23, 2025


Quote for the day:

"Law of Leadership: A successful team with 100 members has 100 leaders." -- Lance Secretan


Citizen Development: The Wrong Strategy for the Right Problem

The latest generation of citizen development offenders are the low-code and no-code platforms that promise to democratize software development by enabling those without formal programming education to build applications. These platforms fueled enthusiasm around speedy app development — especially among business users — but their limitations are similar to the generations of platforms that came before. ... Don't get me wrong — the intentions behind citizen development come from a legitimate place. More often than not, IT needs to deliver faster to keep up with the business. But these tools promise more than they can deliver and, worse, usually result in negative unintended consequences. Think of it as a digital house of cards, where disparate apps combine to create unscalable systems that can take years and/or millions of dollars to fix. ... Struggling to keep up with business demands is a common refrain for IT teams. Citizen development has attempted to bridge the gap, but it typically creates more problems than solutions. Rather than relying on workarounds and quick fixes that potentially introduce security risks and inefficiency — and certainly rather than disintermediating IT — businesses should embrace the power of GenAI to support their developers and ultimately to make IT more responsive and capable.


Researchers Test a Blockchain That Only Quantum Computers Can Mine

The quantum blockchain presents a path forward for reducing the environmental cost of digital currencies. It also provides a practical incentive for deploying early quantum computers, even before they become fully fault-tolerant or scalable. In this architecture, the cost of quantum computing — not electricity — becomes the bottleneck. That could shift mining centers away from regions with cheap energy and toward countries or institutions with advanced quantum computing infrastructure. The researchers also argue that this architecture offers broader lessons. ... “Beyond serving as a proof of concept for a meaningful application of quantum computing, this work highlights the potential for other near-term quantum computing applications using existing technology,” the researchers write. ... One of the major limitations, as mentioned, is cost. Quantum computing time remains expensive and limited in availability, even as energy use is reduced. At present, proof of quantum work (PoQ) may not be economically viable for large-scale deployment. As progress continues in quantum computing, those costs may be mitigated, the researchers suggest. D-Wave machines also use quantum annealing — a different model from the quantum computing platforms pursued by companies like IBM and Google.


Enterprise Risk Management: How to Build a Comprehensive Framework

Risk objects are the human capital, physical assets, documents and concepts (e.g., “outsourcing”) that pose risk to an organization. Stephen Hilgartner, a Cornell University professor, once described risk objects as “sources of danger” or “things that pose hazards.” The basic idea is that any simple action, like driving a car, has associated risk objects – such as the driver, the car and the roads. ... After the risk objects have been defined, the risk management processes of identification, assessment and treatment can begin. The goal of ERM is to develop a standardized system that not only acknowledges the risks and opportunities in every risk object but also assesses how the risks can impact decision-making. For every risk object, hazards and opportunities must be acknowledged by the risk owner. Risk owners are the individuals managerially accountable for the risk objects. These leaders and their risk objects establish a scope for the risk management process. Moreover, they ensure that all risks are properly managed based on approved risk management policies. To complete all aspects of the risk management process, risk owners must guarantee that risks are accurately tied to the budget and organizational strategy.
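To make the idea concrete, here is a minimal sketch of how a risk register might model risk objects, their accountable owners, and the hazards tied to them. The class names, fields, and the 1-to-5 scoring scale are illustrative assumptions for this example, not part of any ERM standard.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: int  # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int      # assumed scale: 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact score for prioritization.
        return self.likelihood * self.impact

@dataclass
class RiskObject:
    name: str            # e.g., "outsourcing", "company fleet"
    owner: str           # the individual managerially accountable
    hazards: list[Risk] = field(default_factory=list)
    opportunities: list[str] = field(default_factory=list)

    def top_hazards(self, n: int = 3) -> list[Risk]:
        # Surface the highest-scoring risks for the owner's review.
        return sorted(self.hazards, key=lambda r: r.score, reverse=True)[:n]

# Usage: a risk owner registers hazards against an object they manage.
fleet = RiskObject(name="company fleet", owner="operations director")
fleet.hazards.append(Risk("driver fatigue on long routes", 3, 4))
fleet.hazards.append(Risk("poor road conditions in winter", 4, 3))
print([r.description for r in fleet.top_hazards()])
```

A structure like this makes the scoping described above explicit: every hazard hangs off a named risk object, and every risk object has exactly one accountable owner.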


Choosing consequence-based cyber risk management to prioritize impact over probability, redefine industrial security

Nonetheless, the biggest challenge in applying consequence-based cyber risk management is the availability of holistic information about cyber events and their outcomes. Most companies struggle to gauge the probable damage of attacks because historical data is inadequate or information systems are fragmented. This has driven increased adoption of analytics and threat intelligence technologies that let organizations simulate the ‘most likely’ outcomes of cyber-attacks and predict probable scenarios. ... “A winning strategy incorporates prevention and recovery. Proactive steps like vulnerability assessments, threat hunting, and continuous monitoring reduce the likelihood and impact of incidents,” according to Morris. “Organizations can quickly restore operations when incidents occur with robust incident response plans, disaster recovery strategies, and regular simulation exercises. This dual approach is essential, especially amid rising state-sponsored cyberattacks.” ... “To overcome data limitations, organizations can combine diverse data sources, historical incident records, threat intelligence feeds, industry benchmarks, and expert insights, to build a well-rounded picture,” Morris detailed. “Scenario analysis and qualitative assessments help fill in gaps when quantitative data is sparse. Engaging cross-functional teams for continuous feedback ensures these models evolve with real-world insights.”
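As a toy illustration of what "consequence-based" prioritization means in practice, the sketch below ranks scenarios by worst-case operational impact first and likelihood only second, which is the inverse of a probability-first approach. Every scenario name and figure is invented for the example.

```python
# Hypothetical scenarios; in practice these would come from scenario
# analysis, incident records, and threat intelligence feeds.
scenarios = [
    {"name": "ransomware halts production line", "impact_hours": 72,  "likelihood": 0.10},
    {"name": "phishing leads to data leak",      "impact_hours": 8,   "likelihood": 0.40},
    {"name": "PLC firmware tampering",           "impact_hours": 120, "likelihood": 0.02},
]

# Consequence-first ordering: impact dominates, likelihood breaks ties.
ranked = sorted(scenarios, key=lambda s: (-s["impact_hours"], -s["likelihood"]))
for s in ranked:
    print(f'{s["name"]}: {s["impact_hours"]}h downtime, p={s["likelihood"]}')
```

Note how the rare-but-devastating firmware scenario tops the list even though the phishing scenario is twenty times more likely; that reordering is the essence of the approach.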


The CTO vs. CMO AI power struggle - who should really be in charge?

An argument can be made that the CTO should oversee everything technical, including AI. Your CTO is already responsible for your company's technology infrastructure, data security, and system reliability, and AI directly impacts all these areas. But does that mean the CTO should dictate what AI tools your creative team uses? Does the CTO understand the fundamentals of what makes good content or the company's marketing objectives? That sounds more like a job for your creative team or your CMO. On the other hand, your CMO handles everything from brand positioning and revenue growth to customer experiences. But does that mean they should decide what AI tools are used for coding, managing company-wide processes, or even integrating company data? You see the problem, right? ... Once a tool is chosen, our CTO steps in, performing due diligence to ensure our data stays secure, confidential information isn't leaked, and none of our secrets end up on the dark web. That said, if your organization is large enough to need a dedicated Chief AI Officer (CAIO), their role shouldn't be deciding AI tools for everyone. Instead, they're a mediator who connects the dots between teams.


Why Cyber Quality Is the Key to Security

To improve security, organizations must adopt foundational principles and assemble teams accountable for monitoring safety concerns. Cyber resilience and cyber quality are two pillars that every institution — especially at-risk ones — must embrace. ... Do we have a clear and tested cyber resilience plan to reduce the risk and impact of cyber threats to our business-critical operations? Is there a designated team or individual focused on cyber resilience and cyber quality? Are we focusing on long-term strategies, targeted at sustainable and proactive solutions? If the answer to any of these questions is no, something needs to change. This is where cyber quality comes in. Cyber quality is about prioritization and a sustainable long-term strategy for cyber resilience, focused on proactive, preventative measures that ensure risk mitigation. It is not about ticking checkboxes on controls that show very little value in the long run. ... Technology alone doesn't solve cybersecurity problems — people are the root of both the challenges and the solutions. By embedding cyber quality into the core of your operations, you transform cybersecurity from a reactive cost center into a proactive enabler of business success. Organizations that prioritize resilience and proactive governance will not only mitigate risks but thrive in the digital age.


ISO 27001: Achieving data security standards for data centers

Achieving ISO 27001 certification is not an overnight process. It’s a journey that requires commitment, resources, and a structured approach to align the organization’s information security practices with the standard’s requirements. The first step in the process is conducting a comprehensive risk assessment. This assessment involves identifying potential security risks and vulnerabilities in the data center’s infrastructure and understanding the impact these risks might have on business operations. It forms the foundation for the ISMS and determines which security controls are necessary. ... A crucial, yet often overlooked, aspect of ISO 27001 compliance is the proper destruction of data. Data centers are responsible for managing vast amounts of sensitive information, and ensuring that data is securely sanitized when it is no longer needed is a critical component of maintaining information security. Improper data disposal can lead to serious security risks, including unauthorized access to confidential information and data breaches. ... Whether it's personal information, financial records, intellectual property, or any other type of sensitive data, the potential risks of improper disposal are too great to ignore. Data breaches and unauthorized access can result in significant financial loss, legal liabilities, and reputational damage.


Understanding code smells and how refactoring can help

Typically, code smells stem from a failure to write source code in accordance with necessary standards. In other cases, a smell indicates that the documentation required to clearly define the project's development standards and expectations was incomplete, inaccurate or nonexistent. Many situations can cause code smells, such as improper dependencies between modules, an incorrect assignment of methods to classes or needless duplication of code segments. Code that is particularly smelly can eventually cause profound performance problems and make business-critical applications difficult to maintain. The source of a code smell may also cause cascading issues and failures over time. ... The best time to refactor code is before adding updates or new features to an application: it is good practice to clean up existing code before programmers add any new code. Another good time to refactor is just after a team has deployed code into production, when developers have more time than usual to clean up code before they're assigned a new task or project. One caveat is that teams must ensure there is complete test coverage before refactoring an application's code. Otherwise, the refactoring process could simply restructure broken pieces of the application for no gain.
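As a before-and-after sketch, here is one of the smells named above, needless duplication, and a refactoring that removes it. The functions and data are invented purely for illustration.

```python
# Before: the same pricing logic is duplicated in two places, so any
# change (e.g., adding tax) must be made twice -- a classic code smell.
def monthly_report_total(orders):
    total = 0
    for order in orders:
        total += order["price"] * order["quantity"]
    return total

def invoice_total(orders):
    total = 0
    for order in orders:
        total += order["price"] * order["quantity"]
    return total

# After: the duplication is extracted into a single helper, so the
# pricing rule lives in exactly one place and both callers share it.
def order_total(orders):
    return sum(o["price"] * o["quantity"] for o in orders)

# A test like this is what makes the refactoring safe to perform.
assert order_total([{"price": 10, "quantity": 3}]) == 30
```

The behavior is unchanged, which is the point of refactoring, but the maintenance burden drops, and the assertion at the end shows why the test-coverage caveat matters: without it, the "after" version could silently restructure broken logic.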


Handling Crisis: Failure, Resilience And Customer Communication

Failure is something leaders want to reduce as much as they can, and it’s possible to design products with graceful failure in mind. Also called graceful degradation, it can be thought of as tolerance to faults: core functions remain usable even as components or connectivity fail. You want any failure to cause as little damage or loss of service as possible. Think of it as a stopover on the way to failing safely: when a plane’s engines fail, we want it to glide, not plummet. ... Resilience requires being on top of it all: monitoring, visibility, analysis, and meeting and exceeding the SLAs your customers demand. Service providers, particularly in tech, can focus on a full suite of telemetry from the operational side of the business and decide their KPIs and OKRs. You can also look at your customers’ perceptions via churn rate, customer lifetime value, Net Promoter Score and so on. ... If you are to cope with the speed and scale of potential technical outages, this is essential. Accuracy, then speed, should be your priorities when communicating about outages. The more of both, the better, but accuracy matters most, as it allows customers to make informed choices as they manage the impact on their own businesses.
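A rough sketch of graceful degradation in code: the hypothetical service below falls back to the last known-good cached value when the live call fails, so the core function degrades rather than disappearing. All names and the simulated outage are assumptions for the example.

```python
import time

_cache = {}  # last known-good values: currency -> (rate, fetched_at)

def fetch_live_rate(currency: str) -> float:
    # Simulated outage: in a real system this would call an upstream API.
    raise TimeoutError("upstream service unavailable")

def get_rate(currency: str) -> float:
    try:
        rate = fetch_live_rate(currency)
        _cache[currency] = (rate, time.time())
        return rate
    except TimeoutError:
        if currency in _cache:
            rate, fetched_at = _cache[currency]
            # Degraded, not broken: serve stale data and let callers
            # decide whether the staleness is acceptable.
            return rate
        raise  # no safe fallback exists; fail explicitly

_cache["EUR"] = (1.08, time.time())  # seed with a previously fetched value
print(get_rate("EUR"))  # prints the cached rate despite the outage
```

This is the software equivalent of gliding instead of plummeting: the answer may be slightly stale, but the service keeps serving while the fault is repaired.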


Approaches to Reducing Technical Debt in Growing Projects

Technical debt, also known as “tech debt,” refers to the extra work developers incur by taking shortcuts or delaying necessary code improvements during software development. Though sometimes these shortcuts serve a short-term goal — like meeting a tight release deadline — accumulating too many compromises often results in buggy code, fragile systems, and rising maintenance costs. ... Massive rewrites can be risky and time-consuming, potentially halting your roadmap. Incremental refactoring offers an alternative: focus on high-priority areas first, systematically refining the codebase without interrupting ongoing user access or new feature development. ... Not all parts of your application contribute to technical debt equally. Concentrate on elements tied directly to core functionality or user satisfaction, such as payment gateways or account management modules. Use metrics like defect density or customer support logs to identify “hotspots” that accumulate excessive technical debt. ... Technical debt often creeps in when teams skip documentation, unit tests, or code reviews to meet deadlines. A clear “definition of done” helps ensure every feature meets quality standards before it’s marked complete.
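To illustrate the "hotspot" idea above, the sketch below ranks modules by defect density (defects per thousand lines of code). The module names and counts are invented; in practice the inputs would come from the bug tracker and the repository.

```python
# Hypothetical per-module statistics, e.g., aggregated from a bug
# tracker and a line-count tool over the last release cycle.
modules = {
    "payments":  {"defects": 42, "loc": 6_000},
    "accounts":  {"defects": 18, "loc": 9_000},
    "reporting": {"defects": 5,  "loc": 12_000},
}

def defect_density(stats: dict) -> float:
    # Defects per 1,000 lines of code (KLOC).
    return stats["defects"] / stats["loc"] * 1_000

hotspots = sorted(modules, key=lambda m: defect_density(modules[m]), reverse=True)
for name in hotspots:
    print(f"{name}: {defect_density(modules[name]):.1f} defects/KLOC")
```

Refactoring effort goes to the top of this list first, since that is where paying down debt yields the biggest reduction in bugs and maintenance cost per hour invested.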