Daily Tech Digest - March 03, 2025


Quote for the day:

“If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work.” -- Thomas J. Watson




How to Create a Winning AI Strategy

“A winning AI strategy starts with a clear vision of what problems you’re solving and why,” says Surace. “It aligns AI initiatives with business goals, ensuring every project delivers measurable value. And it builds in agility, allowing the organization to adapt as technology and market conditions evolve.” ... AI is also not a solution to all problems. Like any other technology, it’s simply a tool that needs to be understood and managed. “Proper AI strategy adoption will require iteration, experimentation, and, inevitably, failure to end up at real solutions that move the needle. This is a process that will require a lot of patience,” says Lionbridge’s Rowlands-Rees. “[E]veryone in the organization needs to understand and buy in to the fact that AI is not just a passing fad -- it’s the modern approach to running a business. The companies that don’t embrace AI in some capacity will not be around in the future to prove everyone else wrong.” Organizations face several challenges when implementing AI strategies. For example, regulatory uncertainty is a significant hurdle, and navigating the complex and evolving landscape of AI regulations across different jurisdictions can be daunting. ... “There’s a gap between AI’s theoretical potential and its practical business application. Companies invest millions in AI initiatives that prioritize speed to market over actual utility,” Palmer says.


Work-Life Balance: A Practitioner Viewpoint

Organisation policymakers must ensure well-funded preventive health screening at all levels so that those with identified health risks can be suitably advised and guided on their career choices. They can be helped to ease off their career accelerators, and their needs can be accommodated in the best possible manner. This requires a mature HR policy-making and implementation framework in which identifying problems and issues does not negatively impact employees' careers. Deploying programs that help employees identify and overcome stress will be beneficial. A considerable risk is that individuals adopt negative coping means like alcohol or tobacco, or retreat into a shell, to deal with their stress, and that can take an enormous toll on their well-being. Kindling purposeful passion alongside work is yet another strategy. In today's world, an urgent new task is just a phone call away. One can cultivate a purposeful passion that keeps one engaged alongside work; one can fall back on it to hold oneself together and draw inspiration. Purposeful passion can include acquiring a new skill in a sport, learning to play a musical instrument, learning a new dance form, playing with kids, spending quality time with family members in deliberate and planned ways, learning meditation, or working for environmental protection and other social causes.


The 8 new rules of IT leadership — and what they replace

The CIO domain was once confined to the IT department. But to be tightly partnered and co-lead with the business, CIOs must increasingly extend their expertise across all departments. “In the past they weren’t as open to moving out of their zone. But the role is becoming more fluid. It’s crossing product, engineering, and into the business,” says Erik Brown, an AI and innovation leader in the technology and experience practice at digital services firm West Monroe. Brown compares this new CIO to startup executives, who have experience and knowledge across multiple functional areas, who may hold specific titles but lead teams made up of workers from various departments, and who will shape the actual strategy of the company. “The CIOs are not only seeing strategy, but they will inform it; they can shape where the business is moving, and then they can take that to their teams and help them brainstorm how to support that. And that helps build more impactful teams,” Brown says. He continues: “You look at successful leaders of today and they’re all going to have a blended background. CIOs are far broader in their understanding, and where they’re more shallow, they’ll surround themselves with deputies that have that depth. They’re not going to assume they’re an expert in everything. So they may have an engineering background, for example, and they’ll surround themselves with those who are more experienced in that.”


Managing AI APIs: Best Practices for Secure and Scalable AI API Consumption

Managing AI APIs presents unique challenges compared to traditional APIs. Unlike conventional APIs that primarily facilitate structured data exchange, AI APIs often require high computational resources, dynamic access control and contextual input filtering. Moreover, large language models (LLMs) introduce additional considerations such as prompt engineering, response validation and ethical constraints that demand a specialized API management strategy. To effectively manage AI APIs, organizations need specialized API management strategies that can address unique challenges such as model-specific rate limiting, dynamic request transformations, prompt handling, content moderation and seamless multi-model routing, ensuring secure, efficient and scalable AI consumption. ... As organizations integrate multiple external AI providers, egress AI API management ensures structured, secure and optimized consumption of third-party AI services. This includes governing AI usage, enhancing security, optimizing cost and standardizing AI interactions across multiple providers. Below are some best practices for exposing AI APIs via egress gateways: Optimize Model Selection: Dynamically route requests to AI models based on cost, latency or regulatory constraints. 
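The "Optimize Model Selection" practice above can be sketched as a small routing policy inside an egress gateway. Everything below is illustrative: the model names, per-token costs, latencies, and region tags are invented placeholders, not real provider figures.

```python
from dataclasses import dataclass

@dataclass
class ModelRoute:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative figures only
    p95_latency_ms: int
    regions: tuple  # jurisdictions where the provider may process data

ROUTES = [
    ModelRoute("large-model", 0.0300, 1200, ("us", "eu")),
    ModelRoute("small-model", 0.0005, 300, ("us",)),
    ModelRoute("eu-hosted-model", 0.0100, 800, ("eu",)),
]

def pick_route(region: str, max_latency_ms: int, routes=ROUTES) -> ModelRoute:
    """Route to the cheapest model that satisfies latency and
    data-residency constraints for this request."""
    eligible = [r for r in routes
                if region in r.regions and r.p95_latency_ms <= max_latency_ms]
    if not eligible:
        raise LookupError("no model satisfies the constraints")
    return min(eligible, key=lambda r: r.cost_per_1k_tokens)
```

A gateway would evaluate such a policy per request, so that an EU-resident request with a tight latency budget lands on an EU-hosted model while a latency-tolerant US request takes the cheapest option.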


Charting the AI-fuelled evolution of embedded analytics

First of all, the technical requirements are high. To fit today’s suite of business tools, embedded analytics have to be extremely fast, lightweight, and very scalable, otherwise they risk dragging down the performance of the entire app. “As development and the web moves to single-page apps using frameworks like Angular and React, it becomes more and more critical that the embedded objects are lightweight, efficient, and scalable. In terms of embedded implementations for the developer, that’s probably one of the biggest things to look out for,” advises Perez. On top of that, there’s security, which is “another gigantic problem and headache for everybody,” observes Perez. “Usually, the user logs into the hosting app and then they need to query data relevant to them, and that involves a security layer.” Balancing the need for fast access to relevant data against the needs for compliance with data privacy regulations and security for your own proprietary information can be a complex juggling act. ... Additionally, the main benefit of embedded analytics is that it makes insights easily accessible to line-of-business users. “It should be very easy to use, with no prior training requirements, it should accept and understand all kinds of requests, and more importantly, it needs to seamlessly work on the company’s internal data,” says Perez.


The Ransomware Payment Ban – Will It Work?

A complete, although targeted, ban on ransom payments for public sector organisations is intended to remove cybercriminals’ financial motivation. However, without adequate investment in resilience, these organisations may be unable to recover as quickly as they need to, putting essential services at risk. Many NHS healthcare providers and local councils are already dealing with outdated infrastructure and cybersecurity staff shortages. If they are expected to withstand ransomware attacks without the option of paying, they must be given the resources, funding, and support to defend themselves and recover effectively. A payment ban may disrupt criminal operations in the short term. However, it doesn’t address the root of the issue – the attacks will persist, and vulnerable systems remain an open door. Cybercriminals are adaptive. If one revenue stream is blocked, they’ll find other ways to exploit weaknesses, whether through data theft, extortion, or targeting less-regulated entities. The requirement for private organisations to report payment intentions before proceeding aims to help authorities track ransomware trends. However, this approach risks delaying essential decisions in high-pressure situations. During a ransomware crisis, decisions must often be made in hours, if not minutes. Adding bureaucratic hurdles to these critical moments could exacerbate operational chaos.


The Modern CIO: Architect of the Intelligent Enterprise

Moving forward, traditional technology-driven CIOs will likely continue to lose leadership influence and C-suite presence as more strategic, business-focused CxOs move in. “There is a growing divergence. And the CIO that plays more of a modern CTO role will not have a seat at the table,” Clydesdale-Cotter said. This increased business focus demands that CIOs not only have a broad and deep technical understanding of how new technologies affect their company’s relationship with the broader market and how the business operates, but also command fluency in the vertical markets of their business and take accountability not just for the ROI of digital initiatives but for the broader success of the business as well. There’s probably no technology having a more significant impact today than AI adoption. ... The maturation of generative AI is moving CIOs from managing pilot deployments to enterprise-scale initiatives. Starting this year, analysts expect about half of CIOs to increasingly prioritize fostering data-centric cultures, ensuring clean, accessible datasets to train their AI models. However, challenges persist: a 2024 Deloitte survey found that 59% of employees resist AI adoption due to job security fears, requiring CIOs to lead change management programs that emphasize upskilling.


7 Steps to Building a Smart, High-Performing Team

Hiring is just the beginning — training is where the real magic happens. One of the biggest mistakes I see business owners make is throwing new hires into the deep end without proper onboarding. ... A strong team is built on clarity. Employees should know exactly what is expected of them from day one. Clear role definitions, performance benchmarks and a structured feedback system help employees stay aligned with company goals. Peter Drucker, often called the father of modern management, once said, "What gets measured gets managed." Establishing key performance indicators (KPIs) ensures that every team member understands how their work contributes to the company's broader objectives. ... Just like in soccer, some players will need a yellow card — a warning that performance needs to improve. The best teams address underperformance before it becomes a chronic issue. A well-structured performance review system, including monthly check-ins and real-time feedback, helps keep employees on track. A study from MIT Sloan Management Review found that teams that receive continuous feedback perform 22% better than those with annual-only reviews. If an employee continues to underperform despite clear feedback and support, it may be time for the red card — letting them go. 


How eBPF is changing container networking

eBPF is revolutionary because it works at the kernel level. Even though containers on the same host have their own isolated view of user space, says Rice, all containers and the host share the same kernel. Applying networking, observability, or security features here makes them instantly available to all containerized applications with little overhead. “A container doesn’t even need to be restarted, or reconfigured, for eBPF-based tools to take effect,” says Rice. Because eBPF operates at the kernel level to implement network policies and operations such as packet routing, filtering, and load balancing, it’s better positioned than other cloud-native networking technologies that work in the user space, says IDC’s Singh. ... “eBPF comes with overhead and complexity that should not be overlooked, such as kernel requirements, which often require newer kernels, additional privileges to run the eBPF programs, and difficulty debugging and troubleshooting when things go wrong,” says Sun. A limited pool of eBPF expertise is available for such troubleshooting, adding to the hesitation. “It is reasonable for service mesh projects to continue using and recommending iptables rules,” she says. Meta’s use of Cilium netkit across millions of containers shows eBPF’s growing usage and utility.


If Architectural Experimentation Is So Great, Why Aren’t You Doing It?

Architectural experimentation is important for two reasons: For functional requirements, MVPs are essential to confirm that you understand what customers really need. Architectural experiments do the same for technical decisions that support the MVP; they confirm that you understand how to satisfy the quality attribute requirements for the MVP. Architectural experiments are also important because they help to reduce the cost of the system over time. This has two parts: you will reduce the cost of developing the system by finding better solutions earlier, and by not going down technology paths that won’t yield the results you want. Experimentation also pays for itself by reducing the cost of maintaining the system over time by finding more robust solutions. Ultimately, running experiments is about saving money: reducing the cost of development by spending less on developing solutions that won’t work or that will cost too much to support. You can’t run experiments on every architectural decision and eliminate the cost of all unexpected changes, but you can run experiments to reduce the risk of being wrong about the most critical decisions. While stakeholders may not understand the technical aspects of your experiments, they can understand the monetary value.
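The money argument above can be made concrete with a simple expected-value calculation; the probabilities and costs below are purely hypothetical.

```python
def experiment_value(p_wrong: float, cost_of_wrong_choice: float,
                     experiment_cost: float) -> float:
    """Expected savings from an experiment that prevents a wrong
    architectural choice made with probability p_wrong."""
    return p_wrong * cost_of_wrong_choice - experiment_cost

# Hypothetical numbers: a 25% chance of committing to a data store that
# later costs $500k to replace, versus a two-week spike costing $20k.
savings = experiment_value(p_wrong=0.25, cost_of_wrong_choice=500_000,
                           experiment_cost=20_000)
```

With these assumptions the experiment is worth $105k in expected savings, which is the kind of monetary framing stakeholders can weigh without understanding the technical details.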


Daily Tech Digest - March 02, 2025


Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Weak cyber defenses are exposing critical infrastructure — how enterprises can proactively thwart cunning attackers to protect us all

Weak cybersecurity isn’t merely a corporate issue — it’s a national security risk. The 2021 Colonial Pipeline attack disrupted energy supplies and exposed vulnerabilities in critical industries. Rising geopolitical tensions, especially with China, amplify these risks. Recent breaches attributed to state-sponsored actors have exploited outdated telecommunications equipment and other legacy systems, revealing how complacency in updating technology can put national security in danger. For instance, last year’s hack of U.S. and international telecommunications companies exposed phone lines used by top officials and compromised data from systems for surveillance requests, threatening national security. Weak cybersecurity at these companies risks long-term costs, allowing state-sponsored actors to access sensitive information, influence political decisions and disrupt intelligence efforts. ... No company can face today’s cyber threats on its own. Collaboration between private businesses and government agencies is more than helpful — it’s imperative. Sharing threat intelligence in real-time allows organizations to respond faster and stay ahead of emerging risks. Public-private partnerships can also level the playing field by offering smaller companies access to resources like funding and advanced security tools they might not otherwise afford.


Evaluating the CISO

Delegation skills are an essential component that should be evaluated separately in this area. Effective delegation is critical to prevent the CISO from becoming a bottleneck, as micromanagement is unsuitable for the CISO role. Delegating complex tasks not only lightens your load but also helps foster the team’s overall competence. Without strong delegation skills, CISOs cannot rate themselves highly in their relationship with the internal security team. ... A CISO is hired to lead, manage, and support specific projects or programs such as migrating to a cloud or hybrid infrastructure, implementing zero-trust principles, launching security awareness initiatives, or assessing risks and creating a roadmap for post-quantum cryptography implementation. The success of these initiatives ultimately falls under the CISO’s responsibility. To execute these programs effectively, the CISO relies heavily on their team and internal organizational peers. As such, building strong relationships with both is essential for successfully delivering projects. ... A CISO must have responsibility for the information security budget, which includes funding for the team, tools, and services. Without direct control over the budget, it becomes challenging to rate the relationship with management highly, as budget ownership is a critical aspect of the CISO’s role.


Unraveling Large Language Model Hallucinations

You might have seen model hallucinations. They are the instances where LLMs generate incorrect, misleading, or entirely fabricated information that appears plausible. These hallucinations happen because LLMs do not “know” facts in the way humans do; instead, they predict words based on patterns in their training data. ... Supervised Fine-Tuning makes the model capable. However, even a well-trained model can generate misleading, biased, or unhelpful responses. Therefore, Reinforcement Learning with Human Feedback is required to align it with human expectations. We start with the assistant model, trained by SFT. For a given prompt, we generate multiple model outputs. Human labelers rank or score these outputs based on quality, safety, and alignment with human preferences. We use these data to train a whole separate neural network that we call a reward model. The reward model imitates human scores. It is a simulator of human preferences. It is a completely separate neural network, probably with a transformer architecture, but it is not a language model in the sense of generating diverse language; it is just a scoring model.
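The reward model described here is commonly trained with a pairwise ranking objective (a Bradley-Terry style loss): given a labeler-preferred and a rejected response, the model is penalized unless it scores the preferred one higher. A minimal sketch of that objective, with placeholder scores rather than real model outputs:

```python
import math

def pairwise_ranking_loss(score_preferred: float, score_rejected: float) -> float:
    """Bradley-Terry style loss: -log(sigmoid(r_preferred - r_rejected)).
    Low when the reward model already ranks the preferred response higher,
    large when the ranking is inverted."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the reward model separates the pair correctly:
well_ranked = pairwise_ranking_loss(2.0, -1.0)  # preferred scored higher
mis_ranked = pairwise_ranking_loss(-1.0, 2.0)   # preferred scored lower
```

The loss is near zero when the preferred response already scores well above the rejected one and grows rapidly when the ranking is inverted, which is what drives the reward model toward imitating human preferences.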


How to Communicate the Business Value of Master Data Management

In an ideal scenario, MDM is integral to a broader D&A strategy, highlighting how D&A supports the organization's strategic goals. The strategy aligns with these goals, prioritizes the business outcomes it will support, and details what is needed to achieve them. Therefore, leaders must first understand and prioritize the explicit business outcomes that MDM will support before creating an MDM strategy. In other words, "improving decision-making" is not good enough. "Increase customer service levels by 5% by end of December 2025" is the level of detail required. D&A leaders may recognize that master data is causing a problem or limiting an opportunity, which is where they would rely on an MDM. If this is the case, those D&A leaders should consider questions that help identify the problem, KPIs, and key stakeholders in these cases. These questions help identify potential business outcomes that MDM could support. Figure 1 provides a worksheet to build this initial picture and facilitate stakeholder discussions. The worksheet maps high-level goals onto a run-grow-transform framework, which could also be represented by three columns for the primary business value drivers: risk, revenue, and cost.


4 ways to get your business ready for the agentic AI revolution

Agents could be used eventually, but only once a partnership approach identifies the right opportunities. "Agents are becoming a big part of how generative AI and machine learning are used in business today. The way agents will be used in travel will be fascinating to watch. I think this technology will certainly be a part of the mix," he said. "The process for Hyatt will be to find the right technologies -- and we'll do that in close partnership with our business leaders and the technology teams that run the applications. We'll then provide the AI services to drive those transitions for the business." ... Keith Woolley, chief digital and information officer at the University of Bristol, is another digital leader who sees the potential benefits of agents. However, he said these advantages will become manifest over the longer term. "We are looking at agentic AI, but we're not implementing it yet," he said. "We sit as a management team and ask questions like, 'Should we do our admissions process using agentic AI? What would be the advantage?'" Woolley told ZDNET he could envision a situation in which AI and automation help assess and inform candidates worldwide about the status of their applications.


Cloud Giants Collaborate on New Kubernetes Resource Management Tool

The core innovation of kro is the introduction of the ResourceGraphDefinition custom resource. kro encapsulates a Kubernetes deployment and its dependencies into a single API, enabling custom end-user interfaces that expose only the parameters applicable to a non-platform engineer. This masking hides the complexity of API endpoints for Kubernetes and cloud providers that are not useful in a deployment context. ... kro works seamlessly with the existing cloud provider Kubernetes extensions that are available to manage cloud resources from Kubernetes. These are AWS Controllers for Kubernetes (ACK), Google's Config Connector (KCC), and Azure Service Operator (ASO). kro enables standardised, reusable service templates that promote consistency across different projects and environments, with the benefit of being entirely Kubernetes-native. It is still in the early stages of development. "As an early-stage project, kro is not yet ready for production use, but we still encourage you to test it out in your own Kubernetes development environments," the post states. ... Most significantly for the Crossplane community, Farcic questioned kro's purpose given its functional overlap with existing tools. "kro is serving more or less the same function as other tools created a while ago without any compelling improvement," he observed.
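As a rough illustration of the ResourceGraphDefinition idea, the sketch below follows the shape of kro's published examples; since the project is early-stage, field names and the `kro.run/v1alpha1` API version may change, and the resource names here are invented.

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app
spec:
  # The simplified API exposed to non-platform engineers:
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      name: string
      replicas: integer | default=2
  # The underlying resources kro manages on the user's behalf:
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: nginx:stable  # placeholder image
```

An end user would then create a `WebApp` object with just a `name` and optional `replicas`, and kro would reconcile the underlying Deployment on their behalf.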


Why a different approach to AIOps is needed for SD-WAN

AIOps tools enhance efficiency by seamlessly integrating with IT management tools, enabling proactive issue identification and streamlining IT management processes. But more than that, they optimize an organization’s network by improving the performance, efficiency, and dependability of its network resources to ensure optimal user experience. Regarding infrastructure, many organizations now rely on SD-WAN – software-defined wide area network – to manage and optimize data traffic across different types of networks efficiently. SD-WAN is an effective way to connect the organization and provide users with application access. It helps businesses improve their network performance, cut costs, and be more flexible by easily connecting to various network types. ... AIOps tools use the information extracted from SD-WAN systems to resolve issues autonomously, without human intervention. They also utilize predictive analytics to forecast future events or outcomes related to network operations. This makes the whole system run smoother and more reliably, while machine learning algorithms can use this historical data to make predictions and proactively improve the performance of critical applications.
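As a toy illustration of the predictive-analytics step, an AIOps pipeline might baseline a link's historical latency and flag samples that deviate sharply from it; the threshold and sample data below are invented for illustration.

```python
from statistics import mean, stdev

def is_anomalous(history: list, latest: float, z_threshold: float = 3.0) -> bool:
    """Flag a latency sample that deviates more than z_threshold standard
    deviations from the historical baseline for this SD-WAN link."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Illustrative baseline: a link with steady ~20 ms latency.
baseline = [19.0, 20.5, 20.0, 21.0, 19.5, 20.2, 20.8, 19.7]
```

A real system would feed such flags into automated remediation, for example steering traffic to an alternate SD-WAN path before users notice degradation.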


AI-Driven Threat Detection and the Need for Precision

AI algorithms, particularly those based on machine learning, excel at sifting through massive datasets and identifying patterns that would be nearly impossible for us mere humans to spot. An AI system might analyze network traffic patterns to identify unusual data flows that could indicate a data exfiltration attempt. Alternatively, it could scan email attachments for malicious code that traditional antivirus software might miss. Ultimately, AI feeds on context and content. The effectiveness of these systems in protecting your security posture is inextricably linked to the quality of the data they are trained on and the precision of their algorithms. ... Finally, AI-driven threat detection is unlikely to eliminate the need for human expertise. Skilled security professionals should still oversee AI systems and make informed decisions based on their own contextual expertise and experience. Human oversight validates the AI's findings, and threat detection algorithms may not be able to totally replace the critical thinking and intuition of human analysts. There may come a time when human professionals exist in AI's shadow. Yet, at this time, combining the power of AI with human knowledge and a commitment to continuous learning can form the building blocks for a sophisticated defense program.


From Ambiguity to Accountability: Analyzing Recommender System Audits under the DSA

In these early years of the DSA, a range of stakeholders – online platforms, civil society, the European Commission (EC), and national Digital Service Coordinators (DSCs) – must experiment, identify good practices, and share lessons learned. Such iteration is important to ensure an adaptive DSA regime that spurs innovation and responds to shifting technologies, risks, and mitigation strategies. The need for iteration and flexibility, however, should not mean the audits fail to deliver on their potential as vehicles for transparency and accountability. The first round of independent audits of recommender systems reveals clear areas for immediate improvement. Because the core definitions and methodologies were developed independently by platforms and auditors, significant inconsistencies exist in both risk assessment and audit processes. ... The DSA requires the main parameters of recommender systems to be spelled out in plain and intelligible language. What does this concretely mean in the recommender system context? Is it free of “acronyms or complex/technical terminology” (Pinterest), “straightforward vocabulary and easy to perceive, understand, or interpret” (Snap), or “written for a general audience with varying technical skill levels, inclusive of all users” (TikTok)? There's a subtle difference in expectations associated with each framing. These terms don’t need to be defined in a vacuum.


Cybersecurity in retail: What does the future hold?

In the coming year, cybersecurity experts predict attackers will increasingly target Generative AI models used by retailers, creating significant potential for operational disruptions and data breaches. These AI systems, now critical to retail operations, are vulnerable to sophisticated attacks that could compromise customer service efficiency and expose critical business vulnerabilities. The core risk lies in the sophisticated ways attackers can exploit AI’s complex decision-making processes, turning what was once a technological advantage into a potential security liability. Retailers must recognise that their AI systems are not just technological tools, but potential entry points for cybercriminal activities. ... The complexity and distribution of digital ecosystems make them prime targets during high-demand periods. For example, as we have seen in the past, cyberattacks that hit supply chains can cause major delays and financial loss. These incidents underscore the vulnerabilities in supply chains during peak times of the year. In 2025, expect a rise in supply chain attacks during the holiday season, targeting ecommerce platforms and logistics providers, which could disrupt product availability and shipping.

Daily Tech Digest - March 01, 2025


Quote for the day:

"Your life does not get better by chance, it gets better by change." -- Jim Rohn


Two AI developer strategies: Hire engineers or let AI do the work

Philip Walsh, director analyst in Gartner’s software engineering practice, said that from his vantage point he sees “two contrasting signals: some leaders, like Marc Benioff at Salesforce, suggest they may not need as many engineers due to AI’s impact, while others — Alibaba being a prime example — are actively scaling their technical teams and specifically hiring for AI-oriented roles.” In practice, he said, Gartner believes AI is far more likely to expand the need for software engineering talent. “AI adoption in software development is early and uneven,” he said, “and most large enterprises are still early in deploying AI for software development — especially beyond pilots or small-scale trials.” Walsh noted that, while there is a lot of interest in AI-based coding assistants (Gartner sees roughly 80% of large enterprises piloting or deploying them), actual active usage among developers is often much lower. “Many organizations report usage rates of 30% or less among those who have access to these tools,” he said, adding that the most common tools are not yet generating sufficient productivity gains to generate cost savings or headcount reductions. He said, “current solutions often require strong human supervision to avoid errors or endless loops. Even as these technologies mature over the next two to three years, human expertise will remain critical.”


The Great AI shift: The rise of ‘services as software’

Today, AI is pushing the envelope by turning services built to be used by humans as ‘self-serve’ utilities into software solutions that execute autonomously—a paradigm shift the venture capital world, in particular, has termed ‘Services as Software’ ... The shift is already conspicuous across industries. AI tools like Harvey AI are transforming the legal and compliance sector by analysing case law and generating legal briefs, essentially replacing human research assistants. The customer support ecosystem that once required large human teams in call centres now handles significant query volumes daily with AI chatbots and virtual agents. ... The AI-driven shift brings into question the traditional notion of availing an ‘expert service’. Software development, legal, and financial services are all coveted industries where workers are considered ‘experts’ delivering specialised services. The human role will undergo tremendous redefinition and will require calibrated re-skilling. ... Businesses won't simply replace SaaS with AI-powered tools; they will build the company's processes and systems around these new systems. Instead of hiring marketing agencies, companies will use AI to generate dynamic marketing and advertising campaigns. Businesses will rely on AI-driven quality assurance and control instead of outsourcing software testing, Quality Assurance, and Quality Control.


Resilience, Observability and Unintended Consequences of Automation

Instead of thinking of replacing work that humans might make or do, it's augmenting that work. And how do we make it easier for us to do these kinds of jobs? And that might be writing code, that might be deploying it, that might be tackling incidents when they come up, but understanding what the fancy, nerdy academic jargon for this is joint cognitive systems. But thinking instead of replacement or our functional allocation, another good nerdy academic term, we'll give you this piece, we'll give the humans those pieces. How do we have a joint system where that automation is really supporting the work of the humans in this complex system? And in particular, how do you allow them to troubleshoot that, to introspect that, to actually understand and to have even maybe the very nerdy versions of this research lay out possible ways of thinking about what can these computers do to help us? ... We could go monolith to microservices, we could go pick your digital transformation. How long did that take you? And how much care did you put into that? Maybe some of it was too long or too bureaucratic or what have you, but I would argue that we tend to YOLO internal developer technology way faster and way looser than we do with the things that actually make us money as that is the perception, the things that actually make us money.


The Modern CDN Means Complex Decisions for Developers

“Developers should not have to be experts on how to scale an application; that should just be automatic. But equally, they should not have to be experts on where to serve an application to stay compliant with all these different patchworks of requirements; that should be more or less automatic,” Engates argues. “You should be able to flip a few switches and say ‘I need to be XYZ compliant in these countries,’ and the policy should then flow across that network and orchestrate where traffic is encrypted and where it’s served and where it’s delivered and what constraints are around it.” ... Along with the physical constraint of the speed of light and the rise of data protection and compliance regimes, Alexander also highlights the challenge of costs as something developers want modern CDNs to help them with. “Egress fees between clouds are one of the artificial barriers put in place,” he claims. That can be 10%, 20% or even 30% of overall cloud spend. “People can’t build the application that they want, they can’t optimize, because of some of these taxes that are added on moving data around.” Update patterns aren’t always straightforward either. Take a wiki like Fandom, where Fastly founder and CTO Artur Bergman was previously CTO. 


A Comprehensive Look at OSINT

Cybersecurity professionals within corporations rely on public data to identify emerging phishing campaigns, data breaches, or malicious activity targeting their brand. Investigative journalists and academic researchers turn to OSINT for fact-checking, identifying new leads, and gathering reliable support for their reporting or studies. ... Avoiding OSINT or downplaying its value can leave organizations unaware of threats and opportunities that are readily discoverable to others. By failing to gather open-source data, businesses and government agencies could remain in the dark about malicious activities, negative brand impersonations, or stolen credentials circulating on forums and dark web marketplaces. In the event of a security breach or public scandal, stakeholders may view the lack of proper OSINT measures as a failure of due diligence, eroding trust and tarnishing the organization’s image. ... The primary driver behind OSINT’s growth is the vast reservoir of information generated daily by digital platforms, databases, and news outlets. This public data can be invaluable for enhancing security, improving transparency, and making more informed decisions. Security professionals, for instance, can preemptively identify threats and vulnerabilities posted openly by malicious actors. 


OT/ICS cyber threats escalate as geopolitical conflicts intensify

A persistent lack of visibility into OT environments continues to obscure the full scale of these attacks. These insights come from Dragos’ 2025 OT/ICS Cybersecurity Report, its eighth annual Year in Review, which analyzes cyber threats to industrial organizations. ... VOLTZITE is arguably the most crucial threat group to track in critical infrastructure. Due to its dedicated focus on OT data, the group is a capable threat to ICS asset owners and operators. This group shares extensive technical overlaps with the Volt Typhoon threat group tracked by other organizations. It utilizes the same techniques as in previous years, setting up complex chains of network infrastructure to target, compromise, and steal OT-relevant data—GIS data, OT network diagrams, OT operating instructions, etc.—from victim ICS organizations. ... Increasing collaboration between hacktivist groups and state-backed cyber actors has led to a hybrid threat model where hacktivists amplify state objectives, either directly or through shared infrastructure and intelligence. State actors increasingly look to exploit hacktivist groups as proxies to conduct deniable cyber operations, allowing for more aggressive attacks with reduced attribution risks.


Leveraging AR & VR for Remote Maintenance in Industrial IoT

AR tools like Microsoft’s HoloLens 2 are enabling workers on-site to receive real-time guidance from experts located anywhere in the world. Using AR glasses or headsets, on-site personnel can share their view with remote technicians, who can then overlay instructions, schematics, or step-by-step troubleshooting guidance directly onto the worker’s field of vision. This allows maintenance teams to resolve issues faster and more accurately, without the need for travel, reducing downtime and operational costs. ... By using VR simulations, workers can familiarize themselves with equipment, troubleshoot issues, and practice responses to emergencies, all in a virtual setting. This hands-on experience builds confidence and competence, ultimately improving safety and efficiency when dealing with real equipment. As IIoT systems become more sophisticated, VR training can play a key role in ensuring that the workforce is well-prepared to handle advanced technologies without risking costly mistakes or accidents. ... In the future, we can expect even more seamless integration between AR/VR systems and IIoT platforms, where real-time data from sensors and machines is directly fed into the AR/VR environment, providing a comprehensive view of machine health, performance and issues. 


Just as DNA defines an organism’s identity, business continuity must be deeply embedded in every aspect of your organization. It is more than just a collection of emergency plans or procedures; it embodies a philosophy that ensures not only survival during disruptions, but long-term sustainability as well. ... An organization without continuity is like a tree without roots—fragile and vulnerable to the slightest shock. Continuity serves as an anchor, allowing organizations to navigate crises while staying aligned with their strategic goals. Any organization that aims to grow and thrive must take a proactive approach to continuity. Continuity strategies and initiatives can be seen as the roots of a tree, natural extensions that provide stability and sustain growth. ... It is essential that both leaders and team members possess the experience and skills needed to execute their work effectively. ... Thoroughly assess your key vulnerabilities. This involves two primary methods: a BIA, which analyzes the impacts of a disturbance over time to determine recovery priorities, resource requirements, and appropriate responses; and risk analysis, which identifies risks tied to prioritized activities and critical resources. Together, these two approaches offer a comprehensive understanding of your organization’s pain points.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

This phenomenon, a “compound physical-cyber threat,” where a cyberattack is intentionally launched around a heatwave or hurricane, for example, would have outsized and potentially devastating effects on businesses, communities, and entire economies, according to a 2024 study led by researchers at Johns Hopkins University. “Cyber-attacks are more disruptive when infrastructure components face stresses beyond normal operating conditions,” the study asserted. Businesses and their IT and risk management people would be wise to take notice, because both cyberattacks and weather-related disasters are increasing in frequency and in the cost they exact from their victims. ... Take what you learn from the risk assessment to develop a detailed plan that outlines the steps your organization intends to take to preserve cybersecurity, business continuity, and network connectivity during a crisis. Whether you’re a B2B or B2C organization, your customers, employees, suppliers and other stakeholders expect your business to be “always on,” 24/7/365. How will you keep the lights on, the lines of communications open, and your network insulated from cyberattack during a disaster? 


‘It Won’t Happen to Us:’ The Dangerous Mindset Minimizing Crisis Preparation

The main mistakes in crisis situations include companies staying silent and not releasing official statements from management, creating a vacuum of information and promoting the spread of rumors. ... First and foremost, companies should not underestimate the importance of communication, especially when things are not going well. During a crisis, many companies prefer to sit quietly and wait without informing or sharing anything about their measures and actions in connection with the crisis. This is the wrong approach. Silence gives competitors enough space to thrive and gain a market advantage. Meanwhile, journalists won’t stop working on hot stories. When you don’t share anything meaningful with them or your audience, they may collect and publish rumors and misinformation about your company. And the lack of comments creates the ground for negative interpretations. Therefore, transparency and efficiency are key principles of anti-crisis communication. If you are clear in your messages and give quick responses, it allows the company to control the information agenda. The surefire way to gain and maintain trust is to promptly and regularly inform your company’s investors during a crisis through your own channels. 

Daily Tech Digest - February 28, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel


Microservice Integration Testing a Pain? Try Shadow Testing

Shadow testing is especially useful for microservices with frequent deployments, helping services evolve without breaking dependencies. It validates schema and API changes early, reducing risk before consumer impact. It also assesses performance under real conditions and ensures proper compatibility with third-party services. ... Shadow testing doesn’t replace traditional testing but rather complements it by reducing reliance on fragile integration tests. While unit tests remain essential for validating logic and end-to-end tests catch high-level failures, shadow testing fills the gap of real-world validation without disrupting users. Shadow testing follows a common pattern regardless of environment and has been implemented by tools like Diffy from Twitter/X, which introduced automated-response comparisons to detect discrepancies effectively. ... The environment where shadow testing is performed may vary, providing different benefits. More realistic environments are obviously better:
Staging shadow testing — Easier to set up, avoids compliance and data isolation issues, and can use synthetic or anonymized production traffic to validate changes safely.
Production shadow testing — Provides the most accurate validation using live traffic but requires safeguards for data handling, compliance and test workload isolation.
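The core of the pattern can be sketched in a few lines: mirror each request to both the serving version and the candidate, return only the primary's response to the user, and record any divergence out-of-band (in the spirit of tools like Diffy). The handler names and traffic below are hypothetical, not from any real service.

```python
import json

def shadow_test(requests, primary, shadow):
    """Send each request to both implementations and collect mismatches.

    Only the primary's response is ever user-facing; the shadow's
    response is compared out-of-band, so a shadow failure is recorded
    rather than surfaced."""
    mismatches = []
    for req in requests:
        primary_resp = primary(req)          # response actually served
        try:
            shadow_resp = shadow(req)        # candidate version, never user-facing
        except Exception as exc:
            mismatches.append({"request": req, "error": repr(exc)})
            continue
        if primary_resp != shadow_resp:
            mismatches.append({
                "request": req,
                "primary": primary_resp,
                "shadow": shadow_resp,
            })
    return mismatches

# Hypothetical old and new versions of a pricing endpoint.
def pricing_v1(req):
    return {"total": req["qty"] * 10}

def pricing_v2(req):                         # candidate with a pricing change
    return {"total": round(req["qty"] * 9.99)}

traffic = [{"qty": 1}, {"qty": 3}, {"qty": 100}]
diffs = shadow_test(traffic, pricing_v1, pricing_v2)
for d in diffs:
    print(json.dumps(d))    # only the qty=100 request diverges
```

In staging the `requests` list would come from synthetic or anonymized traffic; in production it would be a sampled mirror of live requests, with the mismatch log feeding a review dashboard rather than the caller.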


The rising threat of shadow AI

Creating an Office of Responsible AI can play a vital role in a governance model. This office should include representatives from IT, security, legal, compliance, and human resources to ensure that all facets of the organization have input in decision-making regarding AI tools. This collaborative approach can help mitigate the risks associated with shadow AI applications. You want to ensure that employees have secure and sanctioned tools. Don’t forbid AI—teach people how to use it safely. Indeed, the “ban all tools” approach never works; it lowers morale, causes turnover, and may even create legal or HR issues. The call to action is clear: Cloud security administrators must proactively address the shadow AI challenge. This involves auditing current AI usage within the organization and continuously monitoring network traffic and data flows for any signs of unauthorized tool deployment. Yes, we’re creating AI cops. However, don’t think they get to run around and point fingers at people or let your cloud providers point fingers at you. This is one of those problems that can only be solved with a proactive education program aimed at making employees more productive and not afraid of getting fired. Shadow AI is yet another buzzword to track, but it’s also undeniably a growing problem for cloud computing security administrators.
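The "audit usage, monitor traffic" step can start very simply: compare outbound destinations against a list of known AI endpoints and an internal allowlist. The log format, domain list, and allowlist below are purely illustrative assumptions, not drawn from any real product.

```python
# Flag proxy-log entries that hit AI endpoints the organization has not
# sanctioned. Domains and log format here are hypothetical examples.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
SANCTIONED = {"api.openai.com"}   # tools the org has approved

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for unsanctioned AI traffic."""
    findings = []
    for line in log_lines:
        user, domain = line.split()[:2]   # assumes "<user> <domain>" per line
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            findings.append((user, domain))
    return findings

logs = [
    "alice api.openai.com",
    "bob api.anthropic.com",
    "carol intranet.example.com",
]
print(flag_shadow_ai(logs))   # only bob's traffic is unsanctioned
```

Consistent with the article's point, the output of a check like this is better used to route people toward sanctioned tools and training than to "point fingers."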


Can AI live up to its promise?

The debate about truly transformative AI may not be about whether it can think or be conscious like a human, but rather about its ability to perform complex tasks across different domains autonomously and effectively. It is important to recognize that the value and usefulness of machines does not depend on their ability to exactly replicate human thought and cognitive abilities, but rather on their ability to achieve similar or better results through different methods. Although the human brain has inspired much of the development of contemporary AI, it need not be the definitive model for the design of superior AI. Perhaps by freeing the development of AI from strict neural emulation, researchers can explore novel architectures and approaches that optimize different objectives, constraints, and capabilities, potentially overcoming the limitations of human cognition in certain contexts. ... Some human factors that could be stumbling blocks on the road to transformative AI include: the information overload we receive, the possible misalignment with our human values, the possible negative perception we may be acquiring, the view of AI as our competitor, the excessive dependence on human experience, the possible perception of futility of ethics in AI, the loss of trust, overregulation, diluted efforts in research and application, the idea of human obsolescence, or the possibility of an “AI-cracy”, for example.


The end of net neutrality: A wake-up call for a decentralized internet

We live in a time when the true ideals of a free and open internet are under attack. The most recent repeal of net neutrality regulations is taking us toward a more centralized, controlled version of the internet. In this scenario, a decentralized, permissionless internet offers a powerful alternative to today’s reality. Decentralized systems can address the threat of censorship by distributing content across a network of nodes, ensuring that no single entity can block or suppress information. Decentralized physical infrastructure networks (DePIN) demonstrate how decentralized storage can keep data accessible even when network parts are disrupted or taken offline. This censorship resistance is crucial in regions where governments or corporations try to limit free expression online. Decentralization can also cultivate economic democracy by eliminating intermediaries like ISPs and related fees. Blockchain-based platforms allow smaller, newer players to compete with incumbent services and content companies on a level playing field. The Helium network, for example, uses a decentralized model to challenge traditional telecom monopolies with community-driven wireless infrastructure. In a decentralized system, developers don’t need approval from ISPs to launch new services.


Steering by insights: A C-Suite guide to make data work for everyone

With massive volumes of data to make sense of, having reliable and scalable modern data architectures that can organise and store data in a structured, secure, and governed manner while ensuring data reliability and integrity is critical. This is especially true in the hybrid, multi-cloud environment in which companies operate today. Furthermore, as we face a new “AI summer”, executives are experiencing increased pressure to respond to the tsunami of hype around AI and its promise to enhance efficiency and competitive differentiation. This means companies will need to rely on high-quality, verifiable data to implement AI-powered technologies such as generative AI and large language models (LLMs) at an enterprise scale. ... Beyond infrastructure, companies in India need to look at ways to create a culture of data. In today’s digital-first organisations, many businesses require real-time analytics to operate efficiently. To enable this, organisations need to create data platforms that are easy to use and equipped with the latest tools and controls so that employees at every level can get their hands on the right data to unlock productivity, saving them valuable time for other strategic priorities. Building a data culture also needs to come from the top; it is imperative to ensure that data is valued and used strategically and consistently to drive decision-making.


The Hidden Cost of Compliance: When Regulations Weaken Security

What might be a bit surprising, however, is one particular pain point that customers in this vertical bring up repeatedly. What is this mysterious pain point? I’m not sure if it has an official name or not, but many people I meet with share with me that they are spending so much time responding to regulatory findings that they hardly have time for anything else. This is troubling to say the least. It may be an uncomfortable discussion to have, but I’d argue that it is long past time we as a security community had this discussion. ... The threats enterprises face change and evolve quickly – even rapidly I might say. Regulations often have trouble keeping up with the pace of that change. This means that enterprises are often forced to solve last year’s or even last decade’s problems, rather than the problems that might actually pose a far greater threat to the enterprise. In my opinion, regulatory agencies need to move more quickly to keep pace with the changing threat landscape. ... Regulations are often produced by large, bureaucratic bodies that do not move particularly quickly. This means that if some part of the regulation is ineffective, overly burdensome, impractical, or otherwise needs adjusting, it may take some time before this change happens. In the interim, enterprises have no choice but to comply with something that the regulatory body has already acknowledged needs adjusting.


Why the future of privileged access must include IoT – securing the unseen

The application of PAM to IoT devices brings unique complexities. The vast variety of IoT devices, many of which have been operational for years, often lack built-in security, user interfaces, or associated users. Unlike traditional identity management, which revolves around human credentials, IoT devices rely on keys and certificates, with each device undergoing a complex identity lifecycle over its operational lifespan. Managing these identities across thousands of devices is a resource-intensive task, exacerbated by constrained IT budgets and staff shortages. ... Implementing a PAM solution for IoT involves several steps. Before anything else, organisations need to achieve visibility of their network. Many currently lack this crucial insight, making it difficult to identify vulnerabilities or manage device access effectively. Once this visibility is achieved, organisations must then identify and secure high-risk privileged accounts to prevent them from becoming entry points for attackers. Automated credential management is essential to replace manual password processes, ensuring consistency and reducing oversight. Policies must be enforced to authorise access based on pre-defined rules, guaranteeing secure connections from the outset. Default credentials – a common exploit for attackers – should be updated regularly, and automation can handle this efficiently. 
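The "automated credential management" step above can be sketched as a rotation routine that replaces default or stale device secrets with fresh random ones. This is a toy in-memory model under stated assumptions: a real PAM deployment would back this with an audited vault and push the new credential to the device over a secured channel, and the device ID is hypothetical.

```python
import secrets

class DeviceCredentialStore:
    """Toy in-memory credential store for IoT devices.

    A real PAM system would use a hardened vault with audited access
    and would coordinate the rotation with the device itself."""
    def __init__(self):
        self._creds = {}

    def rotate(self, device_id, nbytes=32):
        # Replace any default or stale credential with a fresh random secret,
        # eliminating the manual password processes the article describes.
        token = secrets.token_urlsafe(nbytes)
        self._creds[device_id] = token
        return token

    def get(self, device_id):
        return self._creds.get(device_id)

store = DeviceCredentialStore()
old = store.rotate("sensor-042")    # initial provisioning replaces the default
new = store.rotate("sensor-042")    # scheduled rotation
print(old != new)                   # True: the stale secret is gone
```

In practice the `rotate` calls would run on a schedule driven by policy (the "pre-defined rules" the article mentions), so default credentials never survive past first contact.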


Understanding the AI Act and its compliance challenges

There is a clear tension between the transparency obligations imposed on providers of certain AI systems under the AI Act and some of their rights and business interests, such as the protection of trade secrets and intellectual property. The EU legislator has expressly recognized this tension, as multiple provisions of the AI Act state that transparency obligations are without prejudice to intellectual property rights. For example, Article 53 of the AI Act, which requires providers of general-purpose AI models to provide certain information to organizations that wish to integrate the model downstream, explicitly calls out the need to observe and protect intellectual property rights and confidential business information or trade secrets. In practice, a good faith effort from all parties will be required to find the appropriate balance between the need for transparency to ensure safe, reliable and trustworthy AI, while protecting the interests of providers that invest significant resources in AI development. ... The AI Act imposes a number of obligations on AI system vendors that will help in-house lawyers in carrying out this diligence. Under Article 13 of the AI Act, vendors of high-risk AI systems are, for example, required to provide sufficient information to (business) deployers to allow them to understand the high-risk AI system’s operation and interpret its output.


Why fast-learning robots are wearing Meta glasses

The technology acts as a sophisticated translator between human and robotic movement. Using mathematical techniques called Gaussian normalization, the system maps the rotations of a human wrist to the precise joint angles of a robot arm, ensuring natural motions get converted into mechanical actions without dangerous exaggerations. This movement translation works alongside a shared visual understanding — both the human demonstrator’s smartglasses and the robot’s cameras feed into the same artificial intelligence program, creating common ground for interpreting objects and environments. ... The EgoMimic researchers didn’t invent the concept of using consumer electronics to train robots. One pioneer in the field, a former healthcare-robot researcher named Dr. Sarah Zhang, has demonstrated 40% improvements in the speed of training healthcare robots using smartphones and digital cameras; they enable nurses to teach robots through gestures, voice commands, and real-time demonstrations instead of complicated programming. This improved robot training is made possible by AI that can learn from fewer examples. A nurse might show a robot how to deliver medications twice, and the robot generalizes the task to handle variations like avoiding obstacles or adjusting schedules. 
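The retargeting idea can be illustrated with a small sketch. This assumes "Gaussian normalization" here means z-scoring the human wrist angle against the observed human motion distribution, rescaling into the robot joint's distribution, and clamping to the joint's hard limits; the calibration numbers are invented for illustration and are not from the EgoMimic work.

```python
def retarget(human_angle, h_mean, h_std, r_mean, r_std, r_min, r_max):
    """Map a human wrist angle (degrees) onto a robot joint angle.

    Assumed interpretation of 'Gaussian normalization': z-score the
    human angle against the human motion distribution, rescale into
    the robot joint's distribution, then clamp to hard joint limits so
    exaggerated human motions can't command a dangerous pose."""
    z = (human_angle - h_mean) / h_std
    robot_angle = r_mean + z * r_std
    return max(r_min, min(r_max, robot_angle))

# Hypothetical calibration: human wrist ~N(0, 40 deg), robot joint
# ~N(0, 25 deg), hard limits at +/-60 deg.
a = retarget(80.0, 0.0, 40.0, 0.0, 25.0, -60.0, 60.0)
b = retarget(200.0, 0.0, 40.0, 0.0, 25.0, -60.0, 60.0)
print(a)   # 50.0 : natural motion, scaled down
print(b)   # 60.0 : exaggerated motion, clamped at the joint limit
```

The clamp is what prevents the "dangerous exaggerations" the article mentions: a wild human gesture maps at worst to the joint's limit, never beyond it.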


Targeted by Ransomware, Middle East Banks Shore Up Security

The financial services industry in UAE — and the Middle East at large — sees cyber wargaming as an important way to identify weaknesses and develop defenses to the latest threats, Jamal Saleh, director general of the UAE Banks Federation, said in a statement announcing the completion of the event. "The rapid adoption and deployment of advanced technologies in the banking and financial sector have increased risks related to transaction security and digital infrastructure," he said in the statement, adding that the sector is increasingly aware "of the importance of such initiatives to enhance cybersecurity systems and ensure a secure and advanced environment for customers, especially with the rapid developments in modern technology and the rise of cybersecurity threats using advanced artificial intelligence (AI) techniques." ... Ransomware remains a major threat to the financial industry, but attackers have shifted from distributed denial-of-service (DDoS) attacks to phishing, data breaches, and identity-focused attacks, according to Shilpi Handa, associate research director for the Middle East, Turkey, and Africa at business intelligence firm IDC. "We see trends such as increased investment in identity and data security, the adoption of integrated security platforms, and a focus on operational technology security in the finance sector," she says. 

Daily Tech Digest - February 27, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



Breach Notification Service Tackles Infostealing Malware

Infostealers can amass massive quantities of credentials. To handle this glut, many cybercriminals create parsers to quickly ingest usernames and passwords for analysis, said Milivoj Rajić, head of threat intelligence at cybersecurity firm DynaRisk. The leaked internal communications of ransomware group Black Basta demonstrated this tactic, he said. Using a shared spreadsheet, the group identified organizations with emails present in infostealer logs, tested which access credentials worked, checked the organization's annual revenue and if its networks were protected by MFA. Using this information helped the ransomware group prioritize its targeting. Another measure of just how much data gets collected by infostealers: the Alien Txtbase records include 244 million passwords not already recorded as breached by Pwned Passwords. Hunt launched that free service in 2017, which anyone can query for free and anonymously, to help users never pick a password that's appeared in a known data breach, shortly after the U.S. National Institute for Standards and Technology began recommending that practice. Not all of the information contained in stealer logs being sold by criminals is necessarily legit. Some of it might be recycled from previous leaks or data dumps. Even so, Hunt said he was able to verify a random sample of the Alien Txtbase corpus with a "handful" of HIBP users he approached.
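The anonymous querying Hunt describes works via Pwned Passwords' k-anonymity range API: the client hashes the password with SHA-1, sends only the first five hex characters, and matches the returned suffixes locally, so the full hash never leaves the machine. The sketch below shows the client-side split; the network call is left as a comment to keep the example offline.

```python
import hashlib

def hibp_range_query_parts(password):
    """Split a password's SHA-1 hash into the 5-character prefix sent
    to the Pwned Passwords range API and the 35-character suffix that
    is matched locally, preserving k-anonymity."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password")
print(prefix)   # 5BAA6
# A real client would then fetch:
#   https://api.pwnedpasswords.com/range/5BAA6
# and search the returned "SUFFIX:COUNT" lines for the local suffix.
```

Because the server only ever sees the five-character prefix, it cannot tell which of the hundreds of matching hashes the client was actually checking.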


The critical role of strategic workforce planning in the age of AI

While some companies have successfully deployed strategic workforce planning in the past to reshape their workforces to meet future market requirements, there are also cautionary tales of organizations that have struggled with the transition to new technologies. For instance, the rapid innovation of smartphones left leading players such as Nokia behind. Periods of rapid technological change highlight the importance of predicting and responding to challenges with a dynamic talent planning model. Gen AI is not just another technological advancement affecting specific tasks; it represents a rewiring of how organizations operate and generate value. This transformation goes beyond automation, innovation, and productivity improvements to fundamentally alter the ratio of humans to technology in organizations. By having SWP in place, organizations can react more quickly and intentionally to these changes, monitoring leading and lagging indicators to stay ahead of the curve. This approach allows for identifying and developing new capabilities, ensuring that the workforce is prepared for the evolving demands these changes will bring. SWP gives a fact base to all talent decisions so that trade-offs can be explicitly discussed and strategic decisions can be made holistically—and with enterprise value top of mind. 


Cybersecurity in fintech: Protecting user data and preventing fraud

Fintech companies operate at the intersection of finance and technology, making them particularly vulnerable to cyber threats. These platforms process vast amounts of personal and financial data—from bank account details and credit card numbers to loan records and transaction histories. A single security breach can have devastating consequences, leading to financial losses, regulatory penalties, and reputational damage. Beyond individual risks, fintech platforms are interconnected within a larger financial ecosystem. A vulnerability in one system can cascade across multiple institutions, disrupting transactions, exposing sensitive data, and eroding trust. Given this landscape, cybersecurity in fintech is not just about preventing attacks—it’s about ensuring the integrity of the entire digital financial infrastructure. ... Governments and regulatory bodies worldwide recognise the critical role of cybersecurity in fintech. Frameworks like the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the U.S. set stringent standards for data privacy and security. Compliance is not just a legal necessity—it’s an opportunity for fintech companies to build trust with users. By adhering to global security best practices, fintech firms can differentiate themselves in an increasingly competitive market while ensuring customer data remains protected.


The Smart Entrepreneur's Guide to Thriving in Uncertain Times

If there's one certainty in business, it's change. The most successful entrepreneurs aren't just those who have great ideas — they are the ones who know how to adapt. Whether it's economic downturns, shifts in consumer behavior or emerging competition, the ability to navigate uncertainty is what separates sustainable businesses from those that struggle to survive. ... Instead of long-term strategies that assume stability, use quick experiments to validate new ideas and adjust quickly. When we launched new membership models at our office, we tested different pricing structures and adjusted based on user feedback within weeks rather than months. ... Digital engagement is changing. Entrepreneurs who optimize their messaging based on social media trends and consumer preferences gain a competitive edge. For example, when we noticed an increase in demand for remote work solutions, we adjusted our marketing efforts to highlight our virtual office plans. ... strong company culture that embraces change enables faster adaptation during challenging times. Jim Collins, in Good to Great, emphasizes that having the right people in the right seats is fundamental for long-term success. At Coworking Smart, we focused on hiring individuals who thrived in dynamic environments rather than just filling positions based on traditional job descriptions.


Risk Management for the IT Supply Chain

Who are your mission critical vendors? Do they present significant risks (for example, risk of a merger, or going out of business)? Where are your IT supply chain “weak links” (such as vendors whose products and services repeatedly fail). Are they impairing your ability to provide top-grade IT to the business? What countries do you operate in? Are there technology and support issues that could emerge in those locations? Do you annually send questionnaires to vendors that query them so you can ascertain that they are strong, reliable and trustworthy suppliers? Do you request your auditors periodically review IT supply chain vendors for resiliency, compliance and security? ... Most enterprises include security and compliance checkpoints on their initial dealings with vendors, but few check back with the vendors on a regular basis after the contracts are signed. Security and governance guidelines change from year to year. Have your IT vendors kept up? When was the last time you requested their latest security and governance audit reports from them? Verifying that vendors stay in step with your company’s security and governance requirements should be done annually. ... Although companies include their production supply chains in their corporate risk management plans, they don’t consistently consider the IT supply chain and its risks.


IT infrastructure: Inventory before AIOps

Even if the advantages are clear, the right story is also needed internally to initiate an introduction. Benedikt Ernst from the IBM spin-off Kyndryl sees a certain “shock potential,” especially in the financial dimension, which is ideally anticipated in advance: “The argumentation of costs is crucial because the introduction of AIOps is, of course, an investment in the first instance. Organizations need to ask themselves: How quickly is a problem detected and resolved today? And how does an accelerated resolution affect operating costs and downtime?” In addition, there is another aspect that he believes is too often overlooked: “Ultimately, the introduction of AIOps also reveals potential on the employee side. The fewer manual interventions in the infrastructure are necessary, the more employees can focus on things that really require their attention. For this reason, I see the use of open integration platforms as helpful in making automation and AIOps usable across different platforms.” Storm Reply’s Henckel even sees AIOps as a tool for greater harmony: “The introduction of AIOps also means an end to finger-pointing between departments. With all the different sources of error — database, server, operating system — it used to be difficult to pinpoint the cause of the error. AIOps provides detailed analysis across all areas and brings more harmony to infrastructure evaluation.”


Navigating Supply Chain Risk in AI Chips

The fragmented nature of semiconductor production poses significant challenges for supplier risk management. Beyond the risk posed by delays in delivery or production, which can disrupt operations, such a globalized and complex supply chain poses challenges from a regulatory angle. Chipmakers must take full responsibility for ensuring compliance at every level by thoroughly monitoring and vetting every entity in the supply chain for risks such as forced labor, sanctions violations, bribery, and corruption. ... Many companies are diversifying their supplier base, increasing local procurement efforts, and using predictive modeling to better anticipate demand to address the risk of disruption triggered by delays in delivery or operations. By leveraging advanced data analytics and securing multiple supply routes, businesses can increase resilience to external shocks and mitigate the risk of supply chain delays. Additionally, firms can incorporate a “value at risk” model into supply chain and operational risk management frameworks. This approach quantifies the financial impact of potential supply chain disruptions, helping chipmakers prioritize the most critical risk areas. ... The AI chip supply chain is a cornerstone of modern innovation, but due to its global and interdependent nature, it is inherently complex.
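The “value at risk” approach mentioned above can be sketched as a toy Monte Carlo model: each supplier gets a disruption probability and a loss range, and the VaR is a high quantile of the simulated total loss. Both the model and any numbers here are illustrative assumptions, not from the article:

```python
import random


def supply_chain_var(suppliers, n_trials=50_000, quantile=0.95, seed=7):
    """Estimate value at risk: the loss level that simulated total
    disruption losses stay below in `quantile` of trials.

    Each supplier is a tuple (p_disruption, loss_low, loss_high)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        total = 0.0
        for p, lo, hi in suppliers:
            if rng.random() < p:          # did this supplier get disrupted?
                total += rng.uniform(lo, hi)
        losses.append(total)
    losses.sort()
    return losses[int(quantile * n_trials) - 1]
```

A chipmaker could then rank suppliers by their marginal contribution to the VaR figure to decide where dual-sourcing or local procurement pays off first.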


Charting the AI-fuelled evolution of embedded analytics

The idea behind embedded analytics is to remove much of the friction around data insights. In theory, line-of-business users have long been able to view relevant insights by importing data into the self-service business intelligence (SSBI) tool of their choice. In practice, this disrupts their workflow and interrupts their chain of thought, so a lot of people choose not to make that switch. They’re even less likely to do so if they have to manually export and migrate the data to a different tool. That means they’re missing out on data insights just when those could be the most valuable for their decisions. Embedded analytics delivers all the charts and insights alongside whatever the user is working on at the time – be it an accounting app, a CRM, a social media management platform or whatever else – which is far more useful. “It’s a lot more intuitive, a lot more functional if it’s in the same place,” says Perez. “Also, generally speaking, the people who use these types of business apps are non-technical, and so the more complicated you make it for them to get to the analysis, the less of it they’ll do.” ... So far, so impressive. But Perez emphasises that there are a number of barriers to embedded analytics utopia. Businesses need to bear these in mind as they seek to develop their own solutions or find providers who can deliver them.


Open source software vulnerabilities found in 86% of codebases

jQuery, a JavaScript library, was the most frequent source of vulnerabilities, as eight of the top 10 high-risk vulnerabilities were found there. Among scanned applications, 43% contained some version of jQuery — oftentimes an outdated one. An XSS vulnerability affecting outdated versions of jQuery, tracked as CVE-2020-11023, was the most frequently found high-risk vulnerability. McGuire remarks, “There’s also an interesting shift towards web-based and multi-tenant (SaaS) applications, meaning more high-severity vulnerabilities (81% of audited codebases). We also observed an overwhelming majority of high-severity vulnerabilities belonging to jQuery. ... McGuire explains, “Embedded software providers are going to be increasingly focused on the quality, safety and reliability of the software they build. Looking at this year’s data, 79% of the codebases were using components whose latest versions had no development activity in the last two years. This means that these dependencies could become less reliable, so industries like aerospace and medical devices should look to identify these in their own codebases and start moving on from them.” ... “Enterprise regulated organizations are being forced to align with numerous requirements, including providing SBOMs with their applications. If an SBOM isn’t accurate, it’s useless,” McGuire states.


A 5-step blueprint for cyber resilience

Many claim to practice developer security operations, or DevSecOps, by testing software for security flaws at every stage. At least that's the theory. In reality, developers are under constant pressure to get software into production, and DevSecOps can be an impediment to meeting deadlines. "You hear all these people saying, 'Yes, we're doing DevSecOps,' but the reality is, a lot of people aren't," says Lanowitz. "If you're really focused on being secure by design, you're going to want to do things right from the beginning, meaning you're going to want to have your network architecture correct, your software architecture correct." ... "We have to be able to speak the language of the business," says Lanowitz. "Break down the silos that exist in the organization, get the cyber team and the business team talking, [and] align cybersecurity initiatives with overarching business initiatives." Again, executive leadership needs to point the way, but it often needs convincing. Compliance is a great place to start, because most industries have rules, laws, or insurance providers that mandate a basic level of cybersecurity. ... The more eyes you have on a cybersecurity problem, the more quickly a solution can be found. Because of this, even large companies rely on external managed service providers (MSPs), managed security service providers (MSSPs), managed detection and response (MDR) providers, consultants and advisors.