Daily Tech Digest - April 28, 2025


Quote for the day:

"If a window of opportunity appears, don't pull down the shade." -- Tom Peters



Researchers Revolutionize Fraud Detection with Machine Learning

Machine learning plays a critical role in fraud detection by identifying patterns and anomalies in real-time. It analyzes large datasets to spot normal behavior and flag significant deviations, such as unusual transactions or account access. However, fraud detection is challenging because fraud cases are much rarer than normal ones, and the data is often messy or unlabeled. ... “The use of machine learning in fraud detection brings many advantages,” said Taghi Khoshgoftaar, Ph.D., senior author and Motorola Professor in the FAU Department of Electrical Engineering and Computer Science. “Machine learning algorithms can label data much faster than human annotation, significantly improving efficiency. Our method represents a major advancement in fraud detection, especially in highly imbalanced datasets. It reduces the workload by minimizing cases that require further inspection, which is crucial in sectors like Medicare and credit card fraud, where fast data processing is vital to prevent financial losses and enhance operational efficiency.” ... The method combines two strategies: an ensemble of three unsupervised learning techniques using the scikit-learn library and a percentile-gradient approach. The goal is to minimize false positives by focusing on the most confidently identified fraud cases. 
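The article names scikit-learn but not the specific detectors, so the following is only a sketch of the shape of the approach: score each transaction with several independent unsupervised scorers (simple stdlib stand-ins here, where a real pipeline would use scikit-learn estimators such as IsolationForest), average the normalized scores, and flag only the top percentile so reviewers see just the most confident cases.

```python
import statistics

def zscore_scores(xs):
    """Anomaly score: absolute z-score (stand-in for a real detector)."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs) or 1.0
    return [abs(x - mu) / sd for x in xs]

def median_distance_scores(xs):
    """Anomaly score: absolute distance from the median."""
    med = statistics.median(xs)
    return [abs(x - med) for x in xs]

def iqr_scores(xs):
    """Anomaly score: distance outside the interquartile range."""
    quartiles = statistics.quantiles(xs, n=4)
    q1, q3 = quartiles[0], quartiles[2]
    iqr = (q3 - q1) or 1.0
    return [max(q1 - x, x - q3, 0.0) / iqr for x in xs]

def rank_normalize(scores):
    """Map scores to [0, 1] ranks so detectors on different scales can be averaged."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    for r, i in enumerate(order):
        ranks[i] = r / (len(scores) - 1)
    return ranks

def flag_top_percentile(xs, percentile=99.0):
    """Ensemble the three scorers, then flag only the top slice,
    keeping reviewer workload low by surfacing the most confident cases."""
    ensembles = [rank_normalize(f(xs)) for f in
                 (zscore_scores, median_distance_scores, iqr_scores)]
    combined = [sum(col) / 3 for col in zip(*ensembles)]
    cutoff = sorted(combined)[int(len(xs) * percentile / 100) - 1]
    return [i for i, c in enumerate(combined) if c >= cutoff]

# 200 ordinary transaction amounts plus two extreme outliers at the end
amounts = [50 + (i % 40) for i in range(200)] + [5000, 7200]
print(flag_top_percentile(amounts, percentile=99.0))
```

Raising the percentile shrinks the review queue at the cost of missing borderline cases, which is the trade-off the percentile-gradient idea tunes.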


Cybersecurity is Not Working: Time to Try Something Else

Many CISOs, changing jobs every two years or so, have not learnt to get things done in large firms; they have not developed the political acumen and the management experience they would need. Many have simply remained technologists and firefighters, trapped in an increasingly obsolete mindset, pushing bottom-up a tools-based, risk-based, tech-driven narrative, disconnected from what the board wants to hear, which has now shifted towards resilience and execution. This is why we may have to come to the point where we have to accept that the construction around the role of the CISO, as it was initiated in the late 90s, has served its purpose and needs to evolve. The first step in this evolution, in my opinion, is for the board to own cybersecurity as a business problem, not as a technology problem. It needs to be owned at board level in business terms, in line with the way other topics are owned at board level. This is about approaching the protection of the business in business terms, not technology terms. Cybersecurity is not a purely technological matter; it has never been and cannot be. ... There may be a need to amalgamate it with other matters such as corporate resilience, business continuity or data privacy to build up a suitable board-level portfolio, but for me this is the way forward in reversing the long-term dynamics, away from the failed historical bottom-up constructions, towards a progressive top-down approach.


How the financial services C-suite are going beyond ‘keeping the lights on’ in 2025

C-suites need to tackle three core areas: Ensure they are getting high value support for their mission-critical systems; Know how to optimise their investments; Transform their organisation without disrupting day-to-day operations. It is certainly possible if they have the time, capabilities and skill sets in-house. Yet even the most well-resourced enterprises can struggle to acquire the knowledge base and market expertise required to negotiate with multiple vendors, unlock investments or run complex change programmes single-handedly. The reality is that managing the balance needed to save costs while accelerating innovation is challenging. ... The survey demonstrates a growing necessity for CIOs and CFOs to speak each other’s language, marking a shift in organisational strategy, moving IT beyond the traditional ‘keeping the lights on’ approach, and driving a pivotal transformation in the relationship between CIOs and CFOs. As they find better ways to collaborate and innovate, businesses in the financial services space will reap the rewards of emerging technology, while falling in line with budgetary needs. Emerging technologies are being introduced thick and fast, and as a result, hard metrics aren’t always available. Instead of feeling frustrated with a lack of data, CFOs should lean in as active participants, understanding how emerging technologies like AI and cybersecurity can drive strategic value, optimise operations and create new revenue streams. 


Threat actors are scanning your environment, even if you’re not

Martin Jartelius, CISO and Product Owner at Outpost24, says that the most common blind spots that the solution uncovers are exposed management interfaces of devices, exposed databases in the cloud, misconfigured S3 storage, or just a range of older infrastructure no longer in use but still connected to the organization’s domain. All of these can provide an entry point into internal networks, and some can be used to impersonate organizations in targeted phishing attacks. But these blind spots are not indicative of poor leadership or IT security performance: “Most who see a comprehensive report of their attack surface for the first time are surprised that it is often substantially larger than they understood. Some react with discomfort and perceive their prior lack of insight as a failure, but that is not the case. ... Attack surface management is still a maturing technology field, but having a solution bringing the information together in a platform gives a more refined and in-depth insight over time. External attack surface management starts with a continuous detection of exposed assets – in Sweepatic’s case, that also includes advanced port scanning to detect all (and not just the most common) ports at risk of exploitation – then moves on to automated security analysis and then risk-based reporting.
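As a toy illustration of the first step described above, continuous detection of exposed assets, here is a minimal TCP connect check in Python. It stands up its own throwaway listener so it has something to find; real external attack surface management products such as the one mentioned scan full port ranges across whole domains and feed results into automated analysis and risk-based reporting.

```python
import socket

def port_is_open(host, port, timeout=0.5):
    """A completed TCP handshake means something is listening (exposed)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    """Return the subset of ports accepting connections: the exposed assets."""
    return sorted(p for p in ports if port_is_open(host, p))

# Self-contained demo: stand up a throwaway listener so the scan finds something.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # the OS picks a free port
listener.listen()
open_port = listener.getsockname()[1]

found = scan("127.0.0.1", [open_port, open_port + 1])
print(found)   # the listener's port shows up as exposed
listener.close()
```

Only ever run this kind of probe against infrastructure you own or are authorized to test.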


How to Run a Generative AI Developer Tooling Experiment

The last metric, rework, is a significant consideration with generative AI, as 67% of developers find that they are spending more time debugging AI-generated code. Devexperts experienced a 200% increase in rework and a 30% increase in maintenance. On the other hand, while the majority of organizations are seeing an increase in complexity and lines of code with code generators, these five engineers saw a surprising 15% decrease in lines of code created. “We can conclude that, for the live experiment, GitHub Copilot didn’t deliver the results one could expect after reading their articles,” summarized German Tebiev, the software engineering process architect who ran the experiment. He did think the results were persuasive enough to believe speed will be enabled if the right processes are put in place: “The fact that the PR throughput shows significant growth tells us that the desired speed increase can be achieved if the tasks’ flow is handled effectively.” ... Just 17% of developers responded that they think Copilot helped them save at least an hour a week, while a whopping 40% saw no time savings from using the code generator, well below the industry average. Developers were also able to share their own anecdotal experience, which is very situation-dependent. Copilot seemed to be a better choice for completing more basic lines of code for new features, less so when working with the complexity of an existing codebase.


Dark Data: Surprising Places for Business Insights

Previously, the biggest problem when dealing with dark data was its messy nature. Even though AI has been able to analyze structured data for years, unstructured or semi-structured data proved to be a hard nut to crack. Unfortunately, unstructured data constitutes the majority of dark data. However, recent advances in natural language processing (NLP), natural language understanding (NLU), speech recognition, and ML have enabled AI to deal with unstructured dark data more effectively. Today, AI can easily analyze raw inputs like customer reviews and social media comments to identify trends and sentiment. Advanced sentiment analysis algorithms can draw accurate conclusions about tone, context, emotional nuances, sarcasm, and urgency, providing businesses with deeper audience insights. For instance, Amazon uses this approach to flag fake reviews. In finance and banking, AI-powered data analysis tools are used to process transaction logs and unstructured customer communications to identify fraud risks and enhance service and customer satisfaction. Another industry where dark data mining could have huge social benefits is healthcare. Currently, this industry generates around 30% of all the data in the world. 


Is your AI product actually working? How to develop the right metric system

Not tracking whether your product is working well is like landing a plane without any instructions from air traffic control. There is absolutely no way that you can make informed decisions for your customer without knowing what is going right or wrong. Additionally, if you do not actively define the metrics, your team will identify their own back-up metrics. The risk of having multiple flavors of an ‘accuracy’ or ‘quality’ metric is that everyone will develop their own version, leading to a scenario where you might not all be working toward the same outcome. ... the complexity of operating an ML product with multiple customers translates to defining metrics for the model, too. What do I use to measure whether a model is working well? Measuring the outcome of internal teams to prioritize launches based on our models would not be quick enough; measuring whether the customer adopted solutions recommended by our model could risk us drawing conclusions from a very broad adoption metric ... Most metrics are gathered at-scale by new instrumentation via data engineering. However, in some instances (like question 3 above) especially for ML based products, you have the option of manual or automated evaluations that assess the model outputs. 


Security needs to be planned and discussed early, right at the product ideation stage

On the open-source side, where a lot of supply chain risks emerge, we leverage a state-of-the-art development pipeline. Developers follow strict security guidelines and use frameworks embedded with security tools that detect risks associated with third-party libraries—both during development and at runtime. We also have robust monitoring systems in place to detect vulnerabilities and active exploits. ... Monitoring technological events is one part, but from a core product perspective, we also need to monitor risk-based activities, like transactions that could potentially lead to fraud. For that, we have strong AI/ML developments already deployed, with a dedicated AI and data science team constantly building new algorithms to detect fraudulent actions. From a product standpoint, the system is quite mature. On the technology side, monitoring has some automation powered by AI, and we’ve also integrated tools like GitHub Copilot. Our analysts, developers, and security engineers use these technologies to quickly identify potential issues, reducing manual effort significantly. ... Security needs to be planned and discussed early—right at the product ideation stage with product managers—so that it doesn’t become a blocker at the end. Early involvement makes it much easier and avoids last-minute disruptions.


14 tiny tricks for big cloud savings

Good algorithms can boost the size of your machine when demand peaks. But clouds don’t always make it easy to shrink all the resources on disk. If your disks grow, they can be hard to shrink. By monitoring these machines closely, you can ensure that your cloud instances consume only as much as they need and no more. ... Cloud providers can offer significant discounts for organizations that make a long-term commitment to using hardware. These are sometimes called reserved instances, or usage-based discounts. They can be ideal when you know just how much you’ll need for the next few years. The downside is that the commitment locks in both sides of the deal. You can’t just shut down machines in slack times or when a project is canceled. ... Programmers like to keep data around in case they might ever need it again. That’s a good habit until your app starts scaling and it’s repeated a bazillion times. If you don’t call the user, do you really need to store their telephone number? Tossing personal data aside not only saves storage fees but limits the danger of releasing personally identifiable information. Stop keeping extra log files or backups of data that you’ll never use again. ... Cutting back on some services will save money, but the best way to save cash is to go cold turkey. There’s nothing stopping you from dumping your data into a hard disk on your desk or down the hall in a local data center. 
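The reserved-instance trade-off above comes down to simple arithmetic: the commitment is billed around the clock, so it only wins above a break-even utilization. A quick back-of-the-envelope comparison (all prices hypothetical) makes the lock-in risk concrete:

```python
def breakeven_utilization(on_demand_hourly, reserved_hourly):
    """Fraction of hours an instance must actually run before a reserved
    commitment (billed around the clock) beats paying on demand."""
    return reserved_hourly / on_demand_hourly

# Hypothetical prices: $0.40/hr on demand vs. an effective $0.25/hr reserved.
HOURS_PER_YEAR = 8760
expected_hours = 5000   # how much you realistically expect to run next year

on_demand_cost = 0.40 * expected_hours
reserved_cost = 0.25 * HOURS_PER_YEAR   # committed whether you run it or not

print(f"break-even at {breakeven_utilization(0.40, 0.25):.0%} utilization")
print(f"on-demand ${on_demand_cost:.0f} vs reserved ${reserved_cost:.0f}")
```

At 5,000 of 8,760 hours, utilization (about 57%) sits below the 62.5% break-even for these assumed prices, so on-demand stays cheaper; that is exactly the risk of committing before you know your real usage.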


Two-thirds of jobs will be impacted by AI

“Most jobs will change dramatically in the next three to four years, at least as much as the internet has changed jobs over the last 30,” Calhoon said. “Every job posted on Indeed today, from truck driver to physician to software engineer, will face some level of exposure to genAI-driven change.” ... What will emerge is a “symbiotic” relationship with an increasingly “proactive” technology that will require employees to constantly learn new skills and adapt. “AI can manage repetitive tasks, or even difficult tasks that are specific in nature, while humans can focus on innovative and strategic initiatives that drive revenue growth and improve overall business performance,” Hoffman said in an interview earlier this year. “AI is also much quicker than humans could possibly be, is available 24/7, and can be scaled to handle increasing workloads.” As AI takes over repetitive tasks, workers will shift toward roles that involve overseeing AI, solving unique problems, and applying creativity and strategy. Teams will increasingly collaborate with AI—like marketers personalizing content or developers using AI copilots. Rather than replacing humans, AI will enhance human strengths such as decision-making and emotional intelligence. Adapting to this change will require ongoing learning and a fresh approach to how work is done.

Daily Tech Digest - April 27, 2025


Quote for the day:

“Most new jobs won’t come from our biggest employers. They will come from our smallest. We’ve got to do everything we can to make entrepreneurial dreams a reality.” -- Ross Perot



7 key strategies for MLops success

Like many things in life, in order to successfully integrate and manage AI and ML into business operations, organisations first need to have a clear understanding of the foundations. The first fundamental of MLops today is understanding the differences between generative AI models and traditional ML models. Cost is another major differentiator. The calculations of generative AI models are more complex, resulting in higher latency, greater demand for computing power, and higher operational expenses. Traditional models, on the other hand, often utilise pre-trained architectures or lightweight training processes, making them more affordable for many organisations. ... Creating scalable and efficient MLops architectures requires careful attention to components like embeddings, prompts, and vector stores. Fine-tuning models for specific languages, geographies, or use cases ensures tailored performance. An MLops architecture that supports fine-tuning is more complicated, and organisations should prioritise A/B testing across various building blocks to optimise outcomes and refine their solutions. Aligning model outcomes with business objectives is essential. Metrics like customer satisfaction and click-through rates can measure real-world impact, helping organisations understand whether their models are delivering meaningful results. 


If we want a passwordless future, let's get our passkey story straight

When passkeys work, which is not always the case, they can offer a nearly automagical experience compared to the typical user ID and password workflow. Some passkey proponents like to say that passkeys will be the death of passwords. More realistically, however, at least for the next decade, they'll mean the death of some passwords -- perhaps many passwords. We'll see. Even so, the idea of killing passwords is a very worthy objective. ... With passkeys, the device that the end user is using – for example, their desktop computer or smartphone -- is the one that's responsible for generating the public/private key pair as a part of an initial passkey registration process. After doing so, it shares the public key – the one that isn't a secret – with the website or app that the user wants to log in to. The private key -- the secret -- is never shared with that relying party. This is where the tech article above has it backward. It's not "the site" that "spits out two pieces of code" saving one on the server and the other on your device. ... Passkeys have a long way to go before they realize their potential. Some of the current implementations are so alarmingly bad that it could delay their adoption. But adoption of passkeys is exactly what's needed to finally curtail a decades-long crime spree that has plagued the internet. 
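To make that registration flow concrete, here is a toy signature scheme in Python (deliberately tiny, insecure demo parameters; real passkeys use WebAuthn/FIDO2 with algorithms like ES256 on the P-256 curve, not this). It illustrates exactly the point above: the device generates the key pair, the relying party stores only the public key, and the private key never crosses the wire.

```python
import hashlib
import secrets

# Toy discrete-log group (DEMO ONLY, far too small to be secure):
# p is prime, q = 5003 is prime, q divides p - 1, and g has order q.
p, q, g = 10007, 5003, 4

def H(*parts):
    """Hash the transcript down to a challenge value in [0, q)."""
    digest = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(digest, "big") % q

# --- Registration, on the user's device ---
private_key = secrets.randbelow(q - 1) + 1   # the secret: never leaves the device
public_key = pow(g, private_key, p)          # the only thing the relying party stores

# --- Login: the device signs the server's challenge (Schnorr-style) ---
def sign(challenge, x):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    e = H(r, challenge)
    return e, (k + x * e) % q

# --- The server verifies using just the stored public key ---
def verify(challenge, sig, y):
    e, s = sig
    r = (pow(g, s, p) * pow(y, q - e, p)) % p   # equals g^k for a valid signature
    return H(r, challenge) == e

challenge = secrets.token_hex(16)
print(verify(challenge, sign(challenge, private_key), public_key))   # True
```

Note that `verify` takes only the public key: a breach of the relying party's database leaks nothing that lets an attacker log in, which is the core security argument for passkeys.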



AI: More Buzzword Than Breakthrough

While Artificial Intelligence focuses on creating systems that simulate human intelligence, Intelligent Automation leverages these AI capabilities to automate end-to-end business processes. In essence, AI is the brain that provides cognitive functions, while Intelligent Automation is the body that executes tasks using AI’s intelligence. This distinction is critical; although Artificial Intelligence is a component of Intelligent Automation, not all AI applications result in automation, and not all automation requires advanced Artificial Intelligence. ... Intelligent Automation automates and optimizes business processes by combining AI with automation tools. This integration results in increased efficiency and reduced operating costs. For instance, Intelligent Automation can streamline supply chain operations by automating inventory management, order fulfillment, and logistics, resulting in faster turnaround times and fewer errors. ... In recent years, the term “AI” has been widely used as a marketing buzzword, often applied to technologies that do not have true AI capabilities. This phenomenon, sometimes referred to as “AI washing,” involves branding traditional automation or data processing systems as AI in order to capitalize on the term’s popularity. Such practices can mislead consumers and businesses, leading to inflated expectations and potential disillusionment with the technology.


Introduction to API Management

API gateways are pivotal in managing both traffic and security for APIs. They act as the frontline interface between APIs and the users, handling incoming requests and directing them to the appropriate services. API gateways enforce policies such as rate limiting and authentication, ensuring secure and controlled access to API functions. Furthermore, they can transform and route requests, collect analytics data and provide caching capabilities. ... With API governance, businesses get the most out of their investment. The purpose of API governance is to make sure that APIs are standardized so that they are complete, compliant and consistent. Effective API governance enables organizations to identify and mitigate API-related risks, including performance concerns, compliance issues and security vulnerabilities. API governance is complex and involves security, technology, compliance, utilization, monitoring, performance and education. Organizations can make their APIs secure, efficient, compliant and valuable to users by following best practices in these areas. ... Security is paramount in API management. Advanced security features include authentication mechanisms like OAuth, API keys and JWT (JSON Web Tokens) to control access. Encryption, both in transit and at rest, ensures data integrity and confidentiality.
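One of the gateway policies named above, rate limiting, is commonly implemented as a per-client token bucket. A minimal sketch (illustrative only; a production gateway would track buckets per API key in shared storage such as Redis):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter of the kind an API gateway applies
    per client key: refills `rate` tokens/second, allows bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)   # 5 requests/second, burst of 2
results = [bucket.allow() for _ in range(3)]   # three back-to-back requests
print(results)   # the burst of 2 is exhausted before the third call
```

The gateway maps a `False` result to an HTTP 429 response, shielding the backing services from traffic spikes.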


Sustainability starts within: Flipkart & Furlenco on building a climate-conscious culture

Based on the insights from Flipkart and Furlenco, here are six actionable steps for leaders seeking to embed climate goals into their company culture: Lead with intent: Make climate goals a strategic priority, not just a CSR initiative. Signal top-level commitment and allocate leadership roles accordingly. Operationalise sustainability: Move beyond policies into process design — from green supply chains to net-zero buildings and water reuse systems. Make it measurable: Integrate climate-related KPIs into team goals, performance reviews, and business dashboards. Empower employees: Create space for staff to lead climate initiatives, volunteer, learn, and innovate. Build purpose into daily roles. Foster dialogue and storytelling: Share wins, losses, and journeys. Use Earth Day campaigns, internal newsletters, and learning modules to bring sustainability to life. Measure culture, not just carbon: Assess how employees feel about their role in climate action — through surveys, pulse checks, and feedback loops. ... Beyond the company walls, this cultural approach to climate leadership has ripple effects. Customers are increasingly drawn to brands with strong environmental values, investors are rewarding companies with robust ESG cultures, and regulators are moving from voluntary frameworks to mandatory disclosures.


Proof-of-concept bypass shows weakness in Linux security tools

An Israeli vendor was able to evade several leading Linux runtime security tools using a new proof-of-concept (PoC) rootkit that it claims reveals the limitations of many products in this space. The work of cloud and Kubernetes security company Armo, the PoC is called ‘Curing’, a portmanteau word that combines the idea of a ‘cure’ with the io_uring Linux kernel interface that the company used in its bypass PoC. Using Curing, Armo found it was possible to evade three Linux security tools to varying degrees: Falco (created by Sysdig but now a Cloud Native Computing Foundation graduated project), Tetragon from Isovalent (now part of Cisco), and Microsoft Defender. ... Armo said it was motivated to create the rootkit to draw attention to two issues. The first was that, despite the io_uring technique being well documented for at least two years, vendors in the Linux security space had yet to react to the danger. The second purpose was to draw attention to deeper architectural challenges in the design of the Linux security tools that large numbers of customers rely on to protect themselves: “We wanted to highlight the lack of proper attention in designing monitoring solutions that are forward-compatible. Specifically, these solutions should be compatible with new features in the Linux kernel and address new techniques,” said Schendel.


Insider threats could increase amid a chaotic cybersecurity environment

Most organisations have security plans and policies in place to decrease the potential for insider threats. No policy will guarantee immunity to data breaches and IT asset theft but CISOs can make sure their policies are being executed through routine oversight and audits. Best practices include access control and least privilege, which ensures employees, contractors and all internal users only have access to the data and systems necessary for their specific roles. Regular employee training and awareness programmes are also critical. Training sessions are an effective means to educate employees on security best practices such as how to recognise phishing attempts, social engineering attacks and the risks associated with sharing sensitive information. Employees should be trained in how to report suspicious activities – and there should be a defined process for managing these reports. Beyond the security controls noted above, those that govern the IT asset chain of custody are crucial to mitigating the fallout of a breach should assets be stolen by employees, former employees or third parties. The IT asset chain of custody refers to the process that tracks and documents the physical possession, handling and movement of IT assets throughout their lifecycle. A sound programme ensures that there is a clear, auditable trail of who has access to and controls the asset at any given time. 


Distributed Cloud Computing: Enhancing Privacy with AI-Driven Solutions

AI has the potential to play a game-changing role in distributed cloud computing and PETs. By enabling intelligent decision-making and automation, AI algorithms can help us optimize data processing workflows, detect anomalies, and predict potential security threats. AI has been instrumental in helping us identify patterns and trends in complex data sets. We're excited to see how it will continue to evolve in the context of distributed cloud computing. For instance, homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This means that AI models can process and analyze encrypted data without accessing the underlying sensitive information. Similarly, AI can be used to implement differential privacy, a technique that adds noise to the data to protect individual records while still allowing for aggregate analysis. In anomaly detection, AI can identify unusual patterns or outliers in data without requiring direct access to individual records, ensuring that sensitive information remains protected. While AI offers powerful capabilities within distributed cloud environments, the core value proposition of integrating PETs remains in the direct advantages they provide for data collaboration, security, and compliance. Let's delve deeper into these key benefits, challenges and limitations of PETs in distributed cloud computing.
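The differential privacy technique mentioned above is easy to sketch: a counting query has sensitivity 1 (adding or removing one record changes the count by at most 1), so adding Laplace noise with scale 1/ε yields an ε-differentially-private release. A minimal stdlib version (illustrative, not a hardened DP library):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy.
    The sensitivity of a count is 1, so the noise scale is 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
true_count = 1000                       # e.g. records matching an aggregate query
noisy = dp_count(true_count, epsilon=0.5, rng=rng)
print(round(noisy, 2))                  # close to 1000, but individual records are masked
```

Smaller ε means stronger privacy and noisier answers; the analyst sees a useful aggregate while no single record's presence can be confidently inferred.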


Mobile Applications: A Cesspool of Security Issues

"What people don't realize is you ship your entire mobile app and all your code to this public store where any attacker can download it and reverse it," Hoog says. "That's vastly different than how you develop a Web app or an API, which sit behind a WAF and a firewall and servers." Mobile platforms are difficult for security researchers to analyze, Hoog says. One problem is that developers rely too much on the scanning conducted by Apple and Google on their app stores. When a developer loads an application, either company will conduct specific scans to detect policy violations and to make malicious code more difficult to upload to the repositories. However, developers often believe the scanning is looking for security issues, but it should not be considered a security control, Hoog says. "Everybody thinks Apple and Google have tested the apps — they have not," he says. "They're testing apps for compliance with their rules. They're looking for malicious malware and just egregious things. They are not testing your application or the apps that you use in the way that people think." ... In addition, security issues on mobile devices tend to have a much shorter lifetime, because of the closed ecosystems and the relative rarity of jailbreaking. When NowSecure finds a problem, there is no guarantee that it will last beyond the next iOS or Android update, he says.


The future of testing in compliance-heavy industries

In today’s fast-evolving technology landscape, being an engineering leader in compliance-heavy industries can be a struggle. Managing risks and ensuring data integrity are paramount, but the dangers are constant when working with large data sources and systems. Traditional integration testing within the context of stringent regulatory requirements is more challenging to manage at scale. This leads to gaps, such as insufficient test coverage across interconnected systems, a lack of visibility into data flows, inadequate logging, and missed edge case conditions, particularly in third-party interactions. Due to these weaknesses, security vulnerabilities can pop up and incident response can be delayed, ultimately exposing organizations to violations and operational risk. ... API contract testing is a modern approach used to validate the expectations between different systems, making sure that any changes in APIs don’t break expectations or contracts. Changes might include removing or renaming a field and altering data types or response structures. These seemingly small updates can cause downstream systems to crash or behave incorrectly if they are not properly communicated or validated ahead of time. ... The shifting left practice has a lesser-known cousin: shifting right. Shifting right focuses on post-deployment validation using concepts such as observability and real-time monitoring techniques.
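The API contract check described above can be as small as comparing a provider response against the consumer's expected fields and types (a simplified sketch with hypothetical field names; real contract-testing tools such as Pact verify interactions recorded from consumer tests):

```python
EXPECTED_CONTRACT = {    # the consumer's expectations of the provider's response
    "id": int,
    "email": str,
    "created_at": str,
}

def violations(response, contract):
    """Compare a provider response against the consumer contract: missing or
    renamed fields and changed types both break downstream systems."""
    problems = []
    for field, ftype in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], ftype):
            problems.append(f"type changed: {field}")
    return problems

ok = {"id": 7, "email": "a@b.com", "created_at": "2025-04-27"}
broken = {"id": "7", "email": "a@b.com", "createdAt": "2025-04-27"}  # renamed + retyped

print(violations(ok, EXPECTED_CONTRACT))       # []
print(violations(broken, EXPECTED_CONTRACT))   # ['type changed: id', 'missing field: created_at']
```

Run in the provider's CI, a check like this catches the "seemingly small updates" the article warns about before they reach downstream consumers.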

Daily Tech Digest - April 25, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


Revolutionizing Application Security: The Plea for Unified Platforms

“Shift left” is a practice that focuses on addressing security risks earlier in the development cycle, before deployment. While effective in theory, this approach has proven problematic in practice as developers and security teams have conflicting priorities. ... Cloud native applications are dynamic; constantly deployed, updated and scaled, so robust real-time protection measures are absolutely necessary. Every time an application is updated or deployed, new code, configurations or dependencies appear, all of which can introduce new vulnerabilities. The problem is that it is difficult to implement real-time cloud security with a traditional, compartmentalized approach. Organizations need real-time security measures that provide continuous monitoring across the entire infrastructure, detect threats as they emerge and automatically respond to them. As Tager explained, implementing real-time prevention is necessary “to stay ahead of the pace of attackers.” ... Cloud native applications tend to rely heavily on open source libraries and third-party components. In 2021, Log4j’s Log4Shell vulnerability demonstrated how a single compromised component could affect millions of devices worldwide, exposing countless enterprises to risk. Effective application security now extends far beyond the traditional scope of code scanning and must reflect the modern engineering environment. 


AI-Powered Polymorphic Phishing Is Changing the Threat Landscape

Polymorphic phishing is an advanced form of phishing campaign that randomizes the components of emails, such as their content, subject lines, and senders’ display names, to create several almost identical emails that only differ by a minor detail. In combination with AI, polymorphic phishing emails have become highly sophisticated, creating more personalized and evasive messages that result in higher attack success rates. ... Traditional detection systems group phishing emails together to enhance their detection efficacy based on commonalities in phishing emails, such as payloads or senders’ domain names. The use of AI by cybercriminals has allowed them to conduct polymorphic phishing campaigns with subtle but deceptive variations that can evade security measures like blocklists, static signatures, secure email gateways (SEGs), and native security tools. For example, cybercriminals modify the subject line by adding extra characters and symbols, or they can alter the length and pattern of the text. ... The standard way of grouping individual attacks into campaigns to improve detection efficacy will become irrelevant by 2027. Organizations need to find alternative measures to detect polymorphic phishing campaigns that don’t rely on blocklists and that can identify the most advanced attacks.
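One defensive direction implied above is to stop matching messages exactly and start measuring similarity. A toy sketch with Python's stdlib difflib shows why minor randomization defeats exact blocklist matching while fuzzy similarity still groups the variants into one campaign:

```python
import difflib

def similarity(a, b):
    """Ratio in [0, 1]. Exact-match blocklists miss these variants,
    but fuzzy similarity still clusters them as one campaign."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

base = "Your account has been locked. Verify your details now!"
variants = [
    "Your account has been locked!! Verify your details now",   # punctuation shuffled
    "Your Account has been locked. Verify your details now.",   # case and symbol tweaks
]

for v in variants:
    assert v != base                      # exact matching fails on every variant
    print(f"{similarity(base, v):.2f}")   # yet each scores well above 0.9
```

Production detection combines many more signals (sender infrastructure, URLs, attachment hashes), but the principle is the same: cluster on similarity rather than identity.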


Does AI Deserve Worker Rights?

Chalmers et al declare that there are three things that AI-adopting institutions can do to prepare for the coming consciousness of AI: “They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern.” What would “an appropriate level of moral concern” actually look like? According to Kyle Fish, Anthropic’s AI welfare researcher, it could take the form of allowing an AI model to stop a conversation with a human if the conversation turned abusive. “If a user is persistently requesting harmful content despite the model’s refusals and attempts at redirection, could we allow the model simply to end that interaction?” Fish told the New York Times in an interview. What exactly would model welfare entail? The Times cites a comment made in a podcast last week by podcaster Dwarkesh Patel, who compared model welfare to animal welfare, stating it was important to make sure we don’t reach “the digital equivalent of factory farming” with AI. Considering Nvidia CEO Jensen Huang’s desire to create giant “AI factories” filled with millions of his company’s GPUs cranking through GenAI and agentic AI workflows, perhaps the factory analogy is apropos.


Cybercriminals switch up their top initial access vectors of choice

“Organizations must leverage a risk-based approach and prioritize vulnerability scanning and patching for internet-facing systems,” wrote Saeed Abbasi, threat research manager at cloud security firm Qualys, in a blog post. “The data clearly shows that attackers follow the path of least resistance, targeting vulnerable edge devices that provide direct access to internal networks.” Greg Linares, principal threat intelligence analyst at managed detection and response vendor Huntress, said, “We’re seeing a distinct shift in how modern attackers breach enterprise environments, and one of the most consistent trends right now is the exploitation of edge devices.” Edge devices, ranging from firewalls and VPN appliances to load balancers and IoT gateways, serve as the gateway between internal networks and the broader internet. “Because they operate at this critical boundary, they often hold elevated privileges and have broad visibility into internal systems,” Linares noted, adding that edge devices are often poorly maintained and not integrated into standard patching cycles. Linares explained: “Many edge devices come with default credentials, exposed management ports, secret superuser accounts, or weakly configured services that still rely on legacy protocols — these are all conditions that invite intrusion.”


5 tips for transforming company data into new revenue streams

Data monetization can be risky, particularly for organizations that aren’t accustomed to handling financial transactions. There’s an increased threat of security breaches as other parties become aware that you’re in possession of valuable information, ISG’s Rudy says. Another risk is unintentionally using data you don’t have a right to use or discovering that the data you want to monetize is of poor quality or doesn’t integrate across data sets. Ultimately, the biggest risk is that no one wants to buy what you’re selling. Strong security is essential, Agility Writer’s Yong says. “If you’re not careful, you could end up facing big fines for mishandling data or not getting the right consent from users,” he cautions. If a data breach occurs, it can deeply damage an enterprise’s reputation. “Keeping your data safe and being transparent with users about how you use their info can go a long way in avoiding these costly mistakes.” ... “Data-as-a-service, where companies compile and package valuable datasets, is the base model for monetizing data,” he notes. However, insights-as-a-service, where the provider supplies customers with prescriptive/predictive modeling capabilities, can demand a higher valuation. Another consideration is offering an insights platform-as-a-service, where subscribers can securely integrate their data into the provider’s insights platform.


Are AI Startups Faking It Till They Make It?

"A lot of VC funds are just kind of saying, 'Hey, this can only go up.' And that's usually a recipe for failure - when that starts to happen, you're becoming detached from reality," Nnamdi Okike, co-founder and managing partner at 645 Ventures, told Tradingview. Companies are branding themselves as AI-driven, even when their core technologies lack substantive AI components. A 2019 study by MMC Ventures found 40% of surveyed "AI startups" in Europe showed no evidence of AI integration in their products or services. And this was before OpenAI further raised the stakes with the launch of ChatGPT in 2022. It's a slippery slope. Even industry behemoths have had to clarify the extent of their AI involvement. Last year, tech giant Amazon, the fourth-richest company in the world, pushed back on allegations that its AI-powered "Just Walk Out" technology installed at its physical grocery stores for a cashierless checkout was largely being driven by around 1,000 workers in India who manually checked almost three quarters of the transactions. Amazon termed these reports "erroneous" and "untrue," adding that the staff in India were not reviewing live footage from the stores but simply reviewing the system. The incentive to brand as AI-native has only intensified.


From deployment to optimisation: Why cloud management needs a smarter approach

As companies grow, so does their cloud footprint. Managing multiple cloud environments—across AWS, Azure, and GCP—often results in fragmented policies, security gaps, and operational inefficiencies. A Multi-Cloud Maturity Research Report by Vanson Bourne states that nearly 70% of organisations struggle with multi-cloud complexity, despite 95% agreeing that multi-cloud architectures are critical for success. Companies are shifting away from monolithic architecture to microservices, but managing distributed services at scale remains challenging. ... Regulatory requirements like SOC 2, HIPAA, and GDPR demand continuous monitoring and updates. The challenge is not just staying compliant but ensuring that security configurations remain airtight. IBM’s Cost of a Data Breach Report reveals that the average cost of a data breach in India reached ₹195 million in 2024, with cloud misconfiguration accounting for 12% of breaches. The risk is twofold: businesses either overprovision resources—wasting money—or leave environments under-secured, exposing them to breaches. Cyber threats are also evolving, with attackers increasingly targeting cloud environments. Phishing and credential theft accounted for 18% of incidents each, according to the IBM report. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to establish a beachhead and then move laterally to find the organisation’s crown jewels: their most valuable data. Within a financial or banking organisation it is likely there is a database on their server that contains sensitive customer information. A database is essentially a complicated spreadsheet, from which a hacker can simply run a SELECT query and copy everything. In this instance data security is essential; however, many organisations confuse data security with cybersecurity. Organisations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. ... To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques like tokenisation or format-preserving encryption to minimise the impact of a breach. A database protected by Privacy Enhancing Technologies (PETs), such as tokenisation, becomes unreadable to hackers if the decryption key is stored offsite. Without breaching the organisation’s data protection vendor to access the key, an attacker cannot decrypt the data – making the process significantly more complicated. This can be a major deterrent to hackers.
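A minimal sketch of the tokenisation idea (the names and toy API are invented, not any vendor's product): the application database holds only opaque tokens, while the token-to-value mapping lives with a separate data-protection service, so a stolen database dump is unreadable on its own.

```python
import secrets

# Hypothetical token vault, held by a separate data-protection service ("offsite").
vault = {}

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random token; the real value lives only in the vault."""
    token = "tok_" + secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Only a party with access to the vault can recover the original value."""
    return vault[token]

# The application database stores only tokens.
record = {"name": tokenize("Alice Smith"), "card": tokenize("4111 1111 1111 1111")}

print(record["card"].startswith("tok_"))  # True: the stored value is opaque
print(detokenize(record["card"]))         # the original, recoverable only via the vault
```

An attacker who exfiltrates `record` gets nothing usable without also compromising the separately held vault, which is exactly the extra step the article says deters hackers.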


Why Testing is a Long-Term Investment for Software Engineers

At its core, a test is a contract. It tells the system—and anyone reading the code—what should happen when given specific inputs. This contract helps ensure that as the software evolves, its expected behavior remains intact. A system without tests is like a building without smoke detectors. Sure, it might stand fine for now, but the moment something catches fire, there’s no safety mechanism to contain the damage. ... Over time, all code becomes legacy. Business requirements shift, architectures evolve, and what once worked becomes outdated. That’s why refactoring is not a luxury—it’s a necessity. But refactoring without tests? That’s walking blindfolded through a minefield. With a reliable test suite, engineers can reshape and improve their code with confidence. Tests confirm that behavior hasn’t changed—even as the internal structure is optimized. This is why tests are essential not just for correctness, but for sustainable growth. ... There’s a common myth: tests slow you down. But seasoned engineers know the opposite is true. Tests speed up development by reducing time spent debugging, catching regressions early, and removing the need for manual verification after every change. They also allow teams to work independently, since tests define and validate interfaces between components.
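The "contract" framing can be made concrete with a small illustrative example (the discount rule is invented): the test pins down expected behavior for specific inputs, so any refactor that changes that behavior fails immediately instead of silently.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule used purely for illustration."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_contract():
    # The contract: these inputs must always yield these outputs.
    assert apply_discount(100.0, 20) == 80.0   # nominal case
    assert apply_discount(19.99, 0) == 19.99   # zero discount is identity
    try:
        apply_discount(50.0, 150)              # out-of-range input must fail loudly
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_apply_discount_contract()
```

The implementation of `apply_discount` can be rewritten freely; as long as the test passes, the contract, and thus the behavior anyone depends on, is intact.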


Why the road from passwords to passkeys is long, bumpy, and worth it - probably

While the current plan rests on a solid technical foundation, many important details are barriers to short-term adoption. For example, setting up a passkey for a particular website should be a rather seamless process; however, fully deactivating that passkey still relies on a manual multistep process that has yet to be automated. Further complicating matters, some current user-facing implementations of passkeys are so different from one another that they're likely to confuse end-users looking for a common, recognizable, and easily repeated user experience. ... Passkey proponents talk about how passkeys will be the death of the password. However, the truth is that the password died long ago -- just in a different way. We've all used passwords without considering what is happening behind the scenes. A password is a special kind of secret -- a shared or symmetric secret. For most online services and applications, setting a password requires us to first share that password with the relying party, the website or app operator. While history has proven how shared secrets can work well in very secure and often temporary contexts, if the HaveIBeenPwned.com website teaches us anything, it's that site and app authentication isn't one of those contexts. Passwords are too easily compromised.

Daily Tech Digest - April 24, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni



Algorithm can make AI responses increasingly reliable with less computational overhead

The algorithm uses the structure according to which the language information is organized in the AI's large language model (LLM) to find related information. The models divide the language information in their training data into word parts. The semantic and syntactic relationships between the word parts are then arranged as connecting arrows—known in the field as vectors—in a multidimensional space. The dimensions of this space, which can number in the thousands, arise from the relationship parameters that the LLM independently identifies during training on general data. ... Relational arrows pointing in the same direction in this vector space indicate a strong correlation. The larger the angle between two vectors, the less two units of information relate to one another. The SIFT algorithm developed by ETH researchers now uses the direction of the relationship vector of the input query (prompt) to identify those information relationships that are closely related to the question but at the same time complement each other in terms of content. ... By contrast, the most common method used to date for selecting the information suitable for the answer, known as the nearest neighbor method, tends to accumulate redundant information that is widely available. The difference between the two methods becomes clear when looking at an example of a query prompt that is composed of several pieces of information.
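The contrast can be sketched with a toy diversity-aware selection (an MMR-style heuristic standing in for SIFT, whose actual algorithm differs; the embeddings are invented): plain nearest-neighbor ranking picks two near-duplicate passages, while penalizing redundancy lets a relevant-but-complementary passage in.

```python
import math

def cos(a, b):
    """Cosine similarity: 1.0 means the relationship vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

query = [1.0, 1.0, 0.0]
passages = {                            # toy 3-d embeddings, purely illustrative
    "fact_a":      [1.00, 0.9, 0.0],
    "fact_a_copy": [0.98, 0.9, 0.0],    # near-duplicate of fact_a
    "fact_b":      [0.90, 1.0, 0.4],    # relevant but complementary
}

# Nearest neighbors: rank by similarity to the query alone -> redundant picks.
nn = sorted(passages, key=lambda k: cos(query, passages[k]), reverse=True)[:2]

# Diversity-aware greedy pick: relevance minus similarity to what is already chosen.
chosen, candidates = [], dict(passages)
while len(chosen) < 2:
    def score(k):
        redundancy = max((cos(passages[k], passages[c]) for c in chosen), default=0.0)
        return cos(query, passages[k]) - redundancy   # penalty weight of 1 here
    best = max(candidates, key=score)
    chosen.append(best)
    del candidates[best]

print(nn)      # the two near-duplicates crowd out fact_b
print(chosen)  # the duplicate is penalized; the complementary passage gets in
```

The redundancy penalty is the essential ingredient: without it, whatever points closest to the query wins every slot, which is exactly the nearest-neighbor failure mode the article describes.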


Bring Your Own Malware: ransomware innovates again

The approach taken by DragonForce and Anubis shows that cybercriminals are becoming increasingly sophisticated in the way they market their services to potential affiliates. This marketing approach, in which DragonForce positions itself as a fully-fledged service platform and Anubis offers different revenue models, reflects how ransomware operators behave like “real” companies. Recent research has also shown that some cybercriminals even hire pentesters to test their ransomware for vulnerabilities before deploying it. So it’s not just dark web sites or a division of tasks, but a real ecosystem of clear options for “consumers.” We may also see a modernization of dark web forums, which currently resemble the online platforms of the 2000s. ... Although these developments in the ransomware landscape are worrying, Secureworks researchers also offer practical advice for organizations to protect themselves. Above all, defenders must take “proactive preventive” action. Fortunately and unfortunately, this mainly involves basic measures. Fortunately, because the policies to be implemented are manageable; unfortunately, because there is still a lack of universal awareness of such security practices. In addition, organizations must develop and regularly test an incident response plan to quickly remediate ransomware activities.


Phishing attacks thrive on human behaviour, not lack of skill

Phishing draws heavily from principles of psychology and classic social engineering. Attacks often play on authority bias, prompting individuals to comply with requests from supposed authority figures, such as IT personnel, management, or established brands. Additionally, attackers exploit urgency and scarcity by sending warnings of account suspensions or missed payments, and manipulate familiarity by referencing known organisations or colleagues. Psychologs has explained that many phishing techniques bear resemblance to those used by traditional confidence tricksters. These attacks depend on inducing quick, emotionally-driven decisions that can bypass normal critical thinking defences. The sophistication of phishing is furthered by increasing use of data-driven tactics. As highlighted by TechSplicer, attackers are now gathering publicly available information from sources like LinkedIn and company websites to make their phishing attempts appear more credible and tailored to the recipient. Even experienced professionals often fall for phishing attacks, not due to a lack of intelligence, but because high workload, multitasking, or emotional pressure make it difficult to properly scrutinise every communication. 

What Steve Jobs can teach us about rebranding

Humans like to think of themselves as rational animals, but it comes as no news to marketers that we are motivated to a greater extent by emotions. Logic brings us to conclusions; emotion brings us to action. Whether we are creating a poem or a new brand name, we won’t get very far if we treat the task as an engineering exercise. True, names are formed by putting together parts, just as poems are put together with rhythmic patterns and with rhyming lines, but that totally misses what is essential to a name’s success or a poem’s success. Consider Microsoft and Apple as names. One is far more mechanical, and the other much more effective at creating the beginning of an experience. While both companies are tremendously successful, there is no question that Apple has the stronger, more emotional experience. ... Different stakeholders care about different things. Employees need inspiration; investors need confidence; customers need clarity on what’s in it for them. Break down these audiences and craft tailored messages for each group. Identifying the audience groups can be challenging. While the first layer is obvious—customers, employees, investors, and analysts—all these audiences are easy to find and message. However, what is often overlooked is the individuals in those audiences who can more positively influence the rebrand. It may be a particular journalist, or a few select employees. 


Coaching AI agents: Why your next security hire might be an algorithm

Like any new team member, AI agents need onboarding before operating at maximum efficacy. Without proper onboarding, they risk misclassifying threats, generating excessive false positives, or failing to recognize subtle attack patterns. That’s why more mature agentic AI systems will ask for access to internal documentation, historical incident logs, or chat histories so the system can study them and adapt to the organization. Historical security incidents, environmental details, and incident response playbooks serve as training material, helping it recognize threats within an organization’s unique security landscape. Alternatively, these details can help the agentic system recognize benign activity. For example, once the system knows which VPN services are allowed or which users are authorized to conduct security testing, it will know to mark some alerts related to those services or activities as benign. ... Adapting AI isn’t a one-time event; it’s an ongoing process. Like any team member, agentic AI deployments improve through experience, feedback, and continuous refinement. The first step is maintaining human-in-the-loop oversight. Like any responsible manager, security analysts must regularly review AI-generated reports, verify key findings, and refine conclusions when necessary.
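The allowlist idea can be sketched in a few lines (the service names, user names, and alert fields are all hypothetical): organizational context turns alerts that would otherwise escalate into auto-closed benign findings.

```python
# Onboarded organizational context: approved services and authorized testers.
APPROVED_VPNS = {"corp-vpn.example.com"}
AUTHORIZED_TESTERS = {"pentest-svc"}

def triage(alert: dict) -> str:
    """Auto-mark alerts benign when they match known-good organizational context."""
    if alert.get("type") == "vpn_login" and alert.get("service") in APPROVED_VPNS:
        return "benign"
    if alert.get("type") == "port_scan" and alert.get("user") in AUTHORIZED_TESTERS:
        return "benign"
    return "escalate"   # everything else still goes to a human analyst

print(triage({"type": "vpn_login", "service": "corp-vpn.example.com"}))  # benign
print(triage({"type": "port_scan", "user": "unknown-host"}))             # escalate
```

A real agentic system would learn this context from documentation and incident history rather than hard-coded sets, but the effect is the same: known-good activity stops consuming analyst time.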


Cyber insurance is no longer optional, it’s a strategic necessity

Once the DPDPA fully comes into effect, it will significantly alter how companies approach data protection. Many enterprises are already making efforts to manage their exposure, but despite their best intentions, they can still fall victim to breaches. We anticipate that the implementation of DPDPA will likely lead to an increase in the uptake of cyber insurance. This is because the Act clearly outlines that companies may face penalties in the event of a data breach originating from their environment. Since cyber insurance policies often include coverage for fines and penalties, this will become an increasingly important risk-transfer tool. ... The critical question has always been: how can we accurately quantify risk exposure? Specifically, if a certain event were to occur, what would be the financial impact? Today, there are advanced tools and probabilistic models available that allow organisations to answer this question with greater precision. Scenario analyses can now be conducted to simulate potential events and estimate the resulting financial impact. This, in turn, helps enterprises determine the appropriate level of insurance coverage, making the process far more data-driven and objective. Post-incident technology also plays a crucial role in forensic analysis. When an incident occurs, the immediate focus is on containment. 
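A minimal sketch of the scenario-analysis idea (all frequency and severity parameters are invented, not drawn from any actuarial model): simulate many possible years of incidents, then read expected annual loss and a tail percentile, a candidate for sizing coverage, off the resulting distribution.

```python
import random

random.seed(42)

def simulate_annual_loss(trials=10_000, incident_rate=0.8, mu=13.0, sigma=1.2):
    """Monte Carlo over one year: Poisson-distributed incident count
    (via exponential inter-arrival times), lognormal loss per incident."""
    losses = []
    for _ in range(trials):
        n, t = 0, random.expovariate(incident_rate)
        while t < 1.0:                      # count incidents arriving within the year
            n += 1
            t += random.expovariate(incident_rate)
        losses.append(sum(random.lognormvariate(mu, sigma) for _ in range(n)))
    losses.sort()
    expected = sum(losses) / trials
    var95 = losses[int(0.95 * trials)]      # 95th-percentile annual loss
    return expected, var95

expected, var95 = simulate_annual_loss()
print(f"expected annual loss: {expected:,.0f}")
print(f"95th-percentile annual loss (coverage sizing candidate): {var95:,.0f}")
```

The gap between the mean and the tail percentile is the point of the exercise: insuring to the expected loss leaves the enterprise exposed in precisely the bad years the policy exists for.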


Adversary-in-the-Middle Attacks Persist – Strategies to Lessen the Impact

One of the most recent examples of an AiTM attack is the attack on Microsoft 365 with the PhaaS toolkit Rockstar 2FA, an updated version of the DadSec/Phoenix kit. In 2024, a Microsoft 365 user opened an attachment that led to a phony sign-in page, which relayed the login to the legitimate service. By completing what looked like a routine identity verification session, the employee unknowingly authenticated the attacker, granting them entry to the account. ... As more businesses move online, from banks to critical services, fraudsters are more tempted by new targets. The challenges often depend on location and sector, but one thing is clear: Fraud operates without limitations. In the United States, AiTM fraud is progressively targeting financial services, e-commerce and iGaming. For financial services, this means that cybercriminals are intercepting transactions or altering payment details, inducing hefty losses. Concerning e-commerce and marketplaces, attackers are exploiting vulnerabilities to intercept and modify transactions through data manipulation, redirecting payments to their accounts. ... As technology advances and fraud continues to evolve with it, we face the persistent challenge of increased fraudster sophistication, threatening businesses of all sizes.


From legacy to lakehouse: Centralizing insurance data with Delta Lake

Centralizing data and creating a Delta Lakehouse architecture significantly enhances AI model training and performance, yielding more accurate insights and predictive capabilities. The time-travel functionality of the delta format enables AI systems to access historical data versions for training and testing purposes. A critical consideration emerges regarding enterprise AI platform implementation. Modern AI models, particularly large language models, frequently require real-time data processing capabilities. Traditional machine learning models target and solve a single use case, whereas GenAI has the capability to learn and address multiple use cases at scale. In this context, Delta Lake effectively manages these diverse data requirements, providing a unified data platform for enterprise GenAI initiatives. ... This unification of data engineering, data science and business intelligence workflows contrasts sharply with traditional approaches that required cumbersome data movement between disparate systems (e.g., data lake for exploration, data warehouse for BI, separate ML platforms). Lakehouse creates a synergistic ecosystem, dramatically accelerating the path from raw data collection to deployed AI models generating tangible business value, such as reduced fraud losses, faster claims settlements, more accurate pricing and enhanced customer relationships.
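The time-travel capability is exposed directly in Delta Lake SQL; the table name and version number below are invented for illustration:

```sql
-- Train or test against the table exactly as it stood at an earlier version.
SELECT * FROM insurance.claims VERSION AS OF 42;

-- Or pin a snapshot by timestamp, e.g. the quarter-end used for model validation.
SELECT * FROM insurance.claims TIMESTAMP AS OF '2024-12-31 23:59:59';
```

A training job pinned to a fixed `VERSION AS OF` stays reproducible even as the live table continues to ingest new claims, which is what makes historical versions usable as stable training and testing sets.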


How AI and Data-Driven Decision Making Are Reshaping IT Ops

Rather than relying on intuition, IT decision-makers now lean on insights drawn from operational data, customer feedback, infrastructure performance, and market trends. The objective is simple: make informed decisions that align with broader business goals while minimizing risk and maximizing operational efficiency. With the help of analytics platforms and business intelligence tools, these insights are often transformed into interactive dashboards and visual reports, giving IT teams real-time visibility into performance metrics, system anomalies, and predictive outcomes. A key evolution in this approach is the use of predictive intelligence. Traditional project and service management often fall short when it comes to anticipating issues or forecasting success. ... AI also helps IT teams uncover patterns that are not immediately visible to the human eye. Predictive models built on historical performance data allow organizations to forecast demand, manage workloads more efficiently, and preemptively resolve issues before they disrupt service. This shift not only reduces downtime but also frees up resources to drive innovation across the enterprise. Moreover, companies that embrace data as a core business asset tend to nurture a culture of curiosity and informed experimentation. 


The DFIR Investigative Mindset: Brett Shavers On Thinking Like A Detective

You must be technical. You have to be technically proficient. You have to be able to do the actual technical work. And I’m not to rely on- not to bash a vendor training for a tool training, you have to have tool training, but you have to have exact training on “This is what the registry is, this is how you pull the-” you have to have that information first. The basics. You gotta have the basics, you have the fundamentals. And a lot of people wanna skip that. ... The DF guys, it’s like a criminal case. It’s “This is the computer that was in the back of the trunk of a car, and that’s what we got.” And the IR side is “This is our system and we set up everything and we can capture what we want. We can ignore what we want.” So if you’re looking at it like “Just in case something is gonna be criminal we might want to prepare a little bit,” right? So that makes DF guys really happy. If they’re coming in after the fact of an IR that becomes a case, a criminal case or a civil litigation where the DF comes in, they go, “Wow, this is nice. You guys have everything preserved, set up as if from the start you were prepared for this.” And it’s “We weren’t really prepared. We were prepared for it, we’re hoping it didn’t happen, we got it.” But I’ve walked in where drives are being wiped on a legal case. 


Daily Tech Digest - April 23, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


MLOps vs. DevOps: Key Differences — and Why They Work Better Together

Arguably, the greatest difference between DevOps and MLOps is that DevOps is, by most definitions, an abstract philosophy, whereas MLOps comes closer to prescribing a distinct set of practices. Ultimately, the point of DevOps is to encourage software developers to collaborate more closely with IT operations teams, based on the idea that software delivery processes are smoother when both groups work toward shared goals. In contrast, collaboration is not a major focus for MLOps. MLOps workflows do imply some collaboration between stakeholders — such as data scientists, AI model developers, and model testers — but it is not the organizing principle that it is in DevOps. ... Another key difference is that DevOps centers solely on software development. MLOps is also partly about software development to the extent that model development entails writing software. However, MLOps also addresses other processes — like model design and post-deployment management — that don't overlap closely with DevOps as traditionally defined. ... Differing areas of focus lead to different skill requirements for DevOps versus MLOps. To thrive at DevOps, you must master DevOps tools and concepts like CI/CD and infrastructure-as-code (IaC).


Transforming quality engineering with AI

AI-enabled quality engineering promises to be a game changer, driving a level of precision and efficiency that is beyond the reach of traditional testing. AI algorithms can analyse historical data to identify patterns and predict quality issues, enabling organisations to take early action; machine learning tools detect anomalies with great accuracy, ensuring nothing is missed. Self-healing test scripts update automatically, without manual intervention. Machine Learning models automate test selection, picking the most relevant ones, while reducing both manual effort and errors. In addition, AI can prioritise test cases based on criticality, thus optimising resources and improving testing outcomes. Further, it can integrate with CI/CD pipelines, providing real-time feedback on code quality, and distributing updates automatically to ensure software applications are always ready for deployment. ... AI brings immense value to quality engineering, but also presents a few challenges. To function effectively, algorithms require high-quality datasets, which may not always be available. Organisations will likely need to invest significant resources in acquiring AI talent or building skills in-house. There needs to be a clear plan for integrating AI with existing testing tools and processes. Finally, there are concerns such as protecting data privacy and confidentiality, and implementing Responsible AI.
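As a hedged sketch of the test-prioritization idea (the scoring rule and field names are invented, not any vendor's method): rank test cases by historical failure rate weighted by declared criticality, so flaky-and-critical cases run first under a constrained budget.

```python
# Toy run history; a real system would mine this from CI results.
test_history = [
    {"name": "test_payment_flow", "runs": 200, "failures": 18, "criticality": 3},
    {"name": "test_ui_tooltip",   "runs": 200, "failures": 2,  "criticality": 1},
    {"name": "test_auth_expiry",  "runs": 50,  "failures": 9,  "criticality": 3},
]

def priority(t):
    """Higher score = run earlier: failure-prone tests guarding critical paths."""
    return (t["failures"] / t["runs"]) * t["criticality"]

ordered = sorted(test_history, key=priority, reverse=True)
print([t["name"] for t in ordered])
# flaky-and-critical cases first; stable low-risk UI checks last
```

Production systems layer on more signals, such as code-change proximity and coverage, but even this two-factor score captures the resource-optimization point: the riskiest tests get the earliest feedback.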


The Role of AI in Global Governance

Aurora drew parallels with transformative technologies such as electricity and the internet. "If AI reaches some communities late, it sets them far behind," he said. He pointed to Indian initiatives such as Bhashini for language inclusion, e-Sanjeevani for telehealth, Karya for employment through AI annotation and farmer.ai in Baramati, which boosted farmers' incomes by 30% to 40%. Schnorr offered a European perspective, stressing that AI's transformative impact on economies and societies demands trustworthiness. Reflecting on the EU's AI Act, he said its dual aim is fostering innovation while protecting rights. "We're reviewing the Act to ensure it doesn't hinder innovation," Schnorr said, advocating for global alignment through frameworks such as the G7's Hiroshima Code of Conduct and bilateral dialogues with India. He underscored the need for rules to make AI human-centric and accessible, particularly for small and medium enterprises, which form the backbone of both German and Indian economies. ... Singh elaborated on India's push for indigenous AI models. "Funding compute is critical, as training models is resource-intensive. We have the talent and datasets," he said, citing India's second-place ranking in GitHub AI projects per the Stanford AI Index. "Building a foundation model isn't rocket science - it's about providing the right ingredients."


Cisco ThousandEyes: resilient networks start with global insight

To tackle the challenges that arise from (common or uncommon) misconfigurations and other network problems, we need an end-to-end topology, Vaccaro reiterates. ThousandEyes (and Cisco as a whole) have recently put a lot of extra work into this. We saw a good example of this recently during Mobile World Congress. There, ThousandEyes announced Connected Devices. This is intended for service providers and extends their insight into the performance of their customers’ networks in their home environments. The goal, as Vaccaro describes it, is to help service providers see deeper so that they can catch an outage or other disruption quickly, before it impacts customers who might be streaming their favorite show or getting on a work call. ... The Digital Operational Resilience Act (DORA) will be no news to readers who are active in the financial world. You can see DORA as a kind of advanced NIS2, only directly enforced by the EU. It is a collection of best practices that many financial institutions must adhere to. Most of it is fairly obvious. In fact, we would call it basic hygiene when it comes to resilience. However, one component under DORA will have caused financial institutions some stress and will continue to do so: they must now adhere to new expectations when it comes to the services they provide and the resilience of their third-party ICT dependencies.


A Five-Step Operational Maturity Model for Benchmarking Your Team

An operational maturity model is your blueprint for building digital excellence. It gives you the power to benchmark where you are, spot the gaps holding you back and build a roadmap to where you need to be. ... Achieving operational maturity starts with knowing where you are and defining where you want to go. From there, organizations should focus on four core areas: Stop letting silos slow you down. Unify data across tools and teams to enable faster incident resolution and improve collaboration. Integrated platforms and a shared data view reduce context switching and support informed decision-making. Because in today’s fast-moving landscape, fragmented visibility isn’t just inefficient — it’s dangerous. ... Standardize what matters. Automate what repeats. Give your teams clear operational frameworks so they can focus on innovation instead of navigation. Eliminate alert noise and operational clutter that’s holding your teams back. Less noise, more impact. ... Deploy automation and AI across the incident lifecycle, from diagnostics to communication. Prioritize tools that integrate well and reduce manual tasks, freeing teams for higher-value work. ... Use data and automation to minimize disruptions and deliver seamless experiences. Communicate proactively during incidents and apply learnings to prevent future issues.


The Future is Coded: How AI is Rewriting the Rules of Decision Theaters

At the heart of this shift is the blending of generative AI with strategic foresight practices. In the past, planning for the future involved static models and expert intuition. Now, AI models (including advanced neural networks) can churn through reams of historical data and real-time information to project trends and outcomes with uncanny accuracy. Crucially, these AI-powered projections don’t operate in a vacuum – they’re designed to work with human experts. By integrating AI’s pattern recognition and speed with human intuition and domain expertise, organizations create a powerful feedback loop. ... The fusion of generative AI and foresight isn’t confined to tech companies or futurists’ labs – it’s already reshaping industries. For instance, in finance, banks and investment firms are deploying AI to synthesize market signals and predict economic trends with greater accuracy than traditional econometric models. These AI systems can simulate how different strategies might play out under various future market conditions, allowing policymakers in central banks or finance ministries to test interventions before committing to them. The result is a more data-driven, preemptive strategy – allowing decision-makers to adjust course before a forecasted risk materializes. 


More accurate coding: Researchers adapt Sequential Monte Carlo for AI-generated code

The researchers noted that AI-generated code can be powerful, but it often produces code that disregards the semantic rules of programming languages. Other methods to prevent this can distort models or are too time-consuming. Their method makes the LLM adhere to programming language rules by discarding, early in the process, code outputs that are unlikely to work, and “allocat[ing] efforts towards outputs that are most likely to be valid and accurate.” ... The researchers developed an architecture that brings SMC to code generation “under diverse syntactic and semantic constraints.” “Unlike many previous frameworks for constrained decoding, our algorithm can integrate constraints that cannot be incrementally evaluated over the entire token vocabulary, as well as constraints that can only be evaluated at irregular intervals during generation,” the researchers said in the paper. Key features of adapting SMC sampling to model generation include a proposal distribution, in which token-by-token sampling is guided by cheap constraints; importance weights, which correct for biases; and resampling, which reallocates compute effort toward promising partial generations. ... AI models have made engineers and other coders work faster and more efficiently. They have also given rise to a whole new kind of software engineer: the vibe coder.
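The three ingredients named above can be sketched in miniature. This is a toy illustration of the general SMC pattern, not the researchers' architecture: the vocabulary, the fixed "LM" probabilities, and the balanced-expression constraint are all invented for the example.

```python
import random

# Toy "LM": fixed token probabilities (an assumption, not a real model).
VOCAB = ["1", "2", "+", "<eos>"]
LM_PROBS = {"1": 0.3, "2": 0.3, "+": 0.3, "<eos>": 0.1}

def cheap_constraint(prefix, token):
    """Incrementally checkable syntax rule for expressions like 1+2+1:
    an operator may not start the string, repeat, or precede <eos>."""
    if token in ("+", "<eos>"):
        return bool(prefix) and prefix[-1] != "+"
    return True  # digits are always allowed

def smc_generate(num_particles=20, max_len=6, seed=0):
    rng = random.Random(seed)
    particles = [[] for _ in range(num_particles)]
    weights = [1.0] * num_particles
    for _ in range(max_len):
        new_particles, new_weights = [], []
        for prefix, w in zip(particles, weights):
            if prefix and prefix[-1] == "<eos>":  # finished: carry forward
                new_particles.append(prefix)
                new_weights.append(w)
                continue
            # Proposal: LM distribution restricted by the cheap constraint.
            allowed = [t for t in VOCAB if cheap_constraint(prefix, t)]
            mass = sum(LM_PROBS[t] for t in allowed)
            token = rng.choices(allowed, [LM_PROBS[t] / mass for t in allowed])[0]
            # Importance weight corrects for renormalising the proposal.
            new_particles.append(prefix + [token])
            new_weights.append(w * mass)
        # Resampling: reallocate effort toward high-weight partial generations.
        total = sum(new_weights)
        particles = [list(p) for p in rng.choices(
            new_particles, [w / total for w in new_weights], k=num_particles)]
        weights = [1.0] * num_particles
    return ["".join(p).replace("<eos>", "") for p in particles]
```

Every surviving particle satisfies the syntax rule by construction, which is the payoff the paper describes: invalid outputs are pruned early rather than generated in full and rejected.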


You Can't Be in Recovery Mode All the Time — Superna CEO

The proactive approach, he explains, shifts their position in the security lifecycle: "Now we're not responding with a very tiny blast radius and instantly recovering. We are officially left-of-the-boom; we are now ‘the incident never occurred.’" Hesterberg reveals that the next wave of innovation focuses on leveraging the unique visibility his company has into how critical data is accessed. “We have a keen understanding of where your critical data is and what users, what servers, and what services access that data.” From a scanning, patching, and upgrade standpoint, Hesterberg shares that large organizations often face the daunting task of addressing hundreds or even thousands of systems flagged for vulnerabilities daily. To help streamline this process, he says that his team is working on a new capability that integrates with the tools these enterprises already depend on. This upcoming feature will surface, in a prioritized way, the specific servers or services that interact with an organization's most critical data, highlighting the assets that matter most. By narrowing down the list, Hesterberg notes, teams can focus on the most potentially dangerous exposures first. Instead of trying to patch everything, he says, “If you know the 15, 20, or 50 that are most dangerous, potentially most dangerous, you're going to prioritize them.”


When confusion becomes a weapon: How cybercriminals exploit economic turmoil

Defending against these threats doesn’t start with buying more tools. It starts with building a resilient mindset. In a crisis, security can’t be an afterthought – it must be a guiding principle. Organizations relying on informal workflows or inconsistent verification processes are unknowingly widening their attack surface. To stay ahead, protocols must be defined before uncertainty takes hold. Employees should be trained not just to spot technical anomalies, but to recognize emotional triggers embedded in legitimate-looking messages. Resilience, at its core, is about readiness – not just to respond, but also to anticipate. Organizations that view economic disruption as a dual threat, both financial and cyber, will position themselves to lead with control rather than react in chaos. This means establishing behavioral baselines, implementing layered authentication, and adopting systems that validate, not just facilitate. As we navigate continued economic uncertainty, we are reminded once again that cybersecurity is no longer just about technology. It’s about psychology, communication, and foresight. Defending effectively means thinking tactically, staying adaptive, and treating clarity as a strategic asset.
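A behavioral baseline, in its simplest form, is just a per-user statistical norm against which new activity is scored. The sketch below is a deliberately minimal illustration of that idea – the metric (daily login counts), the three-sigma threshold, and all the numbers are hypothetical, and a production system would track many signals, not one.

```python
import statistics

def build_baseline(values):
    """Baseline = mean and standard deviation of a user's historical
    metric (e.g. daily login count or typical transfer amount)."""
    return statistics.mean(values), statistics.stdev(values)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    away from the user's own norm (a classic z-score check)."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical history: a week of normal daily logins for one user.
logins = [4, 5, 3, 4, 6, 5, 4]
baseline = build_baseline(logins)
```

The design point is that the threshold is relative to each user's own history, so the same raw number can be routine for one account and a red flag for another.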


The productivity revolution – enhancing efficiency in the workplace

In difficult economic times, when businesses are tightening the purse strings, productivity improvements may often be overlooked in favour of cost reductions. However, cutting costs is merely a short-term solution. By focusing on sustainable productivity gains, businesses will reap dividends in the long term. To achieve this, organisations must turn their focus to technology. Some technology solutions, such as cloud computing, ERP systems, project management and collaboration tools, produce significant flexibility or performance advantages compared to legacy approaches and processes. Whilst such solutions carry an initial expense, the long-term benefits are often multiples of the investment – cost reductions, time savings and employee motivation, to name just a few. And all of those technology categories are being enhanced with artificial intelligence – for example, adding virtual agents to help us do more, quickly. ... At a time when businesses and labour markets are struggling with employee retention and availability, it has become more critical than ever for organisations to focus on effective training and wellbeing initiatives. Minimising staff turnover and building up internal skill sets is vital for businesses looking to improve their key outputs. Getting this right will enable organisations to build smarter and more effective productivity strategies.