Daily Tech Digest - February 08, 2025


Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford


Google's DMARC Push Pays Off, but Email Security Challenges Remain

Large email senders are not the only groups quickening the pace of DMARC adoption. The latest Payment Card Industry Data Security Standard (PCI DSS) version 4.0 requires DMARC for all organizations that handle credit card information, while the European Union's Digital Operational Resilience Act (DORA) makes DMARC a necessity for its ability to report on and block email impersonation, Red Sift's Costigan says. "Mandatory regulations and legislation often serve as the tipping point for most organizations," he says. "Failures to do reasonable, proactive cybersecurity — of which email security and DMARC is obviously a part — are likely to meet with costly regulatory actions and the prospect of class action lawsuits." Overall, the authentication specification is working as intended, which explains its arguably rapid adoption, says Roger Grimes, a data-driven-defense evangelist at security awareness and training firm KnowBe4. Other cybersecurity standards, such as DNSSEC and IPSEC, have been around longer, but DMARC adoption has outpaced them, he maintains. "DMARC stands alone as the singular success as the most widely implemented cybersecurity standard introduced in the last decade," Grimes says.
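
As background for the excerpt above, a domain's DMARC policy is published as a DNS TXT record at _dmarc.<domain>, so it can be inspected programmatically. The sketch below is a rough illustration only: it assumes the third-party dnspython package (2.x) is installed and uses example.com as a placeholder domain; it is not a complete DMARC validator.

    # Minimal sketch: look up and parse a domain's DMARC policy record.
    # Assumes the third-party "dnspython" package is installed (pip install dnspython).
    import dns.resolver

    def get_dmarc_policy(domain: str) -> dict:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        for rdata in answers:
            record = b"".join(rdata.strings).decode()   # e.g. "v=DMARC1; p=reject; rua=mailto:..."
            if record.lower().startswith("v=dmarc1"):
                # Split "tag=value" pairs into a dictionary for easy inspection.
                return dict(
                    part.strip().split("=", 1)
                    for part in record.split(";") if "=" in part
                )
        return {}

    if __name__ == "__main__":
        policy = get_dmarc_policy("example.com")          # hypothetical domain
        print(policy.get("p", "no policy found"))         # "none", "quarantine", or "reject"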


Can Your Security Measures Be Turned Against You?

Over-reliance on certain security products might also allow attackers to extend their reach across various organizations. For example, the recent failure of CrowdStrike’s endpoint detection and response (EDR) tool, which caused widespread global outages, highlights the risks associated with depending too heavily on a single security solution. Although this incident wasn’t the result of a cyber attack, it clearly demonstrates the potential issues that can arise from such reliance. For years, the cybersecurity community has been aware of the risks posed by vulnerabilities in security products. A notable example from 2015 involved a critical flaw in FireEye’s email protection system, which allowed attackers to execute arbitrary commands and potentially take full control of the device. More recently, a vulnerability in Proofpoint’s email security service was exploited in a phishing campaign that impersonated major corporations like IBM and Disney. ... Windows SmartScreen is designed to shield users from malicious software, phishing attacks, and other online threats. Initially launched with Internet Explorer, SmartScreen has been a core part of Windows since version 8.


Why Zero Trust Will See Alert Volumes Rocket

As the complexity of zero trust environments grows, so does the need for tools to handle the data explosion. Hypergraphs and generative AI are emerging as game-changers, enabling SOC teams to connect disparate events and uncover hidden patterns. Telemetry collected in zero trust environments is a treasure trove for analytics. Every interaction, whether permitted or denied, is logged, providing the raw material for identifying anomalies. The cybersecurity industry has set standards for exchanging and documenting threat intelligence. By leveraging structured frameworks like MITRE ATT&CK, MITRE D3FEND, and OCSF, activities can be enriched with contextual information, enabling better detection and decision-making. Hypergraphs go beyond traditional graphs by representing relationships between multiple events or entities. They can correlate disparate events. For example, a scheduled task combined with denied AnyDesk traffic and browsing to MegaUpload might initially seem unrelated. However, hypergraphs can connect these dots, revealing the signature of a ransomware attack like Akira. By analysing historical patterns, hypergraphs can also predict attack patterns, allowing SOC teams to anticipate the next steps of an attacker and defend proactively.
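
To make the correlation idea concrete, here is a toy sketch of hyperedge-style grouping: telemetry events are connected by the entities they share (host and user in this example), and a combination that is benign event-by-event is flagged when it co-occurs. The events and the rule are invented for illustration, not a real Akira detection.

    # Toy sketch of hyperedge-style correlation over zero trust telemetry.
    from collections import defaultdict

    events = [
        {"host": "ws-042", "user": "alice", "type": "scheduled_task_created"},
        {"host": "ws-042", "user": "alice", "type": "denied_outbound", "app": "anydesk"},
        {"host": "ws-042", "user": "alice", "type": "web_browsing", "site": "megaupload"},
        {"host": "ws-101", "user": "bob",   "type": "web_browsing", "site": "megaupload"},
    ]

    # Build one "hyperedge" per (host, user) pair: the set of event types connected to both entities.
    hyperedges = defaultdict(set)
    for e in events:
        hyperedges[(e["host"], e["user"])].add(e["type"])

    # A pattern that is unremarkable event-by-event but noteworthy in combination.
    suspicious_combo = {"scheduled_task_created", "denied_outbound", "web_browsing"}

    for (host, user), types in hyperedges.items():
        if suspicious_combo <= types:
            print(f"Correlated activity on {host} by {user}: {sorted(types)}")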


Capable Protection: Enhancing Cloud-Native Security

Much like in a game of chess, anticipating your opponent’s moves and strategizing accordingly is key to security. Understanding the value and potential risks associated with NHIs and Secrets is the first step towards securing your digital environment. Remediation prioritization plays a crucial role in managing NHIs. The identification and classification process of NHIs enables businesses to react promptly and adequately to any potential vulnerabilities. Furthermore, awareness and education are fundamental to minimize human-induced breaches. ... Cybersecurity must adapt. The traditional, human-centric approach to cybersecurity is inadequate. Integrating an NHI management strategy into your cybersecurity plan is therefore a strategic move. Not only does it enhance an organization’s security posture, but it also facilitates regulatory compliance. Coupled with the potential for substantial cost savings, it’s clear that NHI management is an investment with significant returns. For many organizations, the challenge today lies in striking a balance between speed and security. Rapid deployment of applications and digital services is essential for maintaining competitive advantage, yet this can often be at odds with the need for adequate cybersecurity. 


Attackers Exploit Cryptographic Keys for Malware Deployment

Microsoft recommends developers avoid using machine keys copied from public sources and rotate keys regularly to mitigate risks. The company also removed key samples from its documentation and provided a script for security teams to identify and replace publicly disclosed keys in their environments. Microsoft Defender for Endpoint also includes an alert for publicly exposed ASP.NET machine keys, though the alert itself does not indicate an active attack. Organizations running ASP.NET applications, especially those deployed in web farms, are urged to replace fixed machine keys with auto-generated values stored in the system registry. If a web-facing server has been compromised, rotating the machine keys alone may not eliminate persistent threats. Microsoft recommends conducting a full forensic investigation to detect potential backdoors or unauthorized access points. In high-risk cases, security teams should consider reformatting and reinstalling affected systems to prevent further exploitation, the report said. Organizations should also implement best practices such as encrypting sensitive configuration files, following secure DevOps procedures and upgrading applications to ASP.NET 4.8.
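
As a rough illustration of the kind of check involved (this is not the script Microsoft provides), the hypothetical sketch below walks a directory tree and flags web.config files whose <machineKey> element carries hard-coded key material rather than the auto-generated values the guidance recommends. The scan root is a placeholder.

    # Hypothetical helper: flag web.config files with hard-coded machineKey material.
    import os
    import xml.etree.ElementTree as ET

    def find_static_machine_keys(root_dir):
        """Return (path, attributes) pairs for <machineKey> elements with fixed keys."""
        findings = []
        for dirpath, _dirs, files in os.walk(root_dir):
            for name in files:
                if name.lower() != "web.config":
                    continue
                path = os.path.join(dirpath, name)
                try:
                    tree = ET.parse(path)
                except ET.ParseError:
                    continue  # skip malformed or non-XML configs
                for mk in tree.iter("machineKey"):
                    static_keys = [
                        attr for attr in ("validationKey", "decryptionKey")
                        if not mk.attrib.get(attr, "AutoGenerate").startswith("AutoGenerate")
                    ]
                    if static_keys:
                        findings.append((path, static_keys))
        return findings

    for path, keys in find_static_machine_keys(r"C:\inetpub"):   # example root, adjust as needed
        print(f"hard-coded {', '.join(keys)} in {path}")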


The race to AI in 2025: How businesses can harness connectivity to pick up pace

When it comes to optimizing cloud workloads and migrating to available data centers, connectivity is the “make or break” technology. This is why Internet Exchanges (IXs) – physical platforms where multiple networks interconnect to exchange traffic directly with one another via peering – have become indispensable. An IX allows businesses to bypass the public Internet and find the shortest and fastest network pathways for their data, dramatically improving performance and reducing latency for all participants. Importantly, smart use of an IX facility will enable businesses to connect seamlessly to data centers outside of their “home” region, removing geography as a barrier and easing the burden on data center hubs. This form of connectivity is becoming increasingly popular, with the number of IXs in the US surging by more than 350 percent in the past decade. The use of IXs itself is nothing new, but what is relatively new is the neutral model they now employ. A neutral IX isn’t tied to a specific carrier or data center, which means businesses have more connectivity options open to them, increasing redundancy and enhancing resilience. Our own research in 2024 revealed that more than 80 percent of IXs in the US are now data center and carrier-neutral, making it the dominant interconnection model.


The hidden threat of neglected cloud infrastructure

Because the bucket had been left unattended for over a decade, malicious actors could have reregistered it to deliver malware or launch devastating supply chain attacks. Fortunately, researchers notified CISA, which promptly secured the vulnerable resource. The incident illustrates how even organizations dedicated to cybersecurity can fall prey to the dangers of neglected digital infrastructure. This story is not an anomaly. It indicates a systemic issue that spans industries, governments, and corporations. ... Entities attempting to communicate with these abandoned assets include government organizations (such as NASA and state agencies in the United States), military networks, Fortune 100 companies, major banks, and universities. The fact that these large organizations were still relying on mismanaged or forgotten resources is a testament to the pervasive nature of this oversight. The researchers emphasized that this issue isn’t specific to AWS, the organizations responsible for these resources, or even a single industry. It reflects a broader systemic failure to manage digital assets effectively in the cloud computing age. The researchers noted the ease of acquiring internet infrastructure—an S3 bucket, a domain name, or an IP address—and a corresponding failure to institute strong governance and life-cycle management for these resources.
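
One small piece of the life-cycle governance the researchers call for can be sketched as follows: given bucket names still referenced by code, documentation, or pipelines, report any that no longer exist, since a vanished name could be re-registered by anyone and start serving attacker-controlled content. The sketch assumes boto3 and configured AWS credentials; the bucket names are placeholders.

    # Sketch of a dangling-reference check for S3 bucket names still referenced somewhere.
    import boto3
    from botocore.exceptions import ClientError

    referenced_buckets = ["legacy-install-artifacts", "old-agency-updates"]  # placeholders

    s3 = boto3.client("s3")
    for name in referenced_buckets:
        try:
            s3.head_bucket(Bucket=name)
            print(f"{name}: exists (verify it is still owned by your organization)")
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "404":
                print(f"{name}: does NOT exist -- dangling reference, claimable by anyone")
            elif code == "403":
                print(f"{name}: exists but is owned by another account (access denied)")
            else:
                raise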


DevOps Evolution: From Movement to Platform Engineering in the AI Era

After nearly 20 years of DevOps, Grabner sees an opportunity to address historical confusion while preserving core principles. “We want to solve the same problem – reduce friction while improving developer and operational efficiency. We want to automate, monitor, and share.” Platform engineering represents this evolution, enabling organizations to scale DevOps best practices through self-service capabilities. “Platform engineering allows us to scale DevOps best practices in an enterprise organization,” Grabner explains. “What platform engineering does is provide self-services to engineers so they can do everything we wanted DevOps to do for us.” At Dynatrace Perform 2025, the company announced several innovations supporting this evolution. The enhanced Davis AI engine now enables preventive operations, moving beyond reactive monitoring to predict and prevent incidents before they occur. This includes AI-powered generation of artifacts for automated remediation workflows and natural language explanations with contextual recommendations. The evolution is particularly evident in how observability is implemented. “Traditionally, observability was always an afterthought,” Grabner explains. 


Bridging the IT Gap: Preparing for a Networking Workforce Evolution

People coming out of university today are far more likely to be experienced in Amazon Web Services (AWS) and Azure than in Border Gateway Protocol (BGP) and Ethernet virtual private network (EVPN). They have spent more time with Kubernetes than with a router or switch command line. Sure, when pressed into action and supported by senior staff or technical documentation, they can perform. But the industry is notorious for its bespoke solutions, snowflake workflows, and poor documentation. None of this ought to be a surprise. At least part of the allure of the cloud for many is that it carries the illusion of pushing problems to another team. Of course, this is hardly true. No company should abdicate architectural and operational responsibility entirely. But in our industry’s rush to new solutions, there are countless teams for which this was an unspoken objective. Regardless, what happens to companies when the people skilled enough to manage the complexity are no longer on call? Perhaps you’re a pessimist and feel that the next generation of IT pros is somehow less capable than in the past. The NASA engineers who landed a man on the moon may have similar things to say about today’s rocket scientists who rely heavily on tools to do the math for them.


A View on Understanding Non-Human Identities Governance

NHIs inherently require connections to other systems and services to fulfill their purpose. This interconnectivity means every NHI becomes a node in a web of interdependencies. From an NHI governance perspective, this necessitates maintaining an accurate and dynamic inventory of these connections to manage the associated risks. For example, if a single NHI is compromised, what does it connect to, and what could an attacker access or move laterally into? Proper NHI governance must include tools to map and monitor these relationships. While there are many ways to go about this manually, what we actually want is an automated way to tell what is connected to what, what is used for what, and by whom. When thinking in terms of securing our systems, we can leverage another important fact about all NHIs in a secured application to build that map: they all, necessarily, have secrets. ... Essentially, two risks make understanding the scope of a secret critical for enterprise security. First is that misconfigured or over-privileged secrets can inadvertently grant access to sensitive data or critical systems, significantly increasing the attack surface. Imagine accidentally giving write privileges to a system that can access your customers' PII. That is a ticking clock waiting for a threat actor to find and exploit it.
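
As a toy illustration of that map, the sketch below records each non-human identity with the secrets it holds and the systems those secrets unlock, so the blast radius of a compromised identity can be answered directly. The inventory data is invented; a real program would populate it from discovery tooling.

    # Toy NHI-to-secret-to-system map (illustrative data only).
    nhi_inventory = {
        "payments-service": {
            "db-password":   {"system": "orders-db",      "scope": "read-write"},
            "queue-token":   {"system": "billing-queue",  "scope": "publish"},
        },
        "report-generator": {
            "warehouse-key": {"system": "data-warehouse", "scope": "read-only"},
        },
    }

    def blast_radius(nhi: str):
        """Return the systems reachable if this identity's secrets are stolen."""
        return [
            (secret, meta["system"], meta["scope"])
            for secret, meta in nhi_inventory.get(nhi, {}).items()
        ]

    for secret, system, scope in blast_radius("payments-service"):
        print(f"{secret} -> {system} ({scope})")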


Daily Tech Digest - February 07, 2025


Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer


Data, creativity, AI, and marketing: where do we go from here?

While causes of inefficient data coordination vary, silos remain the most frequent offender. There is still a widespread tendency to collect and store data in isolated buckets that are often made all the more challenging by lingering reliance on manual processing — as underscored by the fact that four in ten cross-industry employees cite structuring, preparing and manipulating information among their top data difficulties. Therefore, a sizable number of organizations are working with fragmented and inconsistent data that requires time-consuming wrangling and is often subject to human error. The obvious problem this poses is a lack of the comprehensive data needed to inform sound decisions. At the AI-assisted marketing level, faulty data has a high potential to jeopardise creative efforts, resulting in irrelevant ads that miss the mark with target audiences and brand goals, as well as misguided strategic moves based on skewed analysis. Of course, there are no quick fixes to tackle these complications. But businesses can reach greater data maturity and efficacy by reconfiguring their orchestration methods. With a streamlined system that persistently delivers consolidated data, marketers will be equipped to extract key performance and consumer insights that steer refined and precise AI-enhanced activity.


How AI is transforming strategy development

Beyond these well-understood risks, gen AI presents five additional considerations for strategists. First, it elevates the importance of access to proprietary data. Gen AI is accelerating a long-term trend: the democratization of insights. It has never been easier to leverage off-the-shelf tools to rapidly generate insights that are the building blocks of any strategy. As the adoption of AI models spreads, so do the consequences of relying on commoditized insights. After all, companies that use generic inputs will produce generic outputs, which lead to generic strategies that, almost by definition, lead to generic performance or worse. As a result, the importance of curating proprietary data ecosystems (more on these below) that incorporate quantitative and qualitative inputs will only increase. Second, the proliferation of data and insights elevates the importance of separating signal from noise. This has long been a challenge, but gen AI compounds it. We believe that as the technology matures, it will be able to effectively pull out the signals that matter, but it is not there yet. Third, as the ease of insight generation grows, so does the value of executive-level synthesis. Business leaders—particularly those charged with making strategic decisions—cannot operate effectively if they are buried in data, even if that data is nothing but signal. 


Why Cybersecurity Is Everyone’s Responsibility

Ultimately, cybersecurity is everyone’s responsibility because the fallout affects us all when something goes wrong. When a company goes through a data breach – say it’s ransomware – a number of people are held to task, and even more are impacted. First, the CEO and CISO will rightly be held accountable. Next, security managers will bear their share of the blame and be scrutinized for how they handled the situation. Then, laws and lawmakers will be audited to see if the proper rules were in place. The organization will be investigated for compliance violations, and if found guilty, will pay regulatory fines, legal costs, and maybe lose professional licenses. If the company cannot recover from the reputational damage, revenue will be lost, and jobs will be cut. Lastly, and most importantly, the users who lost their data can likely be impacted for years, even a lifetime. Bank accounts and credit cards will need to be changed, identity theft will be a pressing risk, and in the case of healthcare data breaches, sensitive, unchangeable information could be leaked or used as blackmail against the victims. ... The burden of cybersecurity rests with us all. There is an old saying attributed to Dale Carnegie: “Here lies the body of William Jay, who died maintaining his right of way— He was right, dead right, as he sped along, but he’s just as dead as if he were wrong.”


Spy vs spy: Security agencies help secure the network edge

“Products designed with Secure by Design principles prioritize the security of customers as a core business requirement, rather than merely treating it as a technical feature,” the introductory web page said. “During the design phase of a product’s development lifecycle, companies should implement Secure by Design principles to significantly decrease the number of exploitable flaws before introducing them to the market for widespread use or consumption. Out-of-the-box, products should be secure with additional security features such as multi-factor authentication (MFA), logging, and single sign-on (SSO) available at no extra cost.” ... However, she doesn’t feel that lumping together internet connected firewalls, routers, IoT devices, and OT systems in an advisory is helpful to the community, and “neither is calling them ‘edge devices,’ because it assumes that enterprise IT is the center of the universe and the ‘edge’ is out there.” “That may be true for firewalls, routers, and VPN gateways, but not for OT systems,” she continued. ... Many are internet connected to support remote operations and maintenance, she noted, so “the goal there should be to give advice on how to remote into those systems securely, and the tone of the advisories should be targeted to the production realities where IT security tools and processes are not always a good idea.”


Will the end of Windows 10 accelerate CIO interest in AI PCs?

“The vision around AI PCs is that, over time, more of the models, starting with small language models, and then quantized large language models … more of those workloads will happen locally, faster, with lower latency, and you won’t need to be connected to the internet and it should be less expensive,” the IDC analyst adds. “You’ll pay a bit more for an AI PC but [the AI workload is] not on the cloud and then arguably there’s more profit and it’s more secure.” ... “It’s smart for CIOs to consider some early deployments of these to bring the AI closer to the employees and processes,” Melby says. “A side benefit is that it keeps the compute local and reduces cyber risk to a degree. But it takes a strategic view and precision targeting. The costs of AI PCs/laptops are at a premium right now, so we really need a compelling business case, and the potential for reduced cloud costs could help break loose those kinds of justifications.” Not all IT leaders are on board with running AI on PCs and laptops. “Unfortunately, there are many downsides to this approach, including being locked into the solution, upgrades becoming more difficult, and not being able to benefit from any incremental improvements,” says Tony Marron, managing director of Liberty IT at Liberty Mutual.


Self-sovereign identity could transform fraud prevention, but…

Despite these challenges, SSI has the potential to be a powerful tool in the fight against fraud. Consider the growing use of mobile driver’s licenses (mDLs). These digital credentials allow users to prove their identity quickly and securely without exposing unnecessary personal information. Unlike traditional forms of identification, which often reveal more data than needed, SSI-based credentials operate on the principle of minimal disclosure, only sharing the required details. This limits the amount of exploitable information in circulation and reduces identity theft risk. Another promising area is passwordless authentication. For years, we’ve talked about the death of the password, yet reliance on weak, easily compromised credentials persists. SSI could accelerate the transition to more secure authentication mechanisms, using biometrics and cryptographic certificates instead of passwords. By eliminating centralized repositories of login credentials, businesses can significantly reduce the risk of credential-stuffing attacks and phishing attempts. However, the likelihood of a fully realized SSI wallet that consolidates identity documents, payment credentials and other sensitive information remains low, at least in the near future. The convenience factor isn’t there yet, and without significant consumer demand, businesses have little motivation to push for mass adoption.
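
The minimal-disclosure principle described above can be illustrated with a toy sketch: the holder releases only the attributes a verifier requests, or a derived claim such as "over 18", rather than the whole credential. Real SSI wallets back this with cryptographic proofs and issuer signatures, all of which is omitted here; the credential fields are invented.

    # Toy illustration of minimal disclosure from a verifiable credential.
    from datetime import date

    credential = {            # what the issuer placed in the holder's wallet (illustrative)
        "name": "A. Example",
        "date_of_birth": date(1990, 5, 1),
        "address": "12 Sample Street",
        "licence_number": "D1234567",
    }

    def present(requested: list[str]) -> dict:
        """Return only the requested attributes, deriving an age check instead of exposing the birthdate."""
        disclosed = {}
        for attr in requested:
            if attr == "over_18":
                age_years = (date.today() - credential["date_of_birth"]).days // 365
                disclosed["over_18"] = age_years >= 18
            elif attr in credential:
                disclosed[attr] = credential[attr]
        return disclosed

    print(present(["over_18", "licence_number"]))   # nothing else leaves the wallet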


The Staging Bottleneck: Microservices Testing in FinTech

Two common scaling strategies exist: mocking dependencies, which sacrifices fidelity and risks failures in critical integrations, or duplicating staging environments, which is costly and complex due to compliance needs. Teams often resort to shared environments, causing bottlenecks, interference and missed bugs — slowing development and increasing QA overhead. ... By multiplexing the baseline staging setup, sandboxes provide tailored environments for individual engineers or QA teams without adding compliance risks or increasing maintenance burdens, as they inherit the same compliance and configuration frameworks as production. These environments allow teams to work independently while maintaining fidelity to production conditions. Sandboxes integrate seamlessly with external APIs and dependencies, replicating real-world scenarios such as rate limits, timeouts and edge cases. This enables robust testing of workflows and edge cases while preserving isolation to avoid disruptions across teams or systems. ... By adopting sandboxes, FinTech organizations can enable high-quality, efficient development cycles, ensuring compliance while unlocking innovation at scale. This paradigm shift away from monolithic staging environments toward dynamic, scalable sandboxes gives FinTech companies a critical competitive advantage.


From Code to Culture: Adapting Workplaces to the AI Era

As AI renovates industries, it also exposes a critical gap in workforce readiness. The skills required to excel in an AI-driven world are evolving rapidly, and many employees find their current capabilities misaligned with these new demands. In this context, reskilling is not just a response to technological disruption; it is a strategic necessity for ensuring long-term organisational resilience. Today’s workforce is broadening its skillset at an unprecedented pace. Professionals are acquiring 40% more diverse skills compared to five years ago, reflecting the growing need to adapt to the complexities of AI-integrated workplaces. AI literacy has emerged as a crucial area of focus, encompassing abilities like prompt engineering and proficiency with tools. ... Beyond its operational benefits, AI is reimagining innovation and strategic decision-making in a volatile business environment characterised by economic uncertainty and rapid technological shifts. However, organisations must tread carefully. AI is not a panacea, and its effectiveness depends on thoughtful implementation. Ethical considerations like data privacy, algorithmic bias, and the potential for job displacement must be addressed to ensure that AI augments rather than undermines human potential. Transparent communication about AI’s role in the workplace can foster trust and help employees understand its benefits.


CIOs and CISOs grapple with DORA: Key challenges, compliance complexities

“As often happens with such ambitious regulations, the path to compliance is particularly complex,” says Giuseppe Ridulfo, deputy head of the organization department and head of IS at Banca Etica. “This is especially true for smaller entities, such as Banca Etica, which find themselves having to face significant structural challenges. DORA, although having shared objectives, lacks a principle of proportionality that takes into account the differences between large institutions and smaller banks.” This is compounded for smaller organizations due to the prevalence of outsourcing for these firms, Ridulfo explains. “This operating model, which allows access to advanced technologies and skills, clashes with the stringent requirements of the regulation, in particular those that impose rigorous control over third-party suppliers and complex management of contracts relating to essential or important functions,” he says. ... The complexity of DORA, therefore, is not in the text itself, although substantial, but in the work it entails for compliance. As Davide Baldini, lawyer and partner of the ICT Legal Consulting firm, points out, “DORA is a very clear law, as it is a regulation, which is applied equally in all EU countries and contains very detailed provisions. 


True Data Freedom Starts with Data Integrity

Data integrity is essential to ensuring business continuity, and the movement of data poses a significant risk. A lack of pre-migration testing is the main cause of issues such as data corruption and data loss during the movement of data. These issues lead to unexpected downtime, reputational damage, and loss of essential information. As seen in this year’s global incident, one fault, no matter how small, can result in a significant negative impact on the business and its stakeholders. This incident sends a clear message – testing before implementation is essential. Without proper testing, organizations cannot identify potential issues and implement corrective measures. ... This includes testing for both functionality, or how well the system operates after migration, and economics, the cost-effectiveness of the system or application. Functionality testing ensures a system continues to meet expectations. Economics testing involves examining resource consumption, service costs and overall scalability to ascertain whether the solution is economically sustainable for the business. This is particularly important with cloud-based migrations. While organizations can manually conduct these audits, tools on the market can also help conduct regular automated data integrity audits.
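
A very small example of what a pre/post-migration integrity check can look like: compare row counts and an order-independent checksum of rows between source and target. The rows below are in-memory stand-ins; in practice they would be streamed from the two systems being compared.

    # Minimal sketch of a migration integrity check using row counts and checksums.
    import hashlib

    def table_fingerprint(rows):
        """Row count plus an order-independent digest built from per-row hashes."""
        digests = sorted(
            hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()
            for row in rows
        )
        combined = hashlib.sha256("".join(digests).encode()).hexdigest()
        return len(digests), combined

    source_rows = [(1, "alice", 120.50), (2, "bob", 99.00)]
    target_rows = [(2, "bob", 99.00), (1, "alice", 120.50)]   # same data, different order

    assert table_fingerprint(source_rows) == table_fingerprint(target_rows), "migration mismatch"
    print("row counts and checksums match")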


Daily Tech Digest - February 06, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Here’s How Standardization Can Fix the Identity Security Problem

Fragmentation in identity security doesn’t only waste resources, it also leaves businesses exposed to threat actors, leading to potential reputational and financial damage if systems are compromised. Misconfigurations often arise when teams are pressured to deliver quickly without adequate frameworks. Fragmentation also forces teams to juggle mismatched tools, creating gaps in oversight. These gaps become weak points for attackers, leading to cascading failures. ... Standardization transforms the complexity of identity management into a straightforward, structured process. Instead of piecing together bespoke solutions, leveraging established frameworks can deliver robust, scalable and future-proof security. ... Developers often need to weigh short-term challenges against long-term gains. Adopting standardized identity frameworks is one decision where the long-term benefits are clear. Increased efficiency, security and scalability contribute to a more sustainable development process. Standardization equips us with ready-to-use solutions for essential features, freeing us to focus on innovation. It also enables applications to meet compliance requirements without added strain on teams. By investing in frameworks like IPSIE, we can future-proof our systems while reducing the burden on individual developers.


How Data Contracts Support Collaboration between Data Teams

Data contracts are for data what APIs are for software systems, Christ said. They are an interface specification between a data provider and their data consumers. Data contracts specify the provided data model with the syntax, format, and semantics, but also contain data quality guarantees, service-level objectives, and terms and conditions for using the data, Christ mentioned. They also define the owner of the provided data product that is responsible if there are any questions or issues, he added. Data mesh is an important driver for data contracts, as data mesh introduces distributed ownership of data products, Christ said. Before that, we usually had just one central team that was responsible for all data and BI activities, with no need to specify interfaces with other teams. ... Data providers benefit by gaining visibility into which consumers are accessing their data. Permissions can be automated accordingly, and when changes need to be implemented in a data product, a new version of the data contract can be introduced and communicated with the consumers, Christ said. With data contracts, we have very high-quality metadata, Christ said. This metadata can be further leveraged to optimize governance processes or build an enterprise data marketplace, enabling better discoverability, transparency, and automated access management across the organization to make data available for more teams.
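
To make the ingredients concrete, here is an illustrative data contract expressed as a plain Python structure, with a tiny validation helper. Field names, thresholds, and the contract layout are invented for illustration; real implementations often express contracts in a YAML specification, but the parts described above are the same: owner, data model, quality guarantees, service-level objectives, and terms of use.

    # Illustrative data contract (all names and thresholds are invented).
    orders_contract = {
        "id": "orders.v2",
        "owner": "checkout-team",                      # who answers questions and fixes issues
        "model": {
            "order_id":   {"type": "string", "required": True},
            "amount_eur": {"type": "decimal", "required": True},
            "placed_at":  {"type": "timestamp", "required": True},
        },
        "quality": {"null_rate_max": 0.01, "duplicate_keys": False},
        "slo": {"freshness_minutes": 60, "availability": "99.5%"},
        "terms": "Internal analytics use only; no re-sharing outside the organization.",
    }

    def validate_record(record: dict, contract: dict) -> list[str]:
        """Return violations of the contract's data model for one record."""
        problems = []
        for field, spec in contract["model"].items():
            if spec["required"] and record.get(field) is None:
                problems.append(f"missing required field: {field}")
        return problems

    print(validate_record({"order_id": "A-1", "amount_eur": None}, orders_contract))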


How Agentic AI will be Weaponized for Social Engineering Attacks

To combat advanced social engineering attacks, consider building or acquiring an AI agent that can assess changes to the attack surface, detect irregular activities indicating malicious actions, analyze global feeds to detect threats early, monitor deviations in user behavior to spot insider threats, and prioritize patching based on vulnerability trends. ... Security awareness training is a non-negotiable component to bolstering human defenses. Organizations must go beyond traditional security training and leverage tools that can do things like assign engaging content to users based on risk scores and failure rates, dynamically generate quizzes and social engineering scenarios based on the latest threats, trigger bite-sized refreshers, etc. ... Human intuition and vigilance are critical in combating social engineering threats. Organizations must double down on fostering a culture of cybersecurity, educating employees on the risks of social engineering and the impact on the organization, training to identify and report such threats, and empowering them with tools that can improve security behavior. Gartner predicts that by 2028, a third of our interactions with AI will shift from simply typing commands to fully engaging with autonomous agents that can act on their own goals and intentions. Obviously, cybercriminals won’t be far behind in exploiting these advancements for their misdeeds.

As businesses expand their cloud services and integrate AI, IoT, and other digital tools, the attack surface grows exponentially. Cybercriminals are exploiting this vast surface with increasingly sophisticated tactics, including AI-driven attacks that can bypass traditional security measures. Lack of visibility across multicloud environments: Many businesses rely on a combination of private, public, and hybrid cloud solutions, which can create visibility gaps. Security teams struggle to manage and monitor resources across various platforms, making it difficult to detect vulnerabilities or respond to threats in real time. Misconfigurations and human error: Cloud misconfigurations remain one of the leading causes of data breaches. ... Ongoing risk assessments are essential for identifying vulnerabilities and understanding the potential attack vectors in cloud environments. Regular penetration testing can help organisations identify and patch security gaps proactively. These assessments, combined with continuous monitoring, ensure the security posture evolves alongside emerging threats. Centralised threat detection and response: Implementing a centralised security platform that aggregates data from multiple cloud environments can streamline threat detection and response. By correlating network events with cloud activities, security teams can gain deeper insights into potential risks and reduce the mean time to resolution (MTTR) for incidents.


Is 2025 the year of quantum computing?

As quantum computing research gradually inches toward real-world usability, you might wonder where we’ll see the impacts of this technology, both short- and long-term. One of the most immediately important areas is cryptography. Since a quantum computer can take on many states simultaneously, something like factoring large numbers can proceed in parallel, relying on the superposition of particle states to explore many possible outcomes at once. There is also a tantalizing potential for cross-over between machine learning and quantum computing. Here, the probabilistic nature of neural networks and AI in general seems to lend itself to being modeled at a more fundamental level, and with far greater efficiency, using the hardware capabilities of quantum computers. How powerful would an AI system be if it rested on quantum hardware? Another area is the development of alternative energy sources, including fusion. Using matter itself to model reality opens up possibilities we can’t yet fully predict. Drug discovery and material design are also areas of interest for quantum calculations. At the hardware level, quantum systems allow us to use matter itself to model the complexity of designing useful matter. These and other exciting developments, especially in error correction, seem to indicate quantum computing’s time is finally coming. 


The overlooked risks of poor data hygiene in AI-driven organizations

A significant risk posed by AI-enabled apps is called ‘AI oversharing,’ where enterprise applications expose sensitive information through poorly defined access controls. This is especially prevalent in retrieval-augmented generation (RAG) applications when original source permissions aren’t honoured throughout the system. Imagine for a minute that you were an enterprise with millions of documents containing decades of enterprise knowledge and you wanted to leverage AI through a RAG-based architecture. A typical approach is to load all of those documents into a vector database. If you exposed that data through an AI chatbot without honouring the original permissions on those documents, then anyone issuing a prompt could access any of that data. ... Organizations need to implement a methodical process for assessing and preparing data for AI applications, as sophisticated attacks like prompt injection and unauthorized data access become more prevalent. Begin with a thorough inventory of your data stores, including file and document stores, support and ticketing systems, and any other data sources that you'll draw your enterprise data from. Then work to understand its potential use in AI applications and identify critical gaps or inconsistencies.
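
The permission-honouring step can be sketched in a few lines: every retrieved chunk keeps the access control list of its source document, and chunks the requesting user cannot read are filtered out before anything reaches the prompt. The data, directory lookup, and retrieval step below are stand-ins, not any specific vector database's API.

    # Toy sketch of permission-aware retrieval for a RAG pipeline.
    retrieved_chunks = [
        {"text": "Q3 revenue forecast...",             "source": "finance/q3.xlsx",  "allowed": {"cfo", "finance-team"}},
        {"text": "Employee handbook: leave policy...", "source": "hr/handbook.pdf",  "allowed": {"all-staff"}},
    ]

    def user_groups(user: str) -> set:
        directory = {"jane": {"all-staff"}, "omar": {"all-staff", "finance-team"}}  # illustrative
        return directory.get(user, set())

    def authorized_context(user: str, chunks: list) -> str:
        """Keep only chunks whose source ACL overlaps the user's groups."""
        groups = user_groups(user)
        permitted = [c["text"] for c in chunks if c["allowed"] & groups]
        return "\n\n".join(permitted)

    # Jane only sees the handbook excerpt; the finance forecast never reaches the prompt.
    print(authorized_context("jane", retrieved_chunks))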


Who Is Attacking Smart Factories? Understanding the Evolving Threat Landscape

Cybercriminals no longer rely on broad, generalized attacks but have begun to tailor their malware specifically for OT systems. For example, they know which files on engineering workstations or MES systems are most important for production and will specifically target them for encryption. This shift has also seen an increase in multi-vector attacks. Attackers might gain initial access through phishing emails but, once inside, use tools that enable them to move seamlessly between IT and OT networks. The goal is no longer just to hold data hostage but to encrypt or destroy files that are crucial to the manufacturing process. With this targeted approach, attackers increase the likelihood that companies will pay the ransom, especially when systems critical to production are held hostage. ... The increasing sophistication of these attacks highlights the need for manufacturers to adopt a holistic approach to cybersecurity. While technical countermeasures like firewalls, endpoint security, and intrusion detection systems are important, they are not enough on their own. A comprehensive security strategy must address both IT and OT environments and recognize the interdependence between these systems. Manufacturers should focus on risk assessment across their entire value chain, from the factory floor to the supply chain and customer-facing systems. 


Legislators demand truth about OPM email server

Erik Avakian, security counselor at Info-Tech Research Group said the “recent development regarding OPM and the alleged issues regarding an email server being deployed on the agency network and emails being distributed by the agency to federal employees raise potential security and privacy concerns that, if substantiated, could be out of sync with well-defined cybersecurity best practices and privacy regulations.” Most important, he said, would be the way in which the system had been deployed onto the federal network, “particularly in light of the many existing US federal government-required processes, procedures, and checks a system would need to undergo before receiving green light approval for such a fast-tracked deployment. There could be fast-track processes in place for such instances.” However, even in such cases, said Avakian, “any deployment of systems or tools would certainly, as best practice, need to be reviewed for security vulnerabilities, and its architecture checked and hardened, at a minimum, to be aligned with the federal security requirements for systems deployed on the network prior to going live.” The question would be whether the processes were followed, he said. “In any case, there could be quite a checklist of issues regarding Compliance with Cybersecurity Frameworks, Best Practices, and the Federal Government’s Memo regarding the Implementation of Zero Trust, to name a few, as well as numerous privacy laws.”


Open-Source AI: Power Shift or Pandora's Box?

"This is no longer just a technological race, it’s a geopolitical one. While open-source models offer accessibility, their full training pipeline and datasets often remain undisclosed. Nations are using AI to influence global markets, trade policies and digital sovereignty," said Amitkumar Shrivastava, global distinguished engineer and head of AI at Fujitsu Consulting India. "The real winners will be those who balance innovation with regulatory foresight and ethical AI practices." While open-source AI fosters innovation, it also raises concerns about security, compliance and ethical risks. Increased accessibility introduces challenges such as misinformation, deepfake generation and unauthorized automation. "DeepSeek is open-source, which is very important, as it allows users to download the models and run them on their own hardware if they have the capacity. We are already seeing others create local installations of DeepSeek models even without GPUs," Professor Balaraman Ravindran, IIT Madras, wrote in his blog. "Assuming that DeepSeek's claims on infrastructure reductions are true, some researchers are still not fully convinced and are in the process of verifying the claims. There will be an immediate breakdown of the monopolistic hold of a few technology giants with deep pockets to control the AI market - much like India developing cheap Corona vaccine," said Dr. Sanjeev Kumar.


The Cost of AI Security

The cost of AI and its security needs is going to be an ongoing conversation for enterprise leaders. “It’s still so early in the cycle that most security organizations are trying to get their arms around what they need to protect, what’s actually different. What do [they] already have in place that can be leveraged?” says Saeedi. Who is a part of these evolving conversations? CISOs, naturally, have a leading role in defining the security controls applied to an enterprise’s AI tools, but given the growing ubiquity of AI a multistakeholder approach is necessary. Other C-suite leaders, the legal team, and the compliance team often have a voice. Saeedi is seeing cross-functional committees forming to assess AI risks, implementation, governance, and budgeting. As these teams within enterprises begin to wrap their heads around various AI security costs, the conversation needs to include AI vendors. “The really key part for any security or IT organization, when [we’re] talking with the vendor is to understand, ‘We’re going to use your AI platform but what are you going to do with our data?’” Is that vendor going to use an enterprise’s data for model training? How is that enterprise’s data secured? How does an AI vendor address the potential security risks associated with the implementation of its tool?

Daily Tech Digest - February 05, 2025


Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." --Philippos


Neural Networks – Intuitively and Exhaustively Explained

The process of thinking within the human brain is the result of communication between neurons. You might receive stimulus in the form of something you saw, then that information is propagated to neurons in the brain via electrochemical signals. The first neurons in the brain receive that stimulus, then each neuron may choose whether or not to "fire" based on how much stimulus it received. "Firing", in this case, is a neuron's decision to send signals to the neurons it’s connected to. ... Neural networks are, essentially, a mathematically convenient and simplified version of neurons within the brain. A neural network is made up of elements called "perceptrons", which are directly inspired by neurons. ... In AI there are many popular activation functions, but the industry has largely converged on three popular ones: ReLU, Sigmoid, and Softmax, which are used in a variety of different applications. Out of all of them, ReLU is the most common due to its simplicity and ability to generalize to mimic almost any other function. ... One of the fundamental ideas of AI is that you can "train" a model. This is done by asking a neural network (which starts its life as a big pile of random data) to do some task. Then, you somehow update the model based on how the model’s output compares to a known good answer.
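
A single perceptron and the activation functions named above fit in a short numpy sketch: a weighted sum of inputs plus a bias, passed through an activation. The weights here are random stand-ins for what training would eventually learn.

    # Minimal numpy sketch of a perceptron with ReLU, Sigmoid, and Softmax activations.
    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def softmax(x):
        e = np.exp(x - np.max(x))      # shift for numerical stability
        return e / e.sum()

    rng = np.random.default_rng(0)
    inputs = np.array([0.2, 0.8, -0.5])          # the "stimulus" arriving at the perceptron
    weights = rng.normal(size=3)                 # how strongly each input counts
    bias = rng.normal()

    pre_activation = weights @ inputs + bias     # weighted sum, analogous to accumulated stimulus
    print("ReLU output:   ", relu(pre_activation))
    print("Sigmoid output:", sigmoid(pre_activation))
    print("Softmax over 3 raw scores:", softmax(np.array([1.0, 2.0, 0.5])))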


Why honeypots deserve a spot in your cybersecurity arsenal

In addition to providing critical threat intelligence for defenders, honeypots can often serve as helpful deception techniques to ensure attackers focus on decoys instead of valuable and critical organizational data and systems. Once malicious activity is identified, defenders can use the findings from the honeypots to look for indicators of compromise (IoC) in other areas of their systems and environments, potentially catching further malicious activity and minimizing the dwell time of attackers. In addition to threat intelligence and attack detection value, honeytokens often have the benefit of minimal false positives, given they are highly customized decoy resources deployed with the intent of not being accessed. This contrasts with broader security tooling, which often suffers from high rates of false positives from low-fidelity alerts and findings that burden security teams and developers. ... Enterprises need to put some thought into the placement of the honeypots. It is common for them to be used in environments and systems that may be potentially easier for attackers to access, such as publicly exposed endpoints and systems that are internet accessible, as well as internal network environments and systems. The former, of course, is likely to get more interaction and provide broader generic insights.
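
The low-false-positive property comes from the fact that nothing legitimate should ever touch a decoy. A minimal sketch: bind a port that no real service uses and log every connection attempt. The port and log path are placeholders, and production honeypots are considerably more involved (protocol emulation, isolation, alerting).

    # Minimal TCP honeypot sketch: any connection to this decoy port is worth investigating.
    import datetime
    import socket

    LISTEN_PORT = 2222          # decoy port, e.g. made to look like a forgotten SSH service
    LOG_FILE = "honeypot.log"

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen()
        print(f"decoy listening on {LISTEN_PORT}")
        while True:
            conn, (addr, port) = srv.accept()
            with conn, open(LOG_FILE, "a") as log:
                stamp = datetime.datetime.utcnow().isoformat()
                log.write(f"{stamp} connection from {addr}:{port}\n")   # raw material for IoC hunting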


IoT Technology: Emerging Trends Impacting Industry And Consumers

An emerging IoT trend is the rise of emotion-aware devices that use sensors and artificial intelligence to detect human emotions through voice, facial expressions or physiological data. For businesses, this opens doors to hyper-personalized customer experiences in industries like retail and healthcare. For consumers, it means more empathetic tech—think stress-relieving smart homes or wearables that detect and respond to anxiety. ... The increasing prevalence of IoT tech means that it is being increasingly deployed into “less connected” environments. As a result, the user experience needs to be adapted so that it’s not wholly dependent on good connectivity—instead, priorities must include how to gracefully handle data gaps and robust fallbacks with missing control instructions. ... IoT systems can now learn user preferences, optimizing everything from home automation to healthcare. For businesses, this means deeper customer engagement and loyalty; for consumers, it translates to more intuitive, seamless interactions that enhance daily life. ... While not a newly emerging trend, the Industrial Internet of Things is an area of focus for manufacturers seeking greater efficiency, productivity and safety. Connecting machines and systems with a centralized work management platform gives manufacturers access to real-time data. 


When digital literacy fails, IT gets the blame

By insisting that requisite digital skills and system education are mastered before a system cutover occurs, the CIO assumes a leadership role in the educational portion of each digital project, even though IT itself may not be doing the training. Where IT should be inserting itself is in the area of system skills training and testing before the system goes live. The goals of a successful digital project should be two-fold: a system that’s complete and ready to use; and a workforce that’s skilled and ready to use it. ... IT business analysts, help desk personnel, IT trainers, and technical support personnel all have people-helping and support skills that can contribute to digital education efforts throughout the company. The more support that users have, the more confidence they will gain in new digital systems and business processes — and the more successful the company’s digital initiatives will be. ... Eventually, most of the technical glitches were resolved, and doctors, patients, and support medical personnel learned how to integrate virtual visits with regular physical visits and with the medical record system. By the time the pandemic hit in 2020, telehealth visits were already well under way. These visits worked because the IT was there, the pandemic created an emergency scenario, and, most importantly, doctors, patients, and medical support personnel were already trained on using these systems to best advantage.


What you need to know about developing AI agents

“The success of AI agents requires a foundational platform to handle data integration, effective process automation, and unstructured data management,” says Rich Waldron, co-founder and CEO of Tray.ai. “AI agents can be architected to align with strict data policies and security protocols, which makes them effective for IT teams to drive productivity gains while ensuring compliance.” ... One option for AI agent development comes directly as a service from platform vendors that use your data to enable agent analysis, then provide the APIs to perform transactions. A second option is from low-code or no-code, automation, and data fabric platforms that can offer general-purpose tools for agent development. “A mix of low-code and pro-code tools will be used to build agents, but low-code will dominate since business analysts will be empowered to build their own solutions,” says David Brooks, SVP of Evangelism at Copado. “This will benefit the business through rapid iteration of agents that address critical business needs. Pro coders will use AI agents to build services and integrations that provide agency.” ... Organizations looking to be early adopters in developing AI agents will likely need to review their data management platforms, development tools, and smarter devops processes to enable developing and deploying agents at scale.


The Path of Least Resistance to Privileged Access Management

While PAM allows organizations to segment accounts, providing a barrier between the user’s standard access and needed privileged access and restricting access to information that is not needed, it also adds a layer of internal and organizational complexity. This is because of the impression that it removes users’ access to files and accounts they have typically had the right to use, and they do not always understand why. It can bring changes to their established processes. They don’t see the security benefit and often resist the approach, seeing it as an obstacle to doing their jobs and causing frustration amongst teams. As such, PAM is perceived to be difficult to introduce because of this friction. ... A significant gap in the PAM implementation process lies in the lack of comprehensive awareness among administrators. They often do not have a complete inventory of all accounts, the associated access levels, their purposes, ownership, or the extent of the security issues they face. ... Consider a scenario where a company has a privileged Windows account with access to 100 servers. If PAM is instructed to discover the scope of this Windows account, it might only identify the servers that have been accessed previously by the account, without revealing the full extent of its access or the actions performed.


Quantum networking advances on Earth and in space

“The most established use case of quantum networking to date is quantum key distribution — QKD — a technology first commercialized around 2003,” says Monga. “Since then, substantial advancements have been achieved globally in the development and production deployment of QKD, which leverages secure quantum channels to exchange encryption keys, ensuring data transfer security over conventional networks.” Quantum key distribution networks are already up and running, and are being used by companies, he says, in the U.S., in Europe, and in China. “Many commercial companies and startups now offer QKD products, providing secure quantum channels for the exchange of encryption keys, which ensures the safe transfer of data over traditional networks,” he says. Companies offering QKD include Toshiba, ID Quantique, LuxQuanta, HEQA Security, Think Quantum, and others. One enterprise already using a quantum network to secure communications is JPMorgan Chase, which is connecting two data centers with a high-speed quantum network over fiber. It also has a third quantum node set up to test next-generation quantum technologies. Meanwhile, the need for secure quantum networks is higher than ever, as quantum computers get closer to prime time.
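
For readers unfamiliar with how QKD yields a shared key, the sketch below simulates the sifting step of BB84, the protocol family underpinning most QKD products: Alice encodes random bits in random bases, Bob measures in random bases, and both keep only the positions where their bases matched. It is an idealized classical simulation with no channel noise, no eavesdropper, and no real quantum hardware, included purely to illustrate why both ends finish with identical key material.

    # Idealized sketch of BB84 basis sifting (no noise, no eavesdropper, no quantum hardware).
    import random

    n = 32
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]   # "+" rectilinear, "x" diagonal
    bob_bases   = [random.choice("+x") for _ in range(n)]

    # With matching bases Bob reads Alice's bit; with mismatched bases his result is random.
    bob_bits = [
        bit if a_base == b_base else random.randint(0, 1)
        for bit, a_base, b_base in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Publicly compare bases (not bits) and keep only the matching positions.
    sifted_alice = [b for b, a, o in zip(alice_bits, alice_bases, bob_bases) if a == o]
    sifted_bob   = [b for b, a, o in zip(bob_bits,  alice_bases, bob_bases) if a == o]

    assert sifted_alice == sifted_bob      # identical shared key material in this idealized run
    print(f"shared key bits: {sifted_alice}")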


What are the Key Challenges in Mobile App Testing?

One of the major issues in mobile app testing is the sheer variety of devices in the market. With numerous models, each having different screen sizes, pixel densities, operating system (OS) versions and hardware specifications, ensuring the app is responsive across all devices becomes a challenge. Testing for compatibility on every device and OS can be tiresome and expensive. While tools like emulators and cloud-based testing platforms can help, it remains essential to conduct tests on real devices to ensure accurate results. ... In addition to device fragmentation, another key challenge is the wide range of OS versions. A device may run one version of an OS while another runs on a different version, leading to inconsistencies in app performance. Just like any other software, mobile apps need to function seamlessly across multiple OS versions, including Android, iOS and other platforms. Furthermore, operating systems are updated frequently, which can cause apps to break or stop functioning. ... Mobile app users interact with apps under various network conditions, including Wi-Fi, 4G, 5G or limited connectivity. Testing how an app performs in different network conditions is crucial to ensure it does not hang or load slowly when the connection is weak.


Reimagining KYC to Meet Regulatory Scrutiny

Implementing AI and ML allows KYC to run in the background rather than having staff manually review information as they can, said Jennifer Pitt, senior analyst for fraud and cybersecurity with Javelin Strategy & Research. “This allows the KYC team to shift to other business areas that require more human interaction like investigations,” Pitt said. Yet use of AI and ML remains low at many banks. Currently, fraudsters and cybercriminals are using generative adversarial networks - machine learning models that create new data that mirrors a training set - to make fraud less detectable. Fraud professionals should leverage generative adversarial networks to create large datasets that closely mirror actual fraudulent behavior. This process involves using a generator to create synthetic transaction data and a discriminator to distinguish between real and synthetic data. By training these models iteratively, the generator improves its ability to produce realistic fraudulent transactions, allowing fraud professionals to simulate emerging fraud types and account takeovers, and enhance detection models’ sensitivity to these evolving threats. Instead of waiting to gather sufficient historical data from known fraudulent behaviors, GANs enable a more proactive approach, helping fraud teams quickly understand new fraud trends and patterns, Pitt said.
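
The generator/discriminator loop Pitt describes can be sketched with a toy PyTorch example trained on made-up three-feature "transactions" (amount, hour, risk flag). This assumes the torch package is installed; a real GAN for fraud simulation would need far richer features, careful evaluation, and a privacy review before use.

    # Toy PyTorch GAN: a generator learns to mimic a made-up transaction distribution
    # while a discriminator learns to tell real from synthetic samples.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))               # noise -> fake transaction
    D = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()) # transaction -> P(real)
    bce = nn.BCELoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

    def sample_real(batch):   # stand-in for a batch of real transactions
        return torch.randn(batch, 3) * torch.tensor([50.0, 4.0, 0.2]) + torch.tensor([120.0, 14.0, 0.1])

    for step in range(1000):
        real = sample_real(128)
        fake = G(torch.randn(128, 8)).detach()
        d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()     # teach D to separate real from synthetic

        g_loss = bce(D(G(torch.randn(128, 8))), torch.ones(128, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()     # teach G to fool D

    print("synthetic sample:", G(torch.randn(1, 8)).detach().numpy())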


How Agentic AI Will Transform Banking (and Banks)

Agentic AI has two intertwined vectors. For banks, one path is internal, and focused on operational efficiency for tasks including the automation of routine data entry and compliance and regulatory checks, summaries of email and reports, and the construction of predictive models for trading and risk management to bolster insights into market dynamics, fraud and credit and liquidity risk. The other path is consumer facing, and revolves around managing customer relationships, from automated help desks staffed by chatbots to personalized investment portfolio recommendations. Both trajectories aim to improve efficiency and reduce costs. Agentic AI "could have a bigger impact on the economy and finance than the internet era," Citigroup wrote in a January 2025 report that calls the technology the "Do It For Me" Economy. ... Meanwhile, automated AI decisions could inadvertently violate laws and regulations on consumer protection, anti-money laundering or fair lending laws. Agentic AI that can instruct an agent to make a trade based on bad data or assumptions could lead to financial losses and create systemic risk within the banking system. "Human oversight is still needed to oversee inputs and review the decisioning process," Davis says. 

Daily Tech Digest - February 04, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Technology skills gap plagues industries, and upskilling is a moving target

“The deepening threat landscape and rapidly evolving high-momentum technologies like AI are forcing organizations to move with lightning speed to fill specific gaps in their job architectures, and too often they are stumbling,” said David Foote, chief analyst at consultancy Foote Partners. To keep up with the rapidly changing landscape, Gartner suggests that organizations invest in agile learning for tech teams. “In the context of today’s AI-fueled accelerated disruption, many business leaders feel learning is too slow to respond to the volume, variety and velocity of skills needs,” said Chantal Steen, a senior director in Gartner’s HR practice. “Learning and development must become more agile to respond to changes faster and deliver learning more rapidly and more cost effectively.” Studies from staffing firm ManpowerGroup, hiring platform Indeed, and Deloitte consulting show that tech hiring will focus on candidates with flexible skills to meet evolving demands. “Employers know a skilled and adaptable workforce is key to navigating transformation, and many are prioritizing hiring and retaining people with in-demand flexible skills that can flex to where demand sits,” said Jonas Prising, ManpowerGroup chair and CEO.


Mixture of Experts (MoE) Architecture: A Deep Dive & Comparison of Top Open-Source Offerings

The application of MoE to open-source LLMs offers several key advantages. Firstly, it enables the creation of more powerful and sophisticated models without incurring the prohibitive costs associated with training and deploying massive, single-model architectures. Secondly, MoE facilitates the development of more specialized and efficient LLMs, tailored to specific tasks and domains. This specialization can lead to significant improvements in performance, accuracy, and efficiency across a wide range of applications, from natural language translation and code generation to personalized education and healthcare. The open-source nature of MoE-based LLMs promotes collaboration and innovation within the AI community. By making these models accessible to researchers, developers, and businesses, MoE fosters a vibrant ecosystem of experimentation, customization, and shared learning. ... Integrating MoE architecture into open-source LLMs represents a significant step forward in the evolution of artificial intelligence. By combining the power of specialization with the benefits of open-source collaboration, MoE unlocks new possibilities for creating more efficient, powerful, and accessible AI models that can revolutionize various aspects of our lives.
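
For readers unfamiliar with the mechanics behind these advantages, the following PyTorch sketch shows the core idea of MoE: a learned router selects the top-k experts per token, so only a fraction of the model's parameters run for each input. Layer sizes and expert counts here are illustrative assumptions, not those of any particular open-source model.

    # Minimal sketch of a top-k gated Mixture-of-Experts layer (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoELayer(nn.Module):
        def __init__(self, d_model=64, n_experts=4, top_k=2):
            super().__init__()
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)
            ])
            self.gate = nn.Linear(d_model, n_experts)  # router: scores each expert per token
            self.top_k = top_k

        def forward(self, x):                      # x: (tokens, d_model)
            scores = self.gate(x)                  # (tokens, n_experts)
            weights, idx = scores.topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)   # normalize only over the selected experts
            out = torch.zeros_like(x)
            for k in range(self.top_k):            # route each token to its k-th chosen expert
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e
                    if mask.any():
                        out[mask] += weights[mask, k:k+1] * expert(x[mask])
            return out

    tokens = torch.randn(10, 64)
    print(MoELayer()(tokens).shape)  # torch.Size([10, 64]) -- only 2 of 4 experts run per token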


The DeepSeek Disruption and What It Means for CIOs

The emergence of DeepSeek has also revived a long-standing debate about open-source AI versus proprietary AI. Open-source AI is not a silver bullet. CIOs need to address critical risks, as open-source AI models, if not secured properly, can be exposed to grave cyberthreats and adversarial attacks. While DeepSeek currently shows extraordinary efficiency, it requires internal infrastructure, unlike GPT-4, which can seamlessly scale on OpenAI's cloud. Open-source AI models also lack vendor support and readily available skills, requiring users to build their own expertise, which can be demanding. "What happened with DeepSeek is actually super bullish. I look at this transition as an opportunity rather than a threat," said Steve Cohen, founder of Point72. ... Regulatory non-compliance adds another challenge, as many governments restrict or disallow sensitive enterprise data from being processed by Chinese technologies. The possibility of a backdoor can't be ruled out, and this could expose enterprises to additional risk. CIOs need to conduct extensive security audits before deploying DeepSeek. Organizations can implement safeguards such as on-premises deployment to avoid data exposure. Strict encryption protocols can help keep AI interactions confidential, and rigorous security audits help ensure the model's safety before it is deployed into business workflows.


Why GreenOps will succeed where FinOps is failing

The cost-control focus fails to engage architects and engineers in rethinking how systems are designed, built and operated for greater efficiency. This lack of engagement results in inertia and minimal progress. For example, the database team we worked with at an organization new to the cloud launched all of its AWS RDS database servers, from dev through production, incurring a $600K-a-month cloud bill nine months before the scheduled production launch. The overburdened team was not thinking about optimizing costs, but rather about optimizing its own time and getting out of the way of the migration team as quickly as possible. ... GreenOps — formed by merging FinOps, sustainability and DevOps — addresses the limitations of FinOps while integrating sustainability as a core principle. Green computing contributes to GreenOps by emphasizing energy-efficient design, resource optimization and the use of sustainable technologies and platforms. This foundational focus ensures that every system built under GreenOps principles is not only cost-effective but also minimizes its environmental footprint, aligning technological innovation with ecological responsibility. Moreover, we’ve found that providing emissions feedback to architects and engineers is a bigger motivator than cost to inspire them to design more efficient systems and build automation to shut down underutilized resources.
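
As one illustration of the kind of shutdown automation mentioned above, here is a minimal boto3 sketch that stops tagged dev and test RDS instances outside working hours. The tag scheme, the hours and the decision to target RDS are assumptions for illustration, not a prescription from the article.

    # Hedged sketch: stop tagged non-production RDS instances outside business hours.
    # Tag keys/values ("env": "dev") and the schedule are assumptions, not AWS defaults.
    import boto3
    from datetime import datetime, timezone

    rds = boto3.client("rds")

    def stop_idle_dev_databases():
        hour = datetime.now(timezone.utc).hour
        if 7 <= hour < 19:          # assumed working hours; adjust per team
            return
        for db in rds.describe_db_instances()["DBInstances"]:
            if db["DBInstanceStatus"] != "available":
                continue
            tags = rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]
            env = {t["Key"]: t["Value"] for t in tags}.get("env", "")
            if env in ("dev", "test"):
                rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
                print(f"Stopped {db['DBInstanceIdentifier']} (env={env})")

    if __name__ == "__main__":
        stop_idle_dev_databases()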


Best Practices for API Rate Limits and Quotas

Unlike short-term rate limits, the goal of quotas is to enforce business terms, such as monetizing your APIs and protecting your business from high-cost overruns by customers. They measure customer utilization of your API over longer durations, such as per hour, per day, or per month. Quotas are not designed to prevent a spike from overwhelming your API. Rather, quotas regulate your API’s resources by ensuring a customer stays within their agreed contract terms. ... Even a protection mechanism like rate limiting can have errors. For example, a bad network connection to Redis could cause reads of rate-limit counters to fail. In such scenarios, it’s important not to artificially reject all requests or lock out users just because your Redis cluster is inaccessible. Your rate-limiting implementation should fail open rather than fail closed, meaning all requests are allowed while the rate-limiting implementation is faulting. This also means rate limiting is not a workaround for poor capacity planning: you should still have sufficient capacity to handle these requests, or design your system to scale to absorb a large influx of new requests. This can be done through auto-scaling, timeouts, and automatic trips that enable your API to keep functioning.
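
A minimal sketch of the fail-open behavior described above, using redis-py and a fixed-window counter; the key naming, limit and window size are illustrative assumptions rather than recommendations.

    # Sketch of a fail-open fixed-window rate limiter using redis-py.
    # If Redis is unreachable, the request is allowed rather than rejected (fail open).
    import redis

    r = redis.Redis(host="localhost", port=6379, socket_timeout=0.1)

    def allow_request(api_key: str, limit: int = 100, window_seconds: int = 60) -> bool:
        key = f"ratelimit:{api_key}"
        try:
            count = r.incr(key)                 # atomic counter for this window
            if count == 1:
                r.expire(key, window_seconds)   # start the window on the first request
            return count <= limit
        except redis.exceptions.RedisError:
            # Fail open: a broken limiter should not take the API down with it.
            # Capacity planning, auto-scaling and timeouts still have to absorb the load.
            return True

    if allow_request("customer-123"):
        print("request allowed")
    else:
        print("429 Too Many Requests")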


Protecting Ultra-Sensitive Health Data: The Challenges

Protecting ultra-sensitive information "is an incredibly confusing and complicated and evolving part of the law," said regulatory attorney Kirk Nahra of the law firm WilmerHale. "HIPAA generally does not distinguish between categories of health information," he said. "There are exceptions - including the recent Dobbs rule - but these are not fundamental in their application," he said. Privacy protections related to abortion procedures are perhaps the most hotly debated type of patient information. For instance, last June - in response to the Supreme Court's June 2022 Dobbs ruling, which overturned the national right to abortion - the Biden administration's U.S. Department of Health and Human Services modified the HIPAA Privacy Rule to add safeguards for the access, use and disclosure of reproductive health information. The rule is aimed at protecting women from the use or disclosure of their reproductive health information when it is sought to investigate or impose liability on individuals, healthcare providers or others who seek, obtain, provide or facilitate reproductive healthcare that is lawful under the circumstances in which it is provided. But that rule is being challenged in federal court by 15 state attorneys general seeking to revoke the regulations.


Evolving threat landscape, rethinking cyber defense, and AI: Opportunities and risk

Businesses are firmly in attackers’ crosshairs. Financially motivated cybercriminals conduct ransomware attacks with record-breaking ransoms being paid by companies seeking to avoid business interruption. Others, including nation-state hackers, infiltrate companies to steal intellectual property and trade secrets to gain commercial advantage over competitors. Further, we regularly see critical infrastructure being targeted by nation-state cyberattacks designed to act as sleeper cells that can be activated in times of heightened tension. Companies are on the back foot. ... As zero trust disrupts obsolete firewall and VPN-based security, legacy vendors are deploying firewalls and VPNs as virtual machines in the cloud and calling it zero trust architecture. This is akin to DVD hardware vendors deploying DVD players in a data center and calling it Netflix! It gives a false sense of security to customers. Organizations need to make sure they are really embracing zero trust architecture, which treats everyone as untrusted and ensures users connect to specific applications or services, rather than a corporate network. ... Unfortunately, the business world’s harnessing of AI for cyber defense has been slow compared to the speed of threat actors harnessing it for attacks. 


Six essential tactics data centers can follow to achieve more sustainable operations

By adjusting energy consumption based on real-time demand, data centers can significantly enhance their operational efficiency. For example, during periods of low activity, power can be conserved by reducing energy use, thus minimizing waste without compromising performance. This includes dynamic power management technologies in switch and router systems, such as shutting down unused line cards or ports and controlling fan speeds to optimize energy use based on current needs. Conversely, during peak demand, operations can be scaled up to meet increased requirements, ensuring consistent and reliable service levels. Doing so not only reduces unnecessary energy expenditure, but also contributes to sustainability efforts by lowering the environmental impact associated with energy-intensive operations. ... Heat generated from data center operations can be captured and repurposed to provide heating for nearby facilities and homes, transforming waste into a valuable resource. This approach promotes a circular energy model, where excess heat is redirected instead of discarded, reducing the environmental impact. Integrating data centers into local energy systems enhances sustainability and offers tangible benefits to surrounding areas and communities whilst addressing broader energy efficiency goals.


The Engineer’s Guide to Controlling Configuration Drift

“Preventing configuration drift is the bedrock for scalable, resilient infrastructure,” comments Mayank Bhola, CTO of LambdaTest, a cloud-based testing platform that provides instant infrastructure. “At scale, even small inconsistencies can snowball into major operational inefficiencies. We encountered these challenges [user-facing impact] as our infrastructure scaled to meet growing demands. Tackling this challenge head-on is not just about maintaining order; it’s about ensuring the very foundation of your technology is reliable. And so, by treating infrastructure as code and automating compliance, we at LambdaTest ensure every server, service, and setting aligns with our growth objectives, no matter how fast we scale.” Adopting drift detection and remediation strategies is imperative for maintaining resilient infrastructure. ... The policies you set at the infrastructure level, such as those for SSH access, add another layer of security. Ansible allows you to define policies like removing root access, changing the default SSH port, and setting user command permissions. “It’s easy to see who has access and what they can execute,” Kampa remarks. “This ensures resilient infrastructure, keeping things secure and allowing you to track who did what if something goes wrong.”
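
As a rough illustration of drift detection at the host level, the Python sketch below compares a declared sshd policy against the live config file and reports mismatches. The desired values and file path are assumptions for the example; in practice a tool such as Ansible would both detect and remediate this.

    # Minimal drift-detection sketch: compare declared sshd settings against the live config.
    DESIRED_SSHD = {
        "PermitRootLogin": "no",    # policy: remove root access over SSH
        "Port": "2222",             # policy: move off the default port 22
        "PasswordAuthentication": "no",
    }

    def parse_sshd_config(path="/etc/ssh/sshd_config"):
        actual = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                parts = line.split(None, 1)     # "Key Value" pairs, whitespace-separated
                if len(parts) == 2:
                    actual[parts[0]] = parts[1].strip()
        return actual

    def report_drift(desired, actual):
        drifted = {k: (v, actual.get(k)) for k, v in desired.items() if actual.get(k) != v}
        for key, (want, got) in drifted.items():
            print(f"DRIFT {key}: expected {want!r}, found {got!r}")
        return drifted

    if __name__ == "__main__":
        report_drift(DESIRED_SSHD, parse_sshd_config())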


Strategies for mitigating bias in AI models

The need to address bias in AI models stems from the fundamental principle of fairness. AI systems should treat all individuals equitably, regardless of their background. However, if the training data reflects existing societal biases, the model will likely reproduce and even exaggerate those biases in its outputs. For instance, if a facial recognition system is primarily trained on images of one demographic, it may exhibit lower accuracy rates for other groups, potentially leading to discriminatory outcomes. Similarly, a natural language processing model trained on predominantly Western text may struggle to understand or accurately represent nuances in other languages and cultures. ... Incorporating contextual data is essential for AI systems to provide relevant and culturally appropriate responses. Beyond basic language representation, models should be trained on datasets that capture the history, geography, and social issues of the populations they serve. For instance, an AI system designed for India should include data on local traditions, historical events, legal frameworks, and social challenges specific to the region. This ensures that AI-generated responses are not only accurate but also culturally sensitive and context-aware. Additionally, incorporating diverse media formats such as text, images, and audio from multiple sources enhances the model’s ability to recognise and adapt to varying communication styles.
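
One simple, commonly used check for the accuracy disparities described above is to evaluate a model separately for each demographic group and compare the results. The sketch below assumes a pandas evaluation table with hypothetical "group", "label" and "prediction" columns; real fairness audits use richer metrics, but the per-group breakdown is the starting point.

    # Sketch of a basic bias check: compare model accuracy across demographic groups.
    import pandas as pd

    def accuracy_by_group(df: pd.DataFrame) -> pd.Series:
        # Fraction of correct predictions within each group
        return df.groupby("group").apply(lambda g: (g["label"] == g["prediction"]).mean())

    # Hypothetical evaluation results
    eval_df = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "label":      [1,   0,   1,   1,   0,   1],
        "prediction": [1,   0,   1,   0,   0,   0],
    })

    per_group = accuracy_by_group(eval_df)
    print(per_group)
    print("accuracy gap:", per_group.max() - per_group.min())  # large gaps flag groups to remediate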