Daily Tech Digest - February 06, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Here’s How Standardization Can Fix the Identity Security Problem

Fragmentation in identity security doesn’t only waste resources; it also leaves businesses exposed to threat actors, leading to potential reputational and financial damage if systems are compromised. Misconfigurations often arise when teams are pressured to deliver quickly without adequate frameworks. Fragmentation also forces teams to juggle mismatched tools, creating gaps in oversight. These gaps become weak points for attackers, leading to cascading failures. ... Standardization transforms the complexity of identity management into a straightforward, structured process. Instead of piecing together bespoke solutions, teams can leverage established frameworks to deliver robust, scalable and future-proof security. ... Developers often need to weigh short-term challenges against long-term gains. Adopting standardized identity frameworks is one decision where the long-term benefits are clear. Increased efficiency, security and scalability contribute to a more sustainable development process. Standardization equips us with ready-to-use solutions for essential features, freeing us to focus on innovation. It also enables applications to meet compliance requirements without added strain on teams. By investing in frameworks like IPSIE, we can future-proof our systems while reducing the burden on individual developers.
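
To ground that, here is a minimal sketch of what leaning on an established identity standard looks like in code: validating an OpenID Connect ID token against the provider's published signing keys instead of hand-rolling session logic. The issuer, audience, and the use of the PyJWT library are illustrative assumptions, not anything specified by IPSIE.

# Minimal sketch: verifying an OpenID Connect ID token with PyJWT.
# The issuer and audience values are hypothetical placeholders.
import jwt
from jwt import PyJWKClient

ISSUER = "https://idp.example.com"            # hypothetical OIDC provider
AUDIENCE = "my-client-id"                     # hypothetical client ID
JWKS_URL = f"{ISSUER}/.well-known/jwks.json"

def verify_id_token(token: str) -> dict:
    """Validate signature, issuer, audience, and expiry via the standard flow."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )

Every application that adopts this pattern inherits the same key rotation, expiry, and audience checks, which is exactly the consistency argument made above.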


How Data Contracts Support Collaboration between Data Teams

Data contracts are to data what APIs are to software systems, Christ said. They are an interface specification between a data provider and their data consumers. Data contracts specify the provided data model with its syntax, format, and semantics, but also contain data quality guarantees, service-level objectives, and terms and conditions for using the data, Christ mentioned. They also define the owner of the provided data product, who is responsible if there are any questions or issues, he added. Data mesh is an important driver for data contracts, as data mesh introduces distributed ownership of data products, Christ said. Before that, we usually had just one central team that was responsible for all data and BI activities, with no need to specify interfaces with other teams. ... Data providers benefit by gaining visibility into which consumers are accessing their data. Permissions can be automated accordingly, and when changes need to be implemented in a data product, a new version of the data contract can be introduced and communicated to the consumers, Christ said. With data contracts, we have very high-quality metadata, Christ said. This metadata can be further leveraged to optimize governance processes or build an enterprise data marketplace, enabling better discoverability, transparency, and automated access management across the organization to make data available for more teams.
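
As an illustration of the metadata a data contract carries, the sketch below models a contract as a plain Python structure with the elements Christ lists: schema, quality guarantees, service-level objectives, terms, and an owner. The field names are hypothetical and not taken from any particular data contract standard.

# Hypothetical data contract for an "orders" data product, carrying the
# elements described above: owner, schema, quality, SLOs, and terms.
orders_contract = {
    "owner": "checkout-team",          # responsible for questions and issues
    "version": "1.0.0",
    "schema": {
        "order_id": "string",
        "customer_id": "string",
        "order_total": "decimal",
        "placed_at": "timestamp",
    },
    "quality": {"order_id": "not_null_and_unique"},
    "slo": {"freshness_minutes": 60, "availability": "99.9%"},
    "terms": {"usage": "internal analytics only", "pii": False},
}

def check_record(record: dict, contract: dict) -> list[str]:
    """Return schema violations; a real check would also test types and quality rules."""
    expected = contract["schema"].keys()
    missing = [f for f in expected if f not in record]
    extra = [f for f in record if f not in expected]
    return [f"missing field: {f}" for f in missing] + [f"unexpected field: {f}" for f in extra]

print(check_record({"order_id": "o-1", "customer_id": "c-9"}, orders_contract))
# -> ['missing field: order_total', 'missing field: placed_at']

Because the contract is machine-readable, the same document can drive the automated permissioning, versioning, and marketplace discoverability described above.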


How Agentic AI will be Weaponized for Social Engineering Attacks

To combat advanced social engineering attacks, consider building or acquiring an AI agent that can assess changes to the attack surface, detect irregular activities indicating malicious actions, analyze global feeds to detect threats early, monitor deviations in user behavior to spot insider threats, and prioritize patching based on vulnerability trends. ... Security awareness training is a non-negotiable component of bolstering human defenses. Organizations must go beyond traditional security training and leverage tools that can assign engaging content to users based on risk scores and failure rates, dynamically generate quizzes and social engineering scenarios based on the latest threats, and trigger bite-sized refreshers. ... Human intuition and vigilance are critical in combating social engineering threats. Organizations must double down on fostering a culture of cybersecurity, educating employees on the risks of social engineering and the impact on the organization, training them to identify and report such threats, and empowering them with tools that can improve security behavior. Gartner predicts that by 2028, a third of our interactions with AI will shift from simply typing commands to fully engaging with autonomous agents that can act on their own goals and intentions. Obviously, cybercriminals won’t be far behind in exploiting these advancements for their misdeeds.
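
One of the defensive capabilities listed above, monitoring deviations in user behavior, reduces in its simplest form to scoring today's activity against a user's own baseline. The feature (daily download volume) and the alert threshold below are invented for illustration.

# Toy sketch of behavioral deviation scoring: flag activity that sits far
# from a user's own baseline. Feature and threshold are hypothetical.
from statistics import mean, stdev

def deviation_score(history: list[float], today: float) -> float:
    """Z-score of today's value against the user's historical baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) / sigma if sigma else 0.0

# e.g. megabytes downloaded per day over the past two weeks
history = [120, 95, 130, 110, 105, 125, 118, 102, 111, 97, 128, 115, 109, 121]
today = 980  # sudden bulk download

if deviation_score(history, today) > 3.0:  # arbitrary alert threshold
    print("flag for review: activity deviates sharply from baseline")

Production systems would track many features per user and account for seasonality, but the per-user baseline idea is the same.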

As businesses expand their cloud services and integrate AI, IoT, and other digital tools, the attack surface grows exponentially. Cybercriminals are exploiting this vast surface with increasingly sophisticated tactics, including AI-driven attacks that can bypass traditional security measures. ... Lack of visibility across multicloud environments: Many businesses rely on a combination of private, public, and hybrid cloud solutions, which can create visibility gaps. Security teams struggle to manage and monitor resources across various platforms, making it difficult to detect vulnerabilities or respond to threats in real time. ... Misconfigurations and human error: Cloud misconfigurations remain one of the leading causes of data breaches. ... Ongoing risk assessments are essential for identifying vulnerabilities and understanding the potential attack vectors in cloud environments. Regular penetration testing can help organisations identify and patch security gaps proactively. These assessments, combined with continuous monitoring, ensure the security posture evolves alongside emerging threats. ... Centralised threat detection and response: Implementing a centralised security platform that aggregates data from multiple cloud environments can streamline threat detection and response. By correlating network events with cloud activities, security teams can gain deeper insights into potential risks and reduce the mean time to resolution (MTTR) for incidents.
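
At its simplest, the centralised detection idea above means normalising events from each cloud into one stream and correlating them on shared indicators. The event shapes below are invented for illustration; a real platform would ingest provider-specific logs.

# Toy sketch of centralised correlation: merge events from several clouds
# and group them by a shared indicator (here, source IP). Event shapes
# are hypothetical stand-ins for provider-specific log feeds.
from collections import defaultdict

aws_events = [{"src_ip": "203.0.113.7", "action": "ConsoleLogin", "ok": False}]
azure_events = [{"src_ip": "203.0.113.7", "action": "KeyVaultRead", "ok": True}]
onprem_events = [{"src_ip": "198.51.100.2", "action": "VPNConnect", "ok": True}]

by_ip = defaultdict(list)
for source, events in [("aws", aws_events), ("azure", azure_events), ("onprem", onprem_events)]:
    for e in events:
        by_ip[e["src_ip"]].append((source, e["action"], e["ok"]))

# An IP that fails a login in one cloud and then acts in another is more
# interesting than either event viewed in isolation.
for ip, trail in by_ip.items():
    if len({s for s, _, _ in trail}) > 1:
        print(f"cross-cloud activity from {ip}: {trail}")

Correlating across sources this way is what closes the visibility gaps described above and shortens MTTR.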


Is 2025 the year of quantum computing?

As quantum computing research gradually inches toward real-world usability, you might wonder where we’ll see the impacts of this technology, both short- and long-term. One of the most immediately important areas is cryptography. Because a quantum computer can hold many states in superposition, an algorithm like Shor’s can factor large numbers dramatically faster than any known classical method, using superposition and interference across candidate answers rather than testing them one at a time. There is also a tantalizing potential for crossover between machine learning and quantum computing. Here, the probabilistic nature of neural networks and AI in general seems to lend itself to being modeled at a more fundamental level, and with far greater efficiency, using the hardware capabilities of quantum computers. How powerful would an AI system be if it rested on quantum hardware? Another area is the development of alternative energy sources, including fusion. Using matter itself to model reality opens up possibilities we can’t yet fully predict. Drug discovery and material design are also areas of interest for quantum calculations. At the hardware level, quantum systems allow us to use matter itself to model the complexity of designing useful matter. These and other exciting developments, especially in error correction, seem to indicate quantum computing’s time is finally coming.
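
For the curious, the number theory behind that factoring claim is compact: Shor's algorithm finds the period r of a^x mod N, and the factors then fall out from classical gcd computations. The sketch below does the period finding by brute force, which is precisely the step a quantum computer accelerates.

# Classical sketch of the math behind Shor's algorithm: find the period r
# of a^x mod n, then recover factors from gcd(a^(r/2) ± 1, n). A quantum
# computer speeds up only the period-finding step.
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod n), found by brute force."""
    x, r = a % n, 1
    while x != 1:
        x, r = (x * a) % n, r + 1
    return r

def factor(n: int, a: int):
    """Return a nontrivial factor pair of n, or None if a is an unlucky base."""
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)  # the guess already shares a factor
    r = find_period(a, n)
    if r % 2:
        return None                        # odd period: pick another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                        # trivial root: pick another a
    return gcd(y - 1, n), gcd(y + 1, n)

print(factor(15, 7))  # -> (3, 5)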


The overlooked risks of poor data hygiene in AI-driven organizations

A significant risk posed by AI-enabled apps is called ‘AI oversharing,’ where enterprise applications expose sensitive information through poorly defined access controls. This is especially prevalent in retrieval-augmented generation (RAG) applications when original source permissions aren’t honoured throughout the system. Imagine for a minute that you are an enterprise with millions of documents containing decades of enterprise knowledge, and you want to leverage AI through a RAG-based architecture. A typical approach is to load all of those documents into a vector database. If you exposed that data through an AI chatbot without honouring the original permissions on those documents, then anyone issuing a prompt could access any of that data. ... Organizations need to implement a methodical process for assessing and preparing data for AI applications, as sophisticated attacks like prompt injection and unauthorized data access become more prevalent. Begin with a thorough inventory of your data stores, including file and document stores, support and ticketing systems, and any other sources from which you’ll draw your enterprise data. Then work to understand its potential use in AI applications and identify critical gaps or inconsistencies.
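
The oversharing failure mode described above has a straightforward structural fix: carry each source document's access control list into the vector store and filter retrieved chunks against the requesting user before anything reaches the model. The sketch below is a hypothetical illustration of that pattern, not any particular vector database's API.

# Sketch of permission-aware RAG retrieval: every chunk keeps the ACL of its
# source document, and results are filtered per user *before* being handed
# to the LLM. The store and scores are stand-ins, not a real vector DB API.

chunks = [
    {"text": "Q3 layoff plan ...",   "allowed": {"hr-leads"},      "score": 0.91},
    {"text": "Public API guide ...", "allowed": {"all-employees"}, "score": 0.88},
    {"text": "M&A target list ...",  "allowed": {"exec-team"},     "score": 0.80},
]

def retrieve(query: str, user_groups: set[str], k: int = 2) -> list[str]:
    """Return top-k chunks the user may see; scores here are precomputed
    stand-ins for real vector similarity against `query`."""
    visible = [c for c in chunks if c["allowed"] & user_groups]
    visible.sort(key=lambda c: c["score"], reverse=True)
    return [c["text"] for c in visible[:k]]

# An intern's prompt can no longer surface HR or M&A material.
print(retrieve("upcoming plans", user_groups={"all-employees"}))
# -> ['Public API guide ...']

The key design choice is that filtering happens at retrieval time, so no prompt, however cleverly injected, can pull chunks the user was never entitled to read.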


Who Is Attacking Smart Factories? Understanding the Evolving Threat Landscape

Cybercriminals no longer rely on broad, generalized attacks but have begun to tailor their malware specifically for OT systems. For example, they know which files on engineering workstations or MES systems are most important for production and will specifically target them for encryption. This shift has also seen an increase in multi-vector attacks. Attackers might gain initial access through phishing emails but, once inside, use tools that enable them to move seamlessly between IT and OT networks. The goal is no longer just to hold data hostage but to encrypt or destroy files that are crucial to the manufacturing process. With this targeted approach, attackers increase the likelihood that companies will pay the ransom, especially when systems critical to production are held hostage. ... The increasing sophistication of these attacks highlights the need for manufacturers to adopt a holistic approach to cybersecurity. While technical countermeasures like firewalls, endpoint security, and intrusion detection systems are important, they are not enough on their own. A comprehensive security strategy must address both IT and OT environments and recognize the interdependence between these systems. Manufacturers should focus on risk assessment across their entire value chain, from the factory floor to the supply chain and customer-facing systems. 


Legislators demand truth about OPM email server

Erik Avakian, security counselor at Info-Tech Research Group, said the “recent development regarding OPM and the alleged issues regarding an email server being deployed on the agency network and emails being distributed by the agency to federal employees raise potential security and privacy concerns that, if substantiated, could be out of sync with well-defined cybersecurity best practices and privacy regulations.” Most important, he said, would be the way in which the system had been deployed onto the federal network, “particularly in light of the many existing US federal government-required processes, procedures, and checks a system would need to undergo before receiving green light approval for such a fast-tracked deployment. There could be fast-track processes in place for such instances.” However, even in such cases, said Avakian, “any deployment of systems or tools would certainly, as best practice, need to be reviewed for security vulnerabilities, and its architecture checked and hardened, at a minimum, to be aligned with the federal security requirements for systems deployed on the network prior to going live.” The question would be whether the processes were followed, he said. “In any case, there could be quite a checklist of issues regarding Compliance with Cybersecurity Frameworks, Best Practices, and the Federal Government’s Memo regarding the Implementation of Zero Trust, to name a few, as well as numerous privacy laws.”


Open-Source AI: Power Shift or Pandora's Box?

"This is no longer just a technological race, it’s a geopolitical one. While open-source models offer accessibility, their full training pipeline and datasets often remain undisclosed. Nations are using AI to influence global markets, trade policies and digital sovereignty," said Amitkumar Shrivastava, global distinguished engineer and head of AI at Fujitsu Consulting India. "The real winners will be those who balance innovation with regulatory foresight and ethical AI practices." While open-source AI fosters innovation, it also raises concerns about security, compliance and ethical risks. Increased accessibility introduces challenges such as misinformation, deepfake generation and unauthorized automation. "DeepSeek is open-source, which is very important, as it allows users to download the models and run them on their own hardware if they have the capacity. We are already seeing others create local installations of DeepSeek models even without GPUs," Professor Balaraman Ravindran, IIT Madras, wrote in his blog. "Assuming that DeepSeek's claims on infrastructure reductions are true, some researchers are still not fully convinced and are in the process of verifying the claims. There will be an immediate breakdown of the monopolistic hold of a few technology giants with deep pockets to control the AI market - much like India developing cheap Corona vaccine," said Dr. Sanjeev Kumar.


The Cost of AI Security

The cost of AI and its security needs is going to be an ongoing conversation for enterprise leaders. “It’s still so early in the cycle that most security organizations are trying to get their arms around what they need to protect, what’s actually different. What do [they] already have in place that can be leveraged?” says Saeedi. Who is a part of these evolving conversations? CISOs, naturally, have a leading role in defining the security controls applied to an enterprise’s AI tools, but given the growing ubiquity of AI, a multi-stakeholder approach is necessary. Other C-suite leaders, the legal team, and the compliance team often have a voice. Saeedi is seeing cross-functional committees forming to assess AI risks, implementation, governance, and budgeting. As these teams within enterprises begin to wrap their heads around various AI security costs, the conversation needs to include AI vendors. “The really key part for any security or IT organization, when [we’re] talking with the vendor is to understand, ‘We’re going to use your AI platform but what are you going to do with our data?’” Is that vendor going to use an enterprise’s data for model training? How is that enterprise’s data secured? How does an AI vendor address the potential security risks associated with the implementation of its tool?
