
Daily Tech Digest - February 19, 2025


Quote for the day:

"Go confidently in the direction of your dreams. Live the life you have imagined." -– Henry David Thoreau


Why Observability Needs To Go Headless

Not all logs have long-term value, but that’s one of the advantages of headless observability and decoupled storage. Teams have the freedom and flexibility to determine which logs should be retained for longer periods. Web application firewall (WAF) and other security logs can be retained over the long term and made available to cybersecurity teams and threat hunters. Other application logs can provide long-term insights into how resources are being used for capacity planning and anomaly detection. Let’s take a closer look at a real, tangible use case where observability data can be valuable for other teams: real user monitoring (RUM). In the realm of observability, RUM allows teams to proactively monitor how end users are experiencing web applications. Issues like slow page loads can be mitigated before they frustrate users. Beyond observability, RUM data can also provide insights into how your end users are interacting with your brand and your products. This data is invaluable for marketing, advertising and leadership teams that need to plan strategy. ... As a real-world example, many enterprises use CDN log data for real user monitoring. In the short term, monitoring CDNs is important for ensuring good user experiences and fast loading times of digital assets. However, being able to retain huge volumes of log data long term and cost-effectively provides certain advantages to enterprises.
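To make the per-log-type retention idea concrete, here is a minimal Python sketch; the log types, retention windows, and function names are illustrative assumptions, not part of any particular observability product:

```python
# Hypothetical sketch: per-log-type retention in a decoupled (headless) log store.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = {
    "waf": 365 * 2,   # security logs kept long-term for threat hunting
    "cdn": 365,       # CDN/RUM logs kept for usage and capacity analysis
    "app": 30,        # routine application logs expire quickly
}

def is_expired(log_type: str, written_at: datetime) -> bool:
    """Return True when a (timezone-aware) log record has outlived its retention window."""
    days = RETENTION_DAYS.get(log_type, 30)  # default: short retention
    return datetime.now(timezone.utc) - written_at > timedelta(days=days)
```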


Why the CIO role should be split in two

The fact is that within enterprises, existing architecture is overly complex, often including new digital systems interconnected with legacy systems. This ‘hybrid’ architecture is a combination of best and bad practice. When there is an outage, the new digital platforms can invariably be restored to recover business process support. But because they do not operate in isolation, instead connecting with legacy technologies, business operations themselves may not fully recover if the legacy systems continue to be impacted by the outage. For most enterprises stuck in this hybrid state, the way forward is to be more disciplined around architecture. ... Simplifying architecture at an enterprise level is something the CIO and CISO should work on together as a shared goal. The benefits of doing so will accrue over time rather than immediately, hence there can be some reluctance to prioritize it. ... What does all this have to do with my opening discussion about the CIO and complementary IT executive roles? Splitting the CIO role into smaller and smaller pieces would be okay if doing so led to better outcomes. But I would argue that examples like the ones above show that the multiple-exec approach is not a success story we should be bragging about. In this structure, the two CIOs would share ownership of the IT strategy.


Generative AI vs. the software developer

AI is not going to turn your customer support people (Elvis bless them) into senior software developers. A customer support person might be able to think “I need to track the connection between items in inventory, the customer’s shopping cart, and the discount pricing for a given item,” but unless that person also knows how to code, they will have a seriously hard time instructing an AI model to generate the code they need. Most likely, they aren’t going to know if the code the AI produces even runs, let alone works correctly. But AI can help actual developers in many ways. It can look at existing code you have written and help you produce the next thing that you need to write. It can even write large routines and classes that you ask it to. But it is not going to create the things you need without you having a large say in what that is. You need to know how to craft a prompt to get precisely what is needed. ... Now, that prompt will be pretty effective in getting what is asked for. But the trick here, obviously, is that you have to know what a React component is, what Tailwind is, the fact that you want tests, what TypeScript is, what null is, and that you’d even need to handle missing values. There is a lot of knowledge and experience wrapped up in that prompt, and it’s not something that an inexperienced developer, or certainly a non-developer, would be able to write.
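The prompt the author goes on to discuss is not reproduced in this excerpt; a hypothetical prompt in the same spirit, with every detail invented for illustration, might read:

```
Create a React component in TypeScript, styled with Tailwind, that renders a
customer's shopping cart with discount pricing applied per item. Handle null
and missing values for the price and discount fields, and include unit tests
covering an empty cart and an item with no discount.
```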


Beyond the Screen: Humanising Digital Learning

Digital learning holds a lot of promise, aiming to bring the most dynamic and engaging elements of in-person training into the digital space. Interactive tools like quizzes, breakout rooms, and mini-tasks demonstrate just how far we’ve come in replicating real-world engagement online. However, we continue to see issues with retention and follow-through. Recent research shows that 66% of employees still find on-the-job learning to be more effective than formal online courses. This disconnect often stems from a lack of deep, meaningful engagement. Without it, employees are less likely to retain knowledge or apply their skills effectively in the workplace. This is particularly crucial when it comes to human skills—broader soft skills like communication, emotional intelligence, and critical thinking. Unlike technical skills that are typically learned ‘by the book’, softer skills are learned and applied every day. The solution lies in moving beyond passive consumption to real-world, interactive learning simulations. ... The shift to digital learning offers incredible potential, but realising that potential requires a thoughtful approach. By embracing AI-powered technologies and prioritising interactive, personalised and bite-sized content, organisations can create learning experiences that are engaging, practical and transformative.


Shadow AI: How unapproved AI apps are compromising security, and what you can do about it

Shadow AI introduces significant risks, including accidental data breaches, compliance violations and reputational damage. It’s the digital steroid that allows those using it to get more detailed work done in less time, often beating deadlines. Entire departments have shadow AI apps they use to squeeze more productivity into fewer hours. “I see this every week,” Vineet Arora, CTO at WinWire, recently told VentureBeat. “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore.” ... “If you paste source code or financial data, it effectively lives inside that model,” Golan warned. Arora and Golan find companies unwittingly training public models by defaulting to shadow AI apps for a wide variety of complex tasks. Once proprietary data gets into a public-domain model, more significant challenges begin for any organization. It’s especially challenging for publicly held organizations that often have significant compliance and regulatory requirements. Golan pointed to the coming EU AI Act, which “could dwarf even the GDPR in fines,” and warns that regulated sectors in the U.S. risk penalties if private data flows into unapproved AI tools. There’s also the risk of runtime vulnerabilities and prompt injection attacks that traditional endpoint security and data loss prevention (DLP) systems and platforms aren’t designed to detect and stop.


Think being CISO of a cybersecurity vendor is easy? Think again

When people in this industry hear that a CISO is working at a cybersecurity vendor, it can trigger a number of assumptions — many of them misguided. There’s a stereotype that the role isn’t “real” CISO work, that it’s more akin to being a field CISO, someone primarily outward-facing and focused on supporting sales or amplifying the brand. The assumption goes something like this: How hard can it be to secure a security company, and isn’t the “real” work done at companies outside of this bubble? ... Some might think that working at a security company limits your perspective of what’s out there in the broader industry, but I found the opposite to be true. I gained a deeper understanding of how organizations evaluate security solutions and what they truly care about. I saw firsthand the challenges customers faced when implementing security tools, and that experience gave me empathy, insight, and a renewed ability to speak their language. Now that I’m back in industry, I’m bringing that perspective with me. The transition wasn’t a step “down” or a shift away from anything; it was just the next phase in my career. Security leadership is security leadership, no matter where you practice it. The challenges remain complex, the responsibilities remain vast, and the importance of aligning security with business outcomes remains paramount.


Lack of regulations, oversight in health care IT can cause harm

Increasingly, health care organizations have outsourced their health IT infrastructure to companies owned and operated by private equity, venture capital and Big Tech firms that view them as platforms to experiment with unproven AI and machine-learning tools. "The unregulated integration of AI tools into these systems will make it even harder to protect patients' rights," Appelbaum said. "Moreover, because these records contain so much information and are centralized, they are among the most lucrative targets for cyberattacks and hackers," Batt said, noting that in 2024, data breaches exposed the health records of more than 200 million Americans. As a result, health care organizations must now invest billions more in cybersecurity systems owned and operated by venture capital, private equity and Big Tech. The authors argue that the federal government is once again behind in setting safeguards for the adoption of new health IT, and that the lessons from 30 years of attempts to set adequate standards for information-sharing in electronic health systems—as detailed in these reports—should spur regulators to act quickly and rein in unregulated financial activities in health IT. Batt explained, "The history of the health IT implementation and the lack of sufficient regulatory oversight and enforcement of standards should give us great pause for the current enthusiasm over the adoption of AI and machine learning in health information systems."


The Future of Data: How Decision Intelligence is Revolutionizing Data

Decision Intelligence is an interdisciplinary field that uses AI to enhance all aspects of decision-making across all areas of a business. It blends concepts of Data Science (statistics, machine learning, AI, analytics) with Behavioral Sciences (psychology, neuroscience, economics, and managerial sciences) to understand how decisions are made and how outcomes are measured. ... Decision Intelligence (DI) can be considered a subset of applied AI: it uses AI to build a reliable data foundation by collecting, organizing, and connecting data, and then applies AI and analytics to turn that data into useful insights for better decision-making. In short, while AI provides the technology to mimic human intelligence, DI focuses on applying that technology to improve how decisions are made. ... You can use any of your machine learning models, like regression models, classification models, time series forecasting models, clustering algorithms, or reinforcement learning for implementing Decision Intelligence. These machine learning models will help identify patterns in the data and make predictions based on those patterns, but decision intelligence will take that information one step further by incorporating it into a broader framework that can actively guide the decision-making process by considering the predictions and the potential outcomes and consequences of different choices.
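A toy sketch of the distinction, assuming an invented churn scenario with made-up costs and probabilities: the ML model supplies a prediction, and the decision-intelligence layer weighs outcomes and consequences before choosing an action.

```python
# Illustrative sketch of decision intelligence on top of a prediction.
# All payoff numbers and probabilities below are invented for the example.
def choose_action(p_churn: float) -> str:
    """Pick the action with the best expected value, not just the best prediction."""
    retention_offer_cost = 20.0      # cost of a discount offer (assumed)
    customer_lifetime_value = 500.0  # value of keeping the customer (assumed)
    offer_success_rate = 0.4         # chance the offer prevents churn (assumed)

    # Expected value of intervening vs. doing nothing.
    ev_offer = p_churn * offer_success_rate * customer_lifetime_value - retention_offer_cost
    return "send_retention_offer" if ev_offer > 0.0 else "do_nothing"

# A classifier supplies p_churn; the decision layer weighs outcomes and costs.
print(choose_action(p_churn=0.3))  # 0.3 * 0.4 * 500 - 20 = 40 > 0 -> send_retention_offer
```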


ManpowerGroup exec explains how to manage an AI workforce

It’s not just a technology anymore. We are looking for individuals that have the industry experience. We can take somebody with industry experience and train them on the technical part of the job. “It’s a lot harder for us to take somebody with the technical skills and teach them how the industry works. I think there’s a focus on looking at the soft skills: the problem solving, the complex reasoning ability, and communications. Because it’s not just developing AI for the sake of software technology; it’s to address that larger business problem. It’s about looking at all of the business functions, and taking all of that into consideration. ... The problem is [that] the gap is getting wider between those employees who understand AI technology and are willing to learn more about it and those who don’t want to have anything to do with it. But I think everybody will be a technologist, eventually. It’s going to be talent augmented by technology. ... “There are so many things, and it’s happening so fast. So, we are still learning as fast as we can. We’re trying to understand what the impact of AI will be, and how it will change our business models. Even from a talent organization like ours, which is providing global talent solutions, what does that do for us? Now, our company is going to start looking for your talent plus the AI agents you’ll need. So AI becomes part of a hiring solution. 


Debunking the AI Hype: Inside Real Hacker Tactics

While headlines are trumpeting AI as the one-size-fits-all new secret weapon for cybercriminals, the statistics—again, so far—are telling a very different story. In fact, after poring over the data, Picus Labs found no meaningful upswing in AI-based tactics in 2024. Yes, adversaries have started incorporating AI for efficiency gains, such as crafting more credible phishing emails or creating and debugging malicious code, but they haven't yet tapped AI's transformational power in the vast majority of their attacks. Indeed, the data from the Red Report 2025 shows that you can still thwart the majority of attacks by focusing on tried-and-true TTPs. ... Attackers are increasingly targeting password stores, browser-stored credentials, and cached logins, leveraging stolen keys to escalate privileges and spread within networks. This threefold jump underscores the urgent need for ongoing and robust credential management combined with proactive threat detection. Modern infostealer malware orchestrates multi-stage heists blending stealth, automation, and persistence. With legitimate processes cloaking malicious operations and actual day-to-day network traffic hiding nefarious data uploads, bad actors can exfiltrate data right under your security team's proverbial nose, no Hollywood-style "smash-and-grab" needed. Think of it as the digital equivalent of a perfectly choreographed burglary.

Daily Tech Digest - February 06, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Here’s How Standardization Can Fix the Identity Security Problem

Fragmentation in identity security doesn’t only waste resources; it also leaves businesses exposed to threat actors, leading to potential reputational and financial damage if systems are compromised. Misconfigurations often arise when teams are pressured to deliver quickly without adequate frameworks. Fragmentation also forces teams to juggle mismatched tools, creating gaps in oversight. These gaps become weak points for attackers, leading to cascading failures. ... Standardization transforms the complexity of identity management into a straightforward, structured process. Instead of piecing together bespoke solutions, leveraging established frameworks can deliver robust, scalable and future-proof security. ... Developers often need to weigh short-term challenges against long-term gains. Adopting standardized identity frameworks is one decision where the long-term benefits are clear. Increased efficiency, security and scalability contribute to a more sustainable development process. Standardization equips us with ready-to-use solutions for essential features, freeing us to focus on innovation. It also enables applications to meet compliance requirements without added strain on teams. By investing in frameworks like IPSIE, we can future-proof our systems while reducing the burden on individual developers.


How Data Contracts Support Collaboration between Data Teams

Data contracts are for data what APIs are for software systems, Christ said. They are an interface specification between a data provider and their data consumers. Data contracts specify the provided data model with the syntax, format, and semantics, but also contain data quality guarantees, service-level objectives, and terms and conditions for using the data, Christ mentioned. They also define the owner of the provided data product that is responsible if there are any questions or issues, he added. Data mesh is an important driver for data contracts, as data mesh introduces distributed ownership of data products, Christ said. Before that, we usually had just one central team that was responsible for all data and BI activities, with no need to specify interfaces with other teams. ... Data providers benefit by gaining visibility into which consumers are accessing their data. Permissions can be automated accordingly, and when changes need to be implemented in a data product, a new version of the data contract can be introduced and communicated with the consumers, Christ said. With data contracts, we have very high-quality metadata, Christ said. This metadata can be further leveraged to optimize governance processes or build an enterprise data marketplace, enabling better discoverability, transparency, and automated access management across the organization to make data available for more teams.
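As a rough illustration of what such an interface specification might contain, here is a sketch of a data contract expressed as a Python dict (real contracts are often written in YAML, e.g., following the Data Contract Specification); all field names and values below are assumptions:

```python
# Illustrative data contract between a data provider and its consumers.
orders_contract = {
    "id": "orders.v2",
    "owner": "checkout-team",                  # who answers questions and fixes issues
    "schema": {                                # data model: syntax, format, semantics
        "order_id":  {"type": "string", "required": True},
        "amount":    {"type": "decimal", "unit": "EUR"},
        "placed_at": {"type": "timestamp", "format": "ISO-8601"},
    },
    "quality": [                               # data quality guarantees
        {"rule": "order_id is unique"},
        {"rule": "amount >= 0"},
    ],
    "slo": {"freshness": "15m", "availability": "99.9%"},  # service-level objectives
    "terms": {"usage": "internal analytics only", "pii": False},
}
```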


How Agentic AI will be Weaponized for Social Engineering Attacks

To combat advanced social engineering attacks, consider building or acquiring an AI agent that can assess changes to the attack surface, detect irregular activities indicating malicious actions, analyze global feeds to detect threats early, monitor deviations in user behavior to spot insider threats, and prioritize patching based on vulnerability trends. ... Security awareness training is a non-negotiable component of bolstering human defenses. Organizations must go beyond traditional security training and leverage tools that can do things like assign engaging content to users based on risk scores and failure rates, dynamically generate quizzes and social engineering scenarios based on the latest threats, trigger bite-sized refreshers, etc. ... Human intuition and vigilance are critical in combating social engineering threats. Organizations must double down on fostering a culture of cybersecurity, educating employees on the risks of social engineering and the impact on the organization, training to identify and report such threats, and empowering them with tools that can improve security behavior. Gartner predicts that by 2028, a third of our interactions with AI will shift from simply typing commands to fully engaging with autonomous agents that can act on their own goals and intentions. Obviously, cybercriminals won’t be far behind in exploiting these advancements for their misdeeds.

As businesses expand their cloud services and integrate AI, IoT, and other digital tools, the attack surface grows exponentially. Cybercriminals are exploiting this vast surface with increasingly sophisticated tactics, including AI-driven attacks that can bypass traditional security measures.

Lack of visibility across multicloud environments: Many businesses rely on a combination of private, public, and hybrid cloud solutions, which can create visibility gaps. Security teams struggle to manage and monitor resources across various platforms, making it difficult to detect vulnerabilities or respond to threats in real time.

Misconfigurations and human error: Cloud misconfigurations remain one of the leading causes of data breaches. ... Ongoing risk assessments are essential for identifying vulnerabilities and understanding the potential attack vectors in cloud environments. Regular penetration testing can help organisations identify and patch security gaps proactively. These assessments, combined with continuous monitoring, ensure the security posture evolves alongside emerging threats.

Centralised threat detection and response: Implementing a centralised security platform that aggregates data from multiple cloud environments can streamline threat detection and response. By correlating network events with cloud activities, security teams can gain deeper insights into potential risks and reduce the mean time to resolution (MTTR) for incidents.


Is 2025 the year of quantum computing?

As quantum computing research gradually inches toward real-world usability, you might wonder where we’ll see the impacts of this technology, both short- and long-term. One of the most immediately important areas is cryptography. Since a quantum computer can take on many states simultaneously, something like factoring large numbers can proceed in parallel, relying on the superposition of particle states to explore many possible outcomes at once. There is also a tantalizing potential for cross-over between machine learning and quantum computing. Here, the probabilistic nature of neural networks and AI in general seems to lend itself to being modeled at a more fundamental level, and with far greater efficiency, using the hardware capabilities of quantum computers. How powerful would an AI system be if it rested on quantum hardware? Another area is the development of alternative energy sources, including fusion. Using matter itself to model reality opens up possibilities we can’t yet fully predict. Drug discovery and material design are also areas of interest for quantum calculations. At the hardware level, quantum systems allow us to use matter itself to model the complexity of designing useful matter. These and other exciting developments, especially in error correction, seem to indicate quantum computing’s time is finally coming. 


The overlooked risks of poor data hygiene in AI-driven organizations

A significant risk posed by AI-enabled apps is called ‘AI oversharing,’ where enterprise applications expose sensitive information through poorly defined access controls. This is especially prevalent in retrieval-augmented generation (RAG) applications when original source permissions aren’t honoured throughout the system. Imagine for a minute that you are an enterprise with millions of documents containing decades of enterprise knowledge, and you want to leverage AI through a RAG-based architecture. A typical approach is to load all of those documents into a vector database. If you exposed that data through an AI chatbot without honouring the original permissions on those documents, then anyone issuing a prompt could access any of that data. ... Organizations need to implement a methodical process for assessing and preparing data for AI applications, as sophisticated attacks like prompt injection and unauthorized data access become more prevalent. Begin with a thorough inventory of your data stores, including file and document stores, support and ticketing systems, and any other data sources that you’ll source your enterprise data from. Then work to understand its potential use in AI applications and identify critical gaps or inconsistencies.
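A minimal sketch of the mitigation, assuming a vector store with metadata filtering; `vector_store`, its `search(..., filter=...)` interface, and `embed` are hypothetical stand-ins for whatever store and embedding function you use. The point is that the source ACLs travel with each chunk and are enforced at query time:

```python
# Permission-aware RAG: carry original document ACLs into the vector store,
# then filter retrieval so a prompt can only surface chunks the user may read.
def ingest(doc, vector_store, embed):
    for chunk in doc.chunks():
        vector_store.add(
            embedding=embed(chunk.text),
            text=chunk.text,
            metadata={"allowed_groups": doc.acl_groups},  # original permissions
        )

def retrieve(query, user_groups, vector_store, embed, k=5):
    # Enforce the source permissions at query time, not just at ingestion.
    return vector_store.search(
        embedding=embed(query),
        filter={"allowed_groups": {"overlaps": user_groups}},
        top_k=k,
    )
```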


Who Is Attacking Smart Factories? Understanding the Evolving Threat Landscape

Cybercriminals no longer rely on broad, generalized attacks but have begun to tailor their malware specifically for OT systems. For example, they know which files on engineering workstations or MES systems are most important for production and will specifically target them for encryption. This shift has also seen an increase in multi-vector attacks. Attackers might gain initial access through phishing emails but, once inside, use tools that enable them to move seamlessly between IT and OT networks. The goal is no longer just to hold data hostage but to encrypt or destroy files that are crucial to the manufacturing process. With this targeted approach, attackers increase the likelihood that companies will pay the ransom, especially when systems critical to production are held hostage. ... The increasing sophistication of these attacks highlights the need for manufacturers to adopt a holistic approach to cybersecurity. While technical countermeasures like firewalls, endpoint security, and intrusion detection systems are important, they are not enough on their own. A comprehensive security strategy must address both IT and OT environments and recognize the interdependence between these systems. Manufacturers should focus on risk assessment across their entire value chain, from the factory floor to the supply chain and customer-facing systems. 


Legislators demand truth about OPM email server

Erik Avakian, security counselor at Info-Tech Research Group, said the “recent development regarding OPM and the alleged issues regarding an email server being deployed on the agency network and emails being distributed by the agency to federal employees raise potential security and privacy concerns that, if substantiated, could be out of sync with well-defined cybersecurity best practices and privacy regulations.” Most important, he said, would be the way in which the system had been deployed onto the federal network, “particularly in light of the many existing US federal government-required processes, procedures, and checks a system would need to undergo before receiving green light approval for such a fast-tracked deployment. There could be fast-track processes in place for such instances.” However, even in such cases, said Avakian, “any deployment of systems or tools would certainly, as best practice, need to be reviewed for security vulnerabilities, and its architecture checked and hardened, at a minimum, to be aligned with the federal security requirements for systems deployed on the network prior to going live.” The question would be whether the processes were followed, he said. “In any case, there could be quite a checklist of issues regarding Compliance with Cybersecurity Frameworks, Best Practices, and the Federal Government’s Memo regarding the Implementation of Zero Trust, to name a few, as well as numerous privacy laws.”


Open-Source AI: Power Shift or Pandora's Box?

"This is no longer just a technological race, it’s a geopolitical one. While open-source models offer accessibility, their full training pipeline and datasets often remain undisclosed. Nations are using AI to influence global markets, trade policies and digital sovereignty," said Amitkumar Shrivastava, global distinguished engineer and head of AI at Fujitsu Consulting India. "The real winners will be those who balance innovation with regulatory foresight and ethical AI practices." While open-source AI fosters innovation, it also raises concerns about security, compliance and ethical risks. Increased accessibility introduces challenges such as misinformation, deepfake generation and unauthorized automation. "DeepSeek is open-source, which is very important, as it allows users to download the models and run them on their own hardware if they have the capacity. We are already seeing others create local installations of DeepSeek models even without GPUs," Professor Balaraman Ravindran, IIT Madras, wrote in his blog. "Assuming that DeepSeek's claims on infrastructure reductions are true, some researchers are still not fully convinced and are in the process of verifying the claims. There will be an immediate breakdown of the monopolistic hold of a few technology giants with deep pockets to control the AI market - much like India developing cheap Corona vaccine," said Dr. Sanjeev Kumar.


The Cost of AI Security

The cost of AI and its security needs is going to be an ongoing conversation for enterprise leaders. “It’s still so early in the cycle that most security organizations are trying to get their arms around what they need to protect, what’s actually different. What do [they] already have in place that can be leveraged?” says Saeedi. Who is a part of these evolving conversations? CISOs, naturally, have a leading role in defining the security controls applied to an enterprise’s AI tools, but given the growing ubiquity of AI, a multistakeholder approach is necessary. Other C-suite leaders, the legal team, and the compliance team often have a voice. Saeedi is seeing cross-functional committees forming to assess AI risks, implementation, governance, and budgeting. As these teams within enterprises begin to wrap their heads around various AI security costs, the conversation needs to include AI vendors. “The really key part for any security or IT organization, when [we’re] talking with the vendor is to understand, ‘We’re going to use your AI platform but what are you going to do with our data?’” Is that vendor going to use an enterprise’s data for model training? How is that enterprise’s data secured? How does an AI vendor address the potential security risks associated with the implementation of its tool?

Daily Tech Digest - January 31, 2025


Quote for the day:

“If you genuinely want something, don’t wait for it–teach yourself to be impatient.” -- Gurbaksh Chahal


GenAI fueling employee impersonation with biometric spoofs and counterfeit ID fraud

The annual AuthenticID report underlines the surging wave of AI-powered identity fraud, with rising biometric spoofs and counterfeit ID fraud attempts. The 2025 State of Identity Fraud Report also looks at how identity verification tactics and technology innovations are tackling the problem. “In 2024, we saw just how sophisticated fraud has now become: from deepfakes to sophisticated counterfeit IDs, generative AI has changed the identity fraud game,” said Blair Cohen, AuthenticID founder and president. ... “In 2025, businesses should embrace the mentality to ‘think like a hacker’ to combat new cyber threats,” said Chris Borkenhagen, chief digital officer and information security officer at AuthenticID. “Staying ahead of evolving strategies such as AI deepfake-generated documents and biometrics, emerging technologies, and bad actor account takeover tactics are crucial in protecting your business, safeguarding data, and building trust with customers.” ... Face biometric verification company iProov has identified the Philippines as a particular hotspot for digital identity fraud, with corresponding need for financial institutions and consumers to be vigilant. “There is a massive increase at the moment in terms of identity fraud against systems using generative AI in particular and deepfakes,” said iProov chief technology officer Dominic Forrest.


Cyber experts urge proactive data protection strategies

"Every organisation must take proactive measures to protect the critical data it holds," Montel stated. Emphasising foundational security practices, he advised organisations to identify their most valuable information and protect potential attack paths. He noted that simple steps can drastically contribute to overall security. On the consumer front, Montel highlighted the pervasive nature of data collection, reminding individuals of the importance of being discerning about the personal information they share online. "Think before you click," he advised, underscoring the potential of openly shared public information to be exploited by cybercriminals. Adding to the discussion on data resilience, Darren Thomson, Field CTO at Commvault, emphasised the changing landscape of cyber defence and recovery strategies needed by organisations. Thompson pointed out that mere defensive measures are not sufficient; rapid recovery processes are crucial to maintain business resilience in the event of a cyberattack. The concept of a "minimum viable company" is pivotal, where businesses ensure continuity of essential operations even when under attack. With cybercriminal tactics becoming increasingly sophisticated, doing away with reliance solely on traditional backups is necessary. 


Trump Administration Faces Security Balancing Act in Borderless Cyber Landscape

The borderless nature of cyber threats and AI, the scale of worldwide commerce, and the globally interconnected digital ecosystem pose significant challenges that transcend partisanship. As recent experience makes us all too aware, an attack originating in one country, state, sector, or company can spread almost instantaneously, and with devastating impact. Consequently, whatever the ideological preferences of the Administration, from a pragmatic perspective cybersecurity must be a collaborative national (and international) activity, supported by regulations where appropriate. It’s an approach taken in the European Union, whose member states are now subject to the Second Network Information Security Directive (NIS2)—focused on critical national infrastructure and other important sectors—and the financial sector-focused Digital Operational Resilience Act (DORA). Both regulations seek to create a rising tide of cyber resilience that lifts all ships and one of the core elements of both is a focus on reporting and threat intelligence sharing. In-scope organizations are required to implement robust measures to detect cyber attacks, report breaches in a timely way, and, wherever possible, share the information they accumulate on threats, attack vectors, and techniques with the EU’s central cybersecurity agency (ENISA).


Infrastructure as Code: From Imperative to Declarative and Back Again

Today, tools like CDK for Terraform (CDKTF) and Pulumi have become popular choices among engineers. These tools allow developers to write IaC using familiar programming languages like Python, TypeScript, or Go. At first glance, this is a return to imperative IaC. However, under the hood, they still generate declarative configurations — such as Terraform plans or CloudFormation templates — that define the desired state of the infrastructure. Why the resurgence of imperative-style interfaces? The answer lies in a broader trend toward improving developer experience (DX), enabling self-service, and enhancing accessibility. Much like the shifts we’re seeing in fields such as platform engineering, these tools are designed to streamline workflows and empower developers to work more effectively. ... The current landscape represents a blending of philosophies. While IaC tools remain fundamentally declarative in managing state and resources, they increasingly incorporate imperative-like interfaces to enhance usability. The move toward imperative-style interfaces isn’t a step backward. Instead, it highlights a broader movement to prioritize developer accessibility and productivity, aligning with the emphasis on streamlined workflows and self-service capabilities.
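A small Pulumi-style sketch in Python illustrates the blend: the code reads imperatively (ordinary loops and variables), yet each resource declaration feeds a declarative desired-state graph that the engine reconciles, much like a Terraform plan. The bucket names and tags are illustrative:

```python
# Pulumi-style sketch: imperative-looking code producing declarative state.
import pulumi
import pulumi_aws as aws

for env in ["dev", "staging", "prod"]:          # ordinary Python control flow
    bucket = aws.s3.Bucket(
        f"logs-{env}",
        tags={"environment": env},
    )
    pulumi.export(f"bucket_{env}", bucket.id)   # recorded as desired state, not executed imperatively
```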


How to Train AI Dragons to Solve Network Security Problems

We all know AI’s mantra: More data, faster processing, large models and you’re off to the races. But what if a problem is so specific — like network or DDoS security — that it doesn’t have a lot of publicly or privately available data you can use to solve it? As with other AI applications, the quality of the data you feed an AI-based DDoS defense system determines the accuracy and effectiveness of its solutions. To train your AI dragon to defend against DDoS attacks, you need detailed, real-world DDoS traffic data. Since this data is not widely and publicly available, your best option is to work with experts who have access to this data or, even better, have analyzed and used it to train their own AI dragons. To ensure effective DDoS detection, look at real-world, network-specific data and global trends as they apply to the network you want to protect. This global perspective adds valuable context that makes it easier to detect emerging or worldwide threats. ... Predictive AI models shine when it comes to detecting DDoS patterns in real-time. By using machine learning techniques such as time-series analysis, classification and regression, they can recognize patterns of attacks that might be invisible to human analysts. 
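As one simple instance of the time-series techniques mentioned above, here is a toy rolling z-score detector; the window size and threshold are invented, and real DDoS models combine many such signals with classification and regression:

```python
# Toy time-series anomaly detector: flag a traffic sample when it sits far
# outside the recent baseline of requests per second.
from collections import deque
from statistics import mean, stdev

class RateAnomalyDetector:
    def __init__(self, window=60, threshold=4.0):
        self.history = deque(maxlen=window)  # last `window` samples (e.g., req/s)
        self.threshold = threshold           # z-score cutoff (assumed)

    def observe(self, requests_per_second: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:          # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (requests_per_second - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(requests_per_second)
        return anomalous
```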


How law enforcement agents gain access to encrypted devices

When a mobile device is seized, law enforcement can request the PIN, password, or biometric data from the suspect to access the phone if they believe it contains evidence relevant to an investigation. In England and Wales, if the suspect refuses, the police can give a notice for compliance, and a further refusal is in itself a criminal offence under the Regulation of Investigatory Powers Act (RIPA). “If access is not gained, law enforcement use forensic tools and software to unlock, decrypt, and extract critical digital evidence from a mobile phone or computer,” says James Farrell, an associate at cyber security consultancy CyXcel. “However, there are challenges on newer devices and success can depend on the version of operating system being used.” ... Law enforcement agencies have pressured companies to create “lawful access” solutions, particularly on smartphones; take Apple as an example. “You also have the co-operation of cloud companies, which if backups are held can sidestep the need to break the encryption of a device all together,” Closed Door Security’s Agnew explains. The security community has long argued against law enforcement backdoors, not least because they create security weaknesses that criminal hackers might exploit. “Despite protests from law enforcement and national security organizations, creating a skeleton key to access encrypted data is never a sensible solution,” CreateFuture’s Watkins argues.


The quantum computing reality check

Major cloud providers have made quantum computing accessible through their platforms, which creates an illusion of readiness for enterprise adoption. However, this accessibility masks a fatal flaw: Most quantum computing applications remain experimental. Indeed, most require deep expertise in quantum physics and specialized programming knowledge. Real-world applications are severely limited, and the costs are astronomical compared to the actual value delivered. ... The timeline to practical quantum computing applications is another sobering reality. Industry experts suggest we’re still 7 to 15 years away from quantum systems capable of handling production workloads. This extended horizon makes it difficult to justify significant investments. Until then, more immediate returns could be realized through existing technologies. ... The industry’s fascination with quantum computing has made companies fear being left behind or, worse, not being part of the “cool kids club”; they want to deliver extraordinary presentations to investors and customers. We tend to jump into new trends too fast because the allure of being part of something exciting and new is just too compelling. I’ve fallen into this trap myself. ... Organizations must balance their excitement for quantum computing with practical considerations about immediate business value and return on investment. I’m optimistic about the potential value in quantum as a service (QaaS).


Digital transformation in banking: Redefining the role of IT-BPM services

IT-BPM services are the engine of digital transformation in banking. They streamline operations through automation technologies like RPA, enhancing efficiency in processes such as customer onboarding and loan approvals. This automation reduces errors and frees up staff for strategic tasks like personalised customer support. By harnessing big data analytics, IT-BPM empowers banks to personalise services, detect fraud, and make informed decisions, ultimately improving both profitability and customer satisfaction. Robust security measures and compliance monitoring are also integral, ensuring the protection of sensitive customer data in the increasingly complex digital landscape. ... IT-BPM services are crucial for creating seamless, multi-channel customer experiences. They enable the development of intuitive platforms, including AI-driven chatbots and mobile apps, providing instant support and convenient financial management. This focus extends to personalised services tailored to individual customer needs and preferences, and a truly integrated omnichannel experience across all banking platforms. Furthermore, IT-BPM fosters agility and innovation by enabling rapid development of new digital products and services and facilitating collaboration with fintech companies.


Revolutionizing data management: Trends driving security, scalability, and governance in 2025

Artificial Intelligence and Machine Learning transform traditional data management paradigms by automating labour-intensive processes and enabling smarter decision-making. In the upcoming years, augmented data management solutions will drive efficiency and accuracy across multiple domains, from data cataloguing to anomaly detection. AI-driven platforms process vast datasets to identify patterns, automating tasks like metadata tagging, schema creation and data lineage mapping. ... In 2025, data masking will not be merely a compliance tool for GDPR, HIPAA, or CCPA; it will be a strategic enabler. With the rise in hybrid and multi-cloud environments, businesses will increasingly need to secure sensitive data across diverse systems. Solutions from vendors like IBM, K2view, Oracle and Informatica will revolutionize data masking by offering scalable, real-time, context-aware masking. ... Real-time integration enhances customer experiences through dynamic pricing, instant fraud detection, and personalized recommendations. These capabilities rely on distributed architectures designed to handle diverse data streams efficiently. The focus on real-time integration extends beyond operational improvements.
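For intuition, here is a minimal sketch of deterministic masking using an HMAC: the same input always maps to the same token, so joins across masked datasets keep working. Key management and format preservation are real-world concerns this toy version ignores, and the hard-coded key is a stand-in for one fetched from a vault or KMS:

```python
# Deterministic masking sketch: stable pseudonyms that preserve referential integrity.
import hashlib
import hmac

MASKING_KEY = b"rotate-me-and-store-in-a-vault"  # assumption: retrieved from a KMS in practice

def mask(value: str, length: int = 12) -> str:
    digest = hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()
    return "MASK_" + digest[:length]

print(mask("alice@example.com"))  # same token every run, across every dataset
```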


Deploying AI at the edge: The security trade-offs and how to manage them

The moment you bring compute nodes into the far edge, you’re automatically exposing a lot of security challenges in your network. Even if you expect them to be “disconnected devices,” they could intermittently connect to transmit data. So, your security footprint is expanded. You must ensure that every piece of the stack you’re deploying at the edge is secure and trustworthy, including the edge device itself. When considering security for edge AI, you have to think about transmitting the trained model, runtime engine, and application from a central location to the edge, opening up the opportunity for a person-in-the-middle attack. ... In military operations, continuous data streams from millions of global sensors generate an overwhelming volume of information. Cloud-based solutions are often inadequate due to storage limitations, processing capacity constraints, and unacceptable latency. Therefore, edge computing is crucial for military applications, enabling immediate responses and real-time decision-making. In commercial settings, many environments lack reliable or affordable connectivity. Edge AI addresses this by enabling local data processing, minimizing the need for constant communication with the cloud. This localized approach enhances security. Instead of transmitting large volumes of raw data, only essential information is sent to the cloud. 
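One common mitigation for the person-in-the-middle risk described above is to verify a pinned digest before loading a model artifact on the edge node. A minimal sketch follows; the expected digest is a placeholder assumed to arrive out-of-band via signed release metadata:

```python
# Reject a tampered model artifact shipped over an untrusted link by checking
# its SHA-256 digest against a pinned value before handing it to the runtime.
import hashlib

EXPECTED_SHA256 = "<pinned-digest-from-signed-release-metadata>"  # placeholder

def load_model_safely(path: str) -> bytes:
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"model artifact rejected: digest mismatch ({digest})")
    return data  # hand off to the runtime engine only after verification
```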


Daily Tech Digest - January 14, 2025

Why Your Business May Want to Shift to an Industry Cloud Platform

Industry cloud services typically embed the data model, processes, templates, accelerators, security constructs, and governance controls required by the adopter's industry, says Shriram Natarajan, a director at technology research and advisory firm ISG, in an online interview. "This [approach] allows faster development of new functionality, better security and governance, and an enhanced user/stakeholder experience." ... Enterprises spanning many industries can benefit significantly by moving to an industry cloud platform, Campbell says. "Businesses that are faced with many regulations and operational requirements can especially benefit from the specialized services industry cloud platforms," he notes, adding that many industry cloud platforms are preconfigured to meet specific needs, which can help accelerate the time to value realized. Many enterprises have a blinkered view on verticalized solutions, Natarajan says. "They tend to see the platforms they already have in-house and look for solutions that these platforms provide." He believes that enterprise IT and business teams can both benefit from looking at the landscape of verticalized industry cloud platforms.


FRAML Reality Check: Is Full Integration Really Practical?

While integration between AML and fraud teams is a desirable goal, experts say it should not be viewed as the best solution. Paul Dunlop, insider risk consultant at a financial services firm, stressed the importance of collaboration over integration. "I am against the oversimplification of fraud and AML integration. Banking risks are multifaceted, involving not just fraud and AML but also cybersecurity, privacy and other domains," Dunlop said. "Integration decision should be assessed based on the bank's maturity level, regulatory environment and unique operational needs." "Cost should not be the sole factor behind this decision. One must assess operational and risk management trade-offs," he said. Meng Liu, senior analyst at Forrester, said that despite AML and fraud being two distinct functions at present, the trend toward more consolidated and integrated financial crime management is real. ... Despite the differences in fraud and AML teams, some use cases, such as scams, human trafficking and child exploitation, cry out for better collaboration, Mitchell said. "These require shared data and aligned strategies." But high-volume fraud detection such as check and card fraud is less suited for joint efforts due to operational complexity.


Ransomware abuses Amazon AWS feature to encrypt S3 buckets

In the attacks by Codefinger, the threat actors used compromised AWS credentials to locate victims' keys with 's3:GetObject' and 's3:PutObject' privileges, which allow these accounts to encrypt objects in S3 buckets through SSE-C. The attacker then generates an encryption key locally to encrypt the target's data. Since AWS doesn't store these encryption keys, data recovery without the attacker's key is impossible, even if the victim reports unauthorized activity to Amazon. "By utilizing AWS native services, they achieve encryption in a way that is both secure and unrecoverable without their cooperation," explains Halcyon. Next, the attacker sets a seven-day file deletion policy using the S3 Object Lifecycle Management API and drops ransom notes in all affected directories that instruct the victim to pay a ransom to a given Bitcoin address in exchange for the custom AES-256 key. ... Halcyon also suggests that AWS customers set restrictive policies that prevent the use of SSE-C on their S3 buckets. Concerning AWS keys, unused keys should be disabled, active ones should be rotated frequently, and account permissions should be kept at the minimum level required.
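A sketch of the restrictive-policy suggestion, written with boto3: the bucket policy denies any PutObject request that carries the SSE-C header, so compromised credentials cannot re-encrypt objects under an attacker-held key. The bucket name is illustrative:

```python
# Deny SSE-C uploads to an S3 bucket (Halcyon's suggested mitigation).
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySSECUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-company-bucket/*",
        "Condition": {
            # Matches when the request includes a customer-provided encryption key.
            "Null": {"s3:x-amz-server-side-encryption-customer-algorithm": "false"}
        },
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="my-company-bucket", Policy=json.dumps(policy)
)
```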


How AI and ML are transforming digital banking security

By continuously learning from new data, ML improves over time, adapting to the organization’s needs and the ever-evolving fraud tactics. This supports reducing false positives, ensuring legitimate transactions proceed smoothly while maintaining security. Predictive analytics also help identify potential threats before they materialize, and fraud scoring prioritizes high-risk activities for action. AI/ML-powered systems are scalable and effective against sophisticated threats, such as synthetic identity fraud and account takeovers, and can monitor multiple banking channels simultaneously. They automate detection, lowering operational costs, and providing seamless customer experiences, thereby enhancing trust. However, nothing is a silver bullet, and challenges such as algorithmic bias, data privacy concerns, and the need for explainable models persist. Still, despite these potential hurdles, AI and ML are reshaping digital banking security, equipping financial institutions with proactive tools to counter fraud while safeguarding customer trust and regulatory compliance. ... Advanced technologies like AI and ML are helping institutions monitor transactions in real time, detecting anomalies and preventing fraud without directly involving users. Meanwhile, encryption and tokenization protect sensitive data, ensuring transactions remain secure in the background.
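As a minimal unsupervised example of the anomaly-detection idea, here is an IsolationForest sketch with scikit-learn; the toy features and contamination rate are assumptions, and production systems blend many models, rules, and feedback loops:

```python
# Minimal unsupervised fraud-scoring sketch with scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount, hour-of-day, transactions in the last 24h (toy features).
transactions = np.array([
    [25.0, 14, 3],
    [60.0, 10, 1],
    [18.5, 20, 4],
    [9500.0, 3, 22],   # the odd one out
])

model = IsolationForest(contamination=0.25, random_state=0).fit(transactions)
scores = model.decision_function(transactions)  # lower = more anomalous
flags = model.predict(transactions)             # -1 = flagged for review
print(list(zip(flags, scores.round(3))))
```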


The Evolution of Business Systems in the Digital Era

Systems of Record (SORs) serve as the foundation of organizational infrastructure, storing essential data such as customer information, financial transactions, and operational processes. These systems are designed to maintain structured and reliable records, ensuring data integrity, compliance, and security. They play a critical role in regulatory reporting, audits, and operational consistency. ... Systems of Engagement (SOEs) are the digital front doors of modern businesses, facilitating seamless and interactive communication with customers and employees. They go beyond simple data storage and retrieval, focusing on creating dynamic and personalized experiences across various channels. SOEs prioritize customer-centric approaches, ensuring businesses can deliver dynamic and interactive communication. ... Systems of Intelligence (SOIs) represent the pinnacle of data-driven decision making. Built upon the foundation of Systems of Record (SORs) and Systems of Engagement (SOEs), SOIs leverage the power of artificial intelligence (AI) and machine learning (ML) to transform raw data into actionable insights. Unlike their predecessors, SOIs go beyond simply identifying patterns and trends. They possess the ability to predict future outcomes and even prescribe optimal courses of action.


Gen AI strategies put CISOs in a stressful bind

One of the most problematic gen AI issues CISOs face is how casual many gen AI vendors are being when selecting the data used to train their models, Townsend said. “That creates a security risk for the organization.” ... generative AI’s penetration into SaaS solutions makes this more problematic. “The attack surface for gen AI has changed. It used to be enterprise users using foundation models provided by the biggest providers. Today, hundreds of SaaS applications have embedded LLMs that are in use across the enterprise,” said Routh, who today serves as chief trust officer at security vendor Saviynt. “Software engineers have more than 1 million open source LLMs at their disposal on HuggingFace.com.” ... All this can take a psychological toll on CISOs, Townsend surmised. “When they feel overwhelmed, they shut down,” he said. “They do what they feel they can, and they will ignore what they feel that they can’t control.” ... “The bad actors are feverishly working to exploit these new technologies in malicious ways, so the CISOs are right to be concerned about how these new gen AI solutions and systems can be exploited,” Taylor said. 


How Enterprises and Startups Can Master AI With Smarter Data Practices

For enterprises, however, supplying AI systems with the data they need to thrive is orders of magnitude more complicated. There are two main reasons for this: First, enterprises don’t have the information-aggregation abilities available in the consumer AI world. Consumer AI companies can use any public data on the web to train their AI models; think of it as an entire continent of information to which they have unfettered access. On the other hand, enterprise data exists within small, disparate, and oftentimes disconnected information archipelagos. Additionally, enterprises are working with many types of data, including relational data from operational systems, decades of poorly organized folders of documents, and audio and numeric data from payroll and financial systems. Further, enterprises must contend with additional layers of regulatory complexity regarding handling personal and private data. To build impactful AI tools, an enterprise’s algorithms must be fed or trained on specific data sets that span multiple sources, including the company’s human resources, finance, customer relationship management, supply chain management, and other systems.


Yes, you should use AI coding assistants—but not like that

AI is a must for software developers, but not because it removes work. Rather, it changes how developers should work. For those who just entrust their coding to a machine, well, the results are dire. ... Use AI wrong and things get worse, not better. Stanford researcher Yegor Denisov-Blanch notes that his team has found that AI increases both the amount of code delivered and the amount of code that needs reworking, which means that “actual ‘useful delivered code’ doesn’t always increase” with AI. In short, “some people manage to be less productive with AI.” So how do you ensure you get more done with coding assistants, not less? ... Here’s the solution: If you want to use AI coding assistants, don’t use them as an excuse not to learn to code. The robots aren’t going to do it for you. The engineers who will get the most out of AI assistants are those who know software best. They’ll know when to give control to the coding assistant and how to constrain that assistance (perhaps to narrow the scope of the problem they allow it to work on). Less-experienced engineers run the risk of moving fast but then getting stuck or not recognizing the bugs that the AI has created. ... AI can’t replace good programming, because it really doesn’t do good programming.


AI Tools Amplify API Security Threats Worldwide

The financial implications of API breaches prove substantial. According to Kong's report, 55% of organizations experienced an API security incident in the past year. Among those affected, 47% reported remediation costs exceeding $100,000, while 20% faced expenses surpassing $500,000. Gartner's research underscores this urgency, highlighting that API breaches typically result in ten times more leaked data than other types of security incidents. ... While AI technologies, particularly LLMs, drive unprecedented innovation, they introduce new vulnerabilities. These advanced tools enable attackers to exploit shadow APIs, bypass traditional defenses and manipulate API traffic in unexpected ways. The survey indicates that 84% of leaders predict AI and LLMs will increase the complexity of securing APIs over the next two to three years, emphasizing the need for immediate action. Despite 92% of organizations implementing measures to secure their APIs, 40% of leaders remain skeptical about whether their investments will adequately counter AI-driven risks. The regional disparity in preparedness stands out: 13% of U.S. organizations acknowledge taking no specific measures against AI threats, compared to 4% in the U.K.


From AI Assistants to Swarms of Thousands of Collaborating AI Agents: Is Your Architecture Ready?

Agentic AI is likely to create more issues in some areas than others. The Agentic Architecture Framework identifies seven areas that will require more support in the form of new or updated frameworks, tools and techniques to support Agentic AI capability-building and architecture development. ... Agentic AI Strategy begins with defining a clear target state across the Agentic AI maturity dimensions and levels. This step establishes the organization’s AI aspirations and provides a benchmark for future transformation. Once the target state is identified, the next step involves conducting a gap analysis to determine the differences between the current capabilities identified in the previous step and the organization’s ambition. With these gaps clarified, organizations can then focus on identifying and quantifying high-impact AI use cases that align with business objectives and support progression toward the target state. ... The Agentic AI Operating Model defines how AI systems, people, and processes work together to deliver value. It focuses on integrating AI into the organization’s core operations, ensuring that AI agents operate seamlessly within new and existing workflows and alongside human teams.



Daily Tech Digest - December 30, 2024


Quote for the day:

"No person can be a great leader unless he takes genuine joy in the successes of those under him." -- W. A. Nance

Top Considerations To Keep In Mind When Designing Your Enterprise Observability Framework

Observability goes beyond traditional monitoring tools, offering a holistic approach that aggregates data from diverse sources to provide actionable insights. While Application Performance Monitoring (APM) once sufficed for tracking application health, the increasing complexity of distributed, multi-cloud environments has made it clear that a broader, more integrated strategy is essential. Modern observability frameworks now focus on real-time analytics, root cause identification, and proactive risk mitigation. ... Business optimization and cloud modernization often face resistance from teams and stakeholders accustomed to existing tools and workflows. To overcome this, it’s essential to clearly communicate the motivations behind adopting a new observability strategy. Aligning these motivations with improved customer experiences and demonstrable ROI helps build organizational buy-in. Stakeholders are more likely to support changes when the outcomes directly benefit customers and contribute to business success. ... Enterprise observability systems must manage vast volumes of data daily, enabling near real-time analysis to ensure system reliability and performance. While this task can be costly and complex, it is critical for maintaining operational stability and delivering seamless user experiences.


Blown the cybersecurity budget? Here are 7 ways cyber pros can save money

David Chaddock, managing director, cybersecurity, at digital services firm West Monroe, advises CISOs to start by ensuring or improving their cyber governance to “spread the accountability to all the teams responsible for securing the environment.” “Everyone likes to say that the CISO is responsible and accountable for security, but most times they don’t own the infrastructure they’re securing or the budget for doing the maintenance, they don’t have influence over the applications with the security vulnerabilities, and they don’t control the resources to do the security work,” he says. ... Torok, Cooper and others acknowledge that implementing more automation and AI capabilities requires an investment. However, they say the investments can deliver returns (in increased efficiencies as well as avoided new salary costs) that exceed the costs to buy, deploy and run those new security tools. ... Ulloa says he also saves money by avoiding auto-renewals on contracts, ensuring he can negotiate with vendors before inking the next deal. He acknowledges missing one contract set to auto-renew and getting stuck with a 54% increase. “That’s why you have to have a close eye on those renewals,” he adds.


7 Key Data Center Security Trends to Watch in 2025

Historically, securing both types of environments in a unified way was challenging because cloud security tools worked differently from the on-prem security solutions designed for data centers. Hybrid cloud frameworks, however, are helping to change this. They offer a consistent way of enforcing access controls and monitoring for security anomalies across both public cloud environments and workloads hosted in private data centers. Building a hybrid cloud to bring consistency to security and other operations is not a totally new idea. ... Edge data centers can help to boost workload performance by locating applications and data closer to end-users. But they also present some unique security challenges, due especially to the difficulty of ensuring physical security for small data centers in areas that lack traditional physical security protections. Nonetheless, as businesses face ever greater pressure to optimize performance, demand for edge data centers is likely to grow, and with it, investment in security solutions for edge data centers. ... Traditionally, data center security strategies hinged on establishing a strong perimeter and relying on it to prevent unauthorized access to the facility.
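As a toy illustration of the "one policy, two environments" idea behind such frameworks, the sketch below evaluates the same hypothetical policy against a cloud workload and an on-prem workload; no vendor's schema is implied.

```python
# A toy sketch of enforcing a single security policy consistently across
# hybrid environments. The workload fields and policy shape are hypothetical.
POLICY = {
    "require_encryption": True,
    "allowed_ports": {443},
}

workloads = [
    {"name": "billing-api", "env": "public-cloud", "encrypted": True,  "open_ports": {443}},
    {"name": "legacy-erp",  "env": "on-prem",      "encrypted": False, "open_ports": {443, 8080}},
]

def violations(workload, policy):
    """Evaluate the same policy regardless of where the workload runs."""
    problems = []
    if policy["require_encryption"] and not workload["encrypted"]:
        problems.append("encryption disabled")
    extra = workload["open_ports"] - policy["allowed_ports"]
    if extra:
        problems.append(f"unexpected open ports: {sorted(extra)}")
    return problems

for w in workloads:
    print(w["name"], f"({w['env']}):", violations(w, POLICY) or "compliant")
```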


What we talk about when we talk about ‘humanness’

Civic is confident enough in its mission to know where to draw the line between people and agglomerations of data. It says that “personhood is an inalienable human right which should not be confused with our digital shadows, which ultimately are simply tools to express that personhood.” Yet, there are obvious cognitive shifts going on in how we as humans relate to machines and their algorithms, and define ourselves against them. In giving an example of how digital identity and digital humanness diverge, Civic notes “AI agents will have a digital identity and may execute actions on behalf of their owners, but themselves may not have a proof of personhood.” The implication is startling: algorithms are now understood to have identities, or to possess the ability to have them. The linguistic framework for how we define ourselves is no longer the exclusive property of organic beings. ... There is a paradox in making the simple fact of being human contingent on the very machines from which we must be differentiated. In a certain respect, asking someone to justify and prove their own fundamental understanding of reality is a kind of existential gaslighting, tugging at the basic notion that the real and the digital are separate realms.


Revolutionizing Oil & Gas: How IIoT and Edge Computing are Driving Real-Time Efficiency and Cutting Costs

Maintenance is a significant expense in oil and gas operations, but IIoT and edge computing are helping companies move from reactive maintenance to predictive maintenance models. By continuously monitoring the health of equipment through IIoT sensors, companies can predict failures before they happen, reducing costly unplanned shutdowns. ... In an industry where safety is paramount, IIoT and edge computing also play a critical role in mitigating risks to both personnel and the environment. Real-time environmental monitoring, such as gas leak detection or monitoring for unsafe temperature fluctuations, can prevent accidents and minimize the impact of any potential hazards. Consider the implementation of smart sensors that monitor methane leaks at offshore rigs. By analyzing this data at the edge, systems can instantly notify operators if any leaks exceed safe thresholds. This rapid response helps prevent harmful environmental damage and potential regulatory fines while also protecting workers’ safety. ... Scaling oil and gas operations while maintaining performance is often a challenge. However, IIoT and edge computing’s ability to decentralize data processing makes it easier for companies to scale up operations without overloading their central servers. 
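A minimal sketch of that edge-side pattern, assuming a hypothetical methane sensor feed and an illustrative alert threshold, might look like this:

```python
# Illustrative sketch of edge threshold detection; the sensor feed is a
# stand-in and the ppm threshold is invented, not an industry standard.
METHANE_ALERT_PPM = 1000  # hypothetical alert threshold

def read_sensor():
    """Stand-in for polling an IIoT methane sensor at the rig."""
    yield from [250, 400, 1250, 300]  # ppm samples

def monitor():
    # Analysis happens at the edge, so an alert does not wait on a
    # round trip to a central data center.
    for ppm in read_sensor():
        if ppm > METHANE_ALERT_PPM:
            print(f"ALERT: methane at {ppm} ppm exceeds safe threshold")
        else:
            print(f"ok: {ppm} ppm")

monitor()
```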


Gain Relief with Strategic Secret Governance

Incorporating NHI management into cybersecurity strategy provides comprehensive control over cloud security. This approach enables businesses to substantially reduce the risk of security breaches and data leaks, creating a sense of relief in our increasingly digital age. With cloud services growing rapidly, the need for effective NHI and secrets management is more critical than ever. A study by IDC predicts that by 2025 there will be a three-fold increase in data volumes in the digital universe, with 49% of that data residing in the cloud. NHI management is not limited to a single industry or department: it is applicable across financial services, healthcare, travel, DevOps, and SOC teams, and any organization working in the cloud can benefit from this strategic approach. As businesses continue to digitize, NHIs and secrets management become increasingly relevant. Managing these elements effectively relieves businesses of an otherwise overwhelming stream of cyber threats, offering a more secure, efficient, and compliant operational environment.
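As one concrete slice of secrets governance, here is a minimal sketch, with a hypothetical inventory and rotation window, that flags machine-identity secrets overdue for rotation:

```python
# Illustrative only: the inventory records, owners, and 90-day rotation
# window are hypothetical, not part of any specific NHI platform.
from datetime import datetime, timedelta

ROTATION_WINDOW = timedelta(days=90)

secrets_inventory = [
    {"owner": "ci-pipeline-bot", "created": datetime(2024, 6, 1)},
    {"owner": "billing-service", "created": datetime(2024, 12, 1)},
]

def stale_secrets(inventory, now=None):
    """Return secrets whose age exceeds the rotation window."""
    now = now or datetime(2024, 12, 30)
    return [s for s in inventory if now - s["created"] > ROTATION_WINDOW]

for s in stale_secrets(secrets_inventory):
    print(f"rotate secret for {s['owner']}: created {s['created']:%Y-%m-%d}")
```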


Five breakthroughs that make OpenAI’s o3 a turning point for AI — and one big challenge

OpenAI’s o3 model introduces a new capability called “program synthesis,” which enables it to dynamically combine things that it learned during pre-training—specific patterns, algorithms, or methods—into new configurations. These things might include mathematical operations, code snippets, or logical procedures that the model has encountered and generalized during its extensive training on diverse datasets. Most significantly, program synthesis allows o3 to address tasks it has never directly seen in training, such as solving advanced coding challenges or tackling novel logic puzzles that require reasoning beyond rote application of learned information. ... One of the most groundbreaking features of o3 is its ability to execute its own Chains of Thought (CoTs) as tools for adaptive problem-solving. Traditionally, CoTs have been used as step-by-step reasoning frameworks to solve specific problems. OpenAI’s o3 extends this concept by leveraging CoTs as reusable building blocks, allowing the model to approach novel challenges with greater adaptability. Over time, these CoTs become structured records of problem-solving strategies, akin to how humans document and refine their learning through experience. This ability demonstrates how o3 is pushing the frontier in adaptive reasoning.
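To make the notion of program synthesis tangible, the toy sketch below composes small learned primitives into a program that fits previously unseen input-output examples. It illustrates the idea only and says nothing about how o3 actually works internally.

```python
# A deliberately simple toy: search over compositions of known primitives
# until one is consistent with the examples. All names are hypothetical.
from itertools import permutations

primitives = {  # "things learned during pre-training", in miniature
    "double": lambda x: x * 2,
    "increment": lambda x: x + 1,
    "negate": lambda x: -x,
}

def synthesize(examples, depth=2):
    """Find a composition of primitives consistent with the examples."""
    for combo in permutations(primitives.items(), depth):
        names = [n for n, _ in combo]
        def program(x, fs=[f for _, f in combo]):
            for f in fs:
                x = f(x)
            return x
        if all(program(i) == o for i, o in examples):
            return names, program
    return None, None

# A task never seen as such: f(x) = 2x + 1
names, prog = synthesize([(1, 3), (2, 5), (5, 11)])
print("found:", " -> ".join(names), "| f(10) =", prog(10))
```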


Multitenant data management with TiDB

The foundation of TiDB’s architecture is its distributed storage layer, TiKV. TiKV is a transactional key-value storage engine that shards data into small chunks, each represented as a split. Each split is replicated across multiple nodes in the cluster using the Raft consensus algorithm to ensure data redundancy and fault tolerance. The sharding and resharding processes are handled automatically by TiKV, operating independently from the application layer. This automation eliminates the operational complexity of manual sharding—a critical advantage especially in complex, multitenant environments where manual data rebalancing would be cumbersome and error-prone. ... In a multitenant environment, where a single component failure could affect numerous tenants simultaneously, high availability is critical. TiDB’s distributed architecture directly addresses this challenge by minimizing the blast radius of potential failures. If one node fails, others take over, maintaining continuous service across all tenant workloads. This is especially important for business-critical applications where uptime is non-negotiable. TiDB’s distributed storage layer ensures data redundancy and fault tolerance by automatically replicating data across multiple nodes.
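The sketch below illustrates range-based sharding in miniature: a sorted keyspace is cut at split points and each key routes to a chunk. The split points and keys here are invented, and in TiKV this routing and rebalancing happen automatically rather than in application code.

```python
# Conceptual sketch of range-based sharding; this is not TiKV code.
import bisect

SPLIT_POINTS = ["g", "n", "t"]  # boundaries dividing the keyspace into 4 chunks

def chunk_for(key: str) -> int:
    """Route a key to the chunk that owns its range of the keyspace."""
    return bisect.bisect_right(SPLIT_POINTS, key)

for key in ["apple", "grape", "melon", "zebra"]:
    print(f"{key!r} -> chunk {chunk_for(key)}")
```

Each such chunk can then be replicated independently via Raft and moved between nodes to rebalance load.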


Deconstructing DevSecOps

Time and again I am reminded that there is a limit to how far collaboration can take a team. This can be because another team limits how many resources it is willing to allocate, or because it is incapable of contributing regardless of the resources offered. This is often the case with cyber teams that haven't restructured or adapted the training of their personnel to support DevSecOps. Too often these teams are staffed with policy wonks who will happily redirect you to the help desk instead of assisting anyone. Another huge problem is the tooling ecosystem itself. While DevOps has an embarrassment of riches in open source tooling, DevSecOps instead has an endless number of licensing fees awaiting. Worse yet, many of these tools are designed only to catch common security issues in code. This is still better than nothing, but it is pretty underwhelming when you are responsible for remediating the sheer number of redundant (or duplicate) findings that have no bearing. Once an organization begins to implement DevSecOps, things can quickly spiral: the organization can no longer determine what counts as acceptable risk, and at that point any rapid prototyping capability simply will not be allowed.
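One small mitigation for the redundant-findings problem is deduplicating merged scanner output before it reaches whoever is responsible for remediation. A minimal sketch, with hypothetical finding fields, follows:

```python
# Illustrative sketch: collapse duplicate findings from multiple scanners
# by a simple fingerprint. The records and field names are hypothetical.
findings = [  # merged output from several hypothetical scanners
    {"rule": "SQLI-01", "file": "app/db.py", "line": 42, "tool": "scannerA"},
    {"rule": "SQLI-01", "file": "app/db.py", "line": 42, "tool": "scannerB"},
    {"rule": "XSS-07",  "file": "web/views.py", "line": 10, "tool": "scannerA"},
]

def dedupe(findings):
    """Keep one finding per (rule, file, line) fingerprint."""
    seen, unique = set(), []
    for f in findings:
        fp = (f["rule"], f["file"], f["line"])
        if fp not in seen:
            seen.add(fp)
            unique.append(f)
    return unique

print(f"{len(findings)} raw -> {len(dedupe(findings))} unique findings")
```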


Machine identities are the next big target for attackers

“Attackers are now actively exploring cloud native infrastructure,” said Kevin Bocek, Chief Innovation Officer at Venafi, a CyberArk Company. “A massive wave of cyberattacks has now hit cloud native infrastructure, impacting most modern application environments. To make matters worse, cybercriminals are deploying AI in various ways to gain unauthorized access and exploiting machine identities using service accounts on a growing scale. The volume, variety and velocity of machine identities are becoming an attacker’s dream.” ... “There is huge potential for AI to transform our world positively, but it needs to be protected,” Bocek continues. “Whether it’s an attacker sneaking in and corrupting or even stealing a model, a cybercriminal impersonating an AI to gain unauthorized access, or some new form of attack we have not even thought of, security teams need to be on the front foot. This is why a kill switch for AI – based on the unique identity of individual models being trained, deployed and run – is more critical than ever.” ... 83% think having multiple service accounts also creates a lot of added complexity, but most (91%) agree that service accounts make it easier to ensure that policies are uniformly defined and enforced across cloud native environments.



Quote for the day:

"Don't wait for the perfect moment take the moment and make it perfect." -- Aryn Kyle