
Daily Tech Digest - January 19, 2025

Service as Software: How AI Agents Are Transforming SaaS

SaaS empowered users across industries by providing the tools and intelligence to make informed decisions. But it has always stopped short of execution. Lawyers, radiologists, tax consultants, and other service providers rely on SaaS to make decisions, but they remain responsible for the last-mile activity. Service as Software closes this gap. Agents powered by capable LLMs and integrated with existing APIs — and even SaaS platforms — don’t just inform users; they take action on their behalf. Instead of providing tools for human service providers, Service as Software directly delivers outcomes. This transformation is more than technological — it’s economic. ... Enterprises considering a transition from SaaS to Service as Software often begin by examining which tasks would yield the most value from automation. These tasks are typically repetitive, time-sensitive, or error-prone when conducted manually. Introducing an intelligent agent that can monitor data streams, evaluate decision rules and initiate final actions may require augmenting existing infrastructure — for instance, adding webhooks, implementing new API endpoints, or integrating a rules engine.
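To make that pattern concrete, here is a minimal sketch assuming a hypothetical invoice-approval workflow: an agent polls a data stream, applies a decision rule, and triggers the final action through an API. The endpoints, the threshold rule, and the field names are all illustrative assumptions, not any specific product's API.

```python
import time

import requests

# Hypothetical endpoints -- illustrative assumptions, not a real product's API.
STREAM_URL = "https://example.com/api/invoices/pending"
ACTION_URL = "https://example.com/api/invoices/{id}/approve"

def decision_rule(invoice: dict) -> bool:
    """Example rule: auto-approve small invoices from vetted vendors."""
    return invoice["amount"] < 500 and invoice["vendor_vetted"]

def run_agent(poll_seconds: int = 60) -> None:
    """Monitor the stream, evaluate each item, and execute the last-mile action."""
    while True:
        for invoice in requests.get(STREAM_URL, timeout=10).json():
            if decision_rule(invoice):
                # Instead of surfacing a recommendation, the agent acts directly.
                requests.post(ACTION_URL.format(id=invoice["id"]), timeout=10)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_agent()
```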


Anthropomorphizing AI: Dire consequences of mistaking human-like for human have already emerged

Perhaps the most dangerous aspect of anthropomorphizing AI is how it masks the fundamental differences between human and machine intelligence. While some AI systems excel at specific types of reasoning and analytical tasks, the large language models (LLMs) that dominate today’s AI discourse — and that we focus on here — operate through sophisticated pattern recognition. These systems process vast amounts of data, identifying and learning statistical relationships between words, phrases, images and other inputs to predict what should come next in a sequence. When we say they “learn,” we’re describing a process of mathematical optimization that helps them make increasingly accurate predictions based on their training data. ... One critical area where anthropomorphizing creates risk is content generation and copyright compliance. When businesses view AI as capable of “learning” like humans, they might incorrectly assume that AI-generated content is automatically free from copyright concerns. ... One of the most concerning costs is the emotional toll of anthropomorphizing AI. We see increasing instances of people forming emotional attachments to AI chatbots, treating them as friends or confidants.
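The scale is vastly different, but the mechanism described here can be seen in miniature: the toy bigram model below "learns" nothing but co-occurrence counts from its training data, then predicts the most probable next word. Nothing in it understands anything; it is mathematical optimization over observed sequences, which is the point.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Learning" here is just counting which word follows which.
bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely continuation."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- a prediction, not comprehension
```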


Building Secure Software - Integrating Security in Every Phase of the SDLC

A common problem in software development is that security-related activities are left out or deferred until the final testing phase, which is too late in the SDLC, after most of the critical design and implementation has been completed. Moreover, the security checks performed during the testing phase can be superficial, limited to scanning and penetration testing, which might not reveal more complex security issues. By adopting the shift-left principle, teams are able to detect and fix security flaws early on, save money that would otherwise be spent on costly rework, and have a better chance of avoiding delays going into production. Integrating security into the SDLC should look like weaving rather than stacking. There is no “security phase,” but rather a set of best practices and tools that should be included within the existing phases of the SDLC. A Secure SDLC requires adding security review and testing at each software development stage, from design to development to deployment and beyond. From initial planning to deployment and maintenance, embedding security practices ensures the creation of robust and resilient software.
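As one hedged example of what weaving security into the development phase can look like, the sketch below runs static analysis and a dependency audit on every commit rather than at the end. It assumes the open-source scanners bandit and pip-audit are installed and that application code lives in src/; substitute whatever tools your pipeline already uses.

```python
"""A minimal shift-left gate: run security scanners on every commit instead
of deferring them to a final testing phase. Assumes `bandit` (SAST) and
`pip-audit` (dependency audit) are installed."""
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src/", "-ll"],  # static analysis of our own code
    ["pip-audit"],                    # known-vulnerable dependencies
]

def main() -> int:
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print("Security gate failed; fix findings before merging.")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```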


Making AI greener starts with smarter data center design

There’s been a lot of talk about the off-grid energy investments of hyperscalers. But the energy efficiency of AI infrastructure also has a big role to play. Nokia provides networking connectivity inside and between data centers, as well as between end users and data center applications. Understanding this intricate web is important as it’s not just about making the processes inside a data center faster and more efficient. It’s about making the entire journey between somebody making an AI request—and getting back a response—quick, secure, and more energy efficient. ... Energy, performance, and cost considerations may prompt some cloud providers to build their data centers in remote locations with access to clean energy, passive cooling, and cheaper and more plentiful real estate. However, data sovereignty laws, security concerns, and the ultra-low latency requirements of industrial applications may see a move toward more distributed cloud computing, with AI workloads moving closer to the end user. This would likely lead to more regional, metropolitan, and edge data centers, with some businesses and organizations opting for on-site data centers for mission-critical functions. We may, in fact, see both trends at the same time.


Employees Enter Sensitive Data Into GenAI Prompts Far Too Often

"Utilizing AI for the sake of using AI is destined to fail," said Kris Bondi, CEO and co-founder of Mimoto, in an emailed statement to Dark Reading. "Even if it gets fully implemented, if it isn't serving an established need, it will lose support when budgets are eventually cut or reappropriated." Though Kowski believes that not incorporating GenAI is risky, success can still be achieved, he notes. "Success without AI is still achievable if a company has a compelling value proposition and strong business model, particularly in sectors like engineering, agriculture, healthcare, or local services where non-AI solutions often have greater impact," he said. If organizations do want to pursue incorporating GenAI tools but want to mitigate the high risks that come along with it, the researchers at Harmonic have recommendations on how to best approach this. The first is to move beyond "block strategies" and implement effective AI governance, including deploying systems to track input into GenAI tools in real time, identifying what plans are in use and ensuring that employees are using paid plans for their work and not plans that use inputted data to train systems, gaining full visibility over these tools, sensitive data classification, creating and enforcing workflows, and training employees on best practices and risks of responsible GenAI use.


What is Blue Ocean Strategy? 3 Key Ways to Build a Business in an Uncontested Market

One of the biggest surprises in tackling a neglected market segment is realizing that your future customers might not even know they need you. They may sense a vague discomfort or carry a subconscious worry, but they haven't articulated the problem in a way that translates into action. In my field, most people didn't fully appreciate how complex certain end-of-life tasks could become — until they found themselves in the middle of a crisis they never prepared for. Simply presenting a solution and hoping people will connect the dots doesn't work when the underlying problem is hidden or poorly understood. Education became my most potent tool. ... Building momentum in a market with no clear precedent means learning to paddle in still waters. I needed to constantly fine-tune the product based on authentic customer feedback, invest the time and effort to educate potential users so they could recognize the value of what I was offering, and craft a holistic experience that viewed their challenges from multiple angles. These three strategies became the bedrock of my approach to Blue Ocean markets. 


Secure AI? Dream on, says AI red team

The first step in an AI red teaming operation is to determine which vulnerabilities to target, they said. They suggest: “starting from potential downstream impacts, rather than attack strategies, makes it more likely that an operation will produce useful findings tied to real world risks. After these impacts have been identified, red teams can work backwards and outline the various paths that an adversary could take to achieve them.” ... The two, the authors said, are distinct yet “both useful and can even be complementary. In particular, benchmarks make it easy to compare the performance of multiple models on a common dataset. AI red teaming requires much more human effort but can discover novel categories of harm and probe for contextualized risks.” ... The bottom line here: RAI harms are more ambiguous than security vulnerabilities, and it all has to do with “fundamental differences between AI systems and traditional software.” Most AI safety research, the authors noted, focuses on adversarial users who deliberately break guardrails, when in truth, they maintained, benign users who accidentally generate harmful content are just as important, if not more so.


New AI Architectures Could Revolutionize Large Language Models

For context, transformer architecture, the technology that gave ChatGPT the 'T' in its name, is designed for sequence-to-sequence tasks such as language modeling, translation, and image processing. Transformers rely on “attention mechanisms,” which weigh how important each token is in a given context, to model dependencies between input tokens, enabling them to process data in parallel rather than sequentially like so-called recurrent neural networks—the dominant technology in AI before transformers appeared. This technology gave models context understanding and marked a before-and-after moment in AI development. ... Google Research's Titans architecture takes a different approach to improving AI adaptability. Instead of modifying how models process information, Titans focuses on changing how they store and access it. The architecture introduces a neural long-term memory module that learns to memorize at test time, similar to how human memory works. ... Overall, the era of AI companies bragging about the sheer size of their models may soon be a relic of the past. If this new generation of neural networks gains traction, future models won’t need to rely on massive scale to achieve greater versatility and performance.
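For readers who want the mechanics, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the transformer: each token's output is a weighted mix of every token's value, with weights derived from query-key similarity, and all tokens are processed in parallel. Batching, masking, and multiple heads are omitted.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    # How much each token attends to every other token, all at once.
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V

# Three tokens with four-dimensional representations, processed in parallel.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(attention(Q, K, V).shape)  # (3, 4)
```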


How to Leverage Network Segmentation for Hospitality Sector PCI SSF Compliance

Network segmentation is the process of dividing a computer network into isolated segments or subnetworks, each protected by security controls such as firewalls and access restrictions that limit traffic flow between segments. This isolation helps contain potential security breaches, preventing them from spreading across the entire network. ... In the context of PCI SSF compliance, network segmentation helps hospitality businesses protect sensitive payment card data by limiting access to it. By isolating the Cardholder Data Environment (CDE) from the rest of the network, organizations can reduce the scope of PCI SSF compliance while enhancing their overall security posture. ... By isolating sensitive data, network segmentation reduces the risk of unauthorized access and data breaches. It creates multiple layers of defense, making it more difficult for attackers to reach critical systems. This approach also limits the lateral movement of threats, ensuring that a compromised system does not jeopardize the entire network.
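As a hedged illustration of what such isolation can look like at the firewall layer, the sketch below generates packet-filter rules that drop all traffic into a hypothetical CDE subnet except TLS traffic from a designated application tier. The subnets and port are illustrative assumptions; real deployments would also constrain egress and log denied traffic.

```python
"""Segmentation policy as code: emit iptables rules isolating an assumed
Cardholder Data Environment (CDE) subnet. All addresses are placeholders."""

CDE_SUBNET = "10.0.50.0/24"   # isolated segment holding cardholder data
APP_SUBNET = "10.0.10.0/24"   # the only tier allowed to reach the CDE
ALLOWED_PORTS = [443]         # TLS-only payment traffic

def cde_rules() -> list[str]:
    rules = [
        # Default stance: nothing crosses into the CDE segment.
        f"iptables -A FORWARD -d {CDE_SUBNET} -j DROP",
    ]
    for port in ALLOWED_PORTS:
        # Narrow exception, emitted ahead of the drop rule so it matches first.
        rules.insert(0,
            f"iptables -A FORWARD -s {APP_SUBNET} -d {CDE_SUBNET} "
            f"-p tcp --dport {port} -j ACCEPT")
    return rules

for rule in cde_rules():
    print(rule)
```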


Overcoming Key Challenges in an AI-Centric Future

Much has been made of AI and its potential dangers in the hands of attackers. It’s true—with the help of AI, launching an attack has never been easier, and it’s likely just a matter of time until we witness a significant AI-driven breach. That said, all is not lost. AI-specific security controls are already beginning to emerge, and as AI becomes more commonplace, newer and more advanced solutions will continue to appear. ... Regulations almost always lag behind innovation, and AI is no exception. While a handful of AI regulations have begun to emerge around the world, most organizations are currently taking matters into their own hands by implementing dedicated AI policies to evaluate and control the AI services they use. Right now, those initiatives are focused primarily on maintaining data privacy and preventing AI from making critical errors. These AI safety standards will continue to evolve and will likely be integrated into existing security frameworks, including those put out by independent advisory bodies. Regulators will almost certainly maintain a strong focus on ethical considerations, creating guidelines that help define acceptable and responsible use cases for AI capabilities.



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - January 15, 2025

Passkeys: they're not perfect but they're getting better

Users are largely unsure about the implications for their passkeys if they lose or break their device, as it seems their device holds the entire capability to authenticate. To trust passkeys as a replacement for the password, users need to be prepared and know what to do in the event of losing one – or all – of their devices. ... Passkeys are ‘long life’ because users can’t forget them or create one that is weak, so if they’re done well there should be no need to reset or update them. As a result, there’s an increased likelihood that at some point a user will want to move their passkeys to the Credential Manager of a different vendor or platform. This is currently challenging to do, but FIDO and vendors are actively working to address the issue, and it remains to be seen when support will take hold across the market. ... For passkey-protected accounts, potential attackers are now more likely to focus on finding weaknesses in account recovery and reset requests – whether by email, phone or chat – and pivot to phishing for recovery keys. Providers need to harden these processes sufficiently to prevent trivial abuse and to maintain the security benefits of using passkeys. Users also need to be educated on how to spot and report abuse of these processes before their accounts are compromised.


Securing Payment Software: How the PCI SSF Modular System Enhances Flexibility and Security

The framework was introduced to replace the aging Payment Application Data Security Standard (PA-DSS), which focused primarily on payment application security. As software development technologies and methodologies rapidly evolved, the need for a dynamic and adaptable security standard became increasingly apparent, prompting the creation of the PCI SSF, which encompasses a broader range of security requirements tailored for modern software environments. ... The modular system of the PCI SSF is designed to offer both flexibility and scalability, enabling organizations to address their specific security needs based on their unique software environments. The modular approach allows organizations to select and implement only the components relevant to their software, simplifying the process of achieving and maintaining compliance. ... The PCI SSF’s modular system marks a transformative step in payment software security, balancing adaptability with comprehensive protection against evolving cyber threats. Its flexible, scalable, and comprehensive approach allows organizations to tailor their security efforts to their unique needs, ensuring robust protection for payment data.


The cloud cost wake-up call I predicted

Cloud computing starts as a flexible and budget-friendly option, especially with its enticing pay-per-use model. However, unchecked growth can turn this dream into a financial nightmare due to the complexities the cloud introduces. According to the Flexera State of the Cloud Report, 87% of organizations have adopted multicloud strategies, complicating cost management even more by scattering workloads and expenses across various platforms. The rise of cloud-native applications and microservices has further complicated cost management. These systems abstract physical resources, simplifying development but making costs harder to predict and control. Recent studies have revealed that 69% of CPU resources in container environments go unused, squarely at odds with optimal cost management. Although open-source tools like Prometheus are excellent for tracking usage and spending, they often fall short as organizations scale. ... A critical component of effective cloud cost management is demystifying cloud pricing models. Providers often lay out their pricing structures in great detail, but translating them into actual costs can be difficult. A lack of understanding can lead to spiraling costs.
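As a small illustration of the kind of tracking Prometheus enables, the sketch below asks a Prometheus server how much of the CPU requested by containers is actually being used. The server address is a placeholder, and the metric names assume the common cAdvisor and kube-state-metrics exporters; adjust both for your environment.

```python
"""Hedged sketch: cluster CPU utilization vs. requests via the Prometheus
HTTP API. Metric names assume cAdvisor and kube-state-metrics."""
import requests

PROM = "http://prometheus.example.internal:9090"  # assumed address

QUERY = (
    'sum(rate(container_cpu_usage_seconds_total[5m])) / '
    'sum(kube_pod_container_resource_requests{resource="cpu"})'
)

resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY}, timeout=10)
result = resp.json()["data"]["result"]
if result:
    utilization = float(result[0]["value"][1])
    # A value near 0.31 would mirror the 69%-unused finding cited above.
    print(f"CPU used vs. requested: {utilization:.0%}")
```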


Using cognitive diversity for stronger, smarter cyber defense

Cognitive biases significantly influence decision-making during cybersecurity incidents by framing how individuals interpret information, assess risks, and respond to threats. ... Integrating cognitive science into cybersecurity tools involves understanding how human cognitive processes – such as perception, memory, decision-making, and problem-solving – affect security tasks. Designing user-friendly tools requires aligning cognitive models with diverse user behaviors while managing cognitive load, ensuring usability without compromising security, and adapting to the fast-changing cybersecurity landscape. Interfaces must cater to varying skill levels, promote awareness, and support effective decision-making, all while addressing ethical considerations like privacy and bias. Interdisciplinary collaboration between psychology, computer science, and cybersecurity experts is essential but challenging due to differences in expertise and communication styles. ... Cognitive diversity can also divert resources or distract from immediate or emerging threats. Focus on the scenarios most likely to occur, and implement low-cost defensive measures first while more complex measures are prioritized.


Next-gen Ethernet standards set to move forward in 2025

Beyond the big-ticket items of higher bandwidth and AI, a key activity in any year for Ethernet is interoperability testing for all manner of existing and emerging specifications. 200 Gigabits per second per lane is an important milestone on the path to an even higher bandwidth Ethernet specification that will exceed 1 Terabit per second. ... With 800GbE now firmly established, adoption and expansion into ever larger bandwidth will be a key theme in 2025. There will be no shortage of vendors offering 800 GbE equipment in 2025, but when it comes to Ethernet standards, focus will be on 1.6 Terabits/second Ethernet. “As 800GbE has come to market, the next speed for Ethernet is being talked about already,” Martin Hull, vice president and general manager for cloud and AI platforms at Arista Networks, told Network World. “1.6Tb Ethernet is being discussed in terms of the optics, the form factors and use cases, and we expect industry leaders to be trialing 1.6T systems towards the end of 2025.” ... “High-speed computing requires high bandwidth and reliable interconnect solutions,” Rodgers said. “However, high-speed also means high power and higher heat, placing more demands on the electrical grid and resources and creating a demand for new options.” That’s where LPOs will fit in.


Stop wasting money on ineffective threat intelligence: 5 mistakes to avoid

“CTI really needs to fall underneath your risk management and if you don’t have a risk management program you need to identify that (as a priority),” says Ken Dunham, cyber threat director for the Qualys Threat Research Unit. “It really should come down to: what are the core things you’re trying to protect? Where are your crown jewels or your high value assets?” Without risk management to set those priorities, organizations will not be able to set appropriate requirements for intelligence collection, requirements that focus gathering on sources relevant to their most valuable assets. ... Bad intelligence can often be worse than none, leading to a lot of time wasted by analysts to validate and contextualize poor-quality feeds. Even worse, if this work isn’t done appropriately, poor-quality data could lead to misguided choices at the operational or strategic level. Security leaders should task their intelligence team with regularly reviewing the usefulness of their sources based on a few key attributes. ... Even if CTI is doing an excellent job collecting the right kind of quality intelligence that its stakeholders are asking for, all that work can go for naught if it isn’t appropriately routed to the people who need it — in the format that makes sense for them.


Exposure Management: A Strategic Approach to Cyber Security Resource Constraint

XM is a proactive and integrated approach that provides a comprehensive view of potential attack surfaces and prioritises security actions based on an organisation’s specific context. It’s a process that combines cloud security posture, identity management, internal hosts, internet-facing hosts and threat intelligence into a unified framework, enabling security teams to anticipate potential attack vectors and fortify their defences effectively. Unlike traditional security measures, XM takes an “outside-in” approach, assessing how attackers might exploit vulnerabilities across interconnected systems. This shift in mindset is crucial for identifying and prioritising the most significant threats. By focusing on the most critical vulnerabilities and potential attack paths, XM allows security teams to allocate resources more efficiently and enhance their overall security posture. ... By providing a unified view of the entire attack path, XM improves an organisation’s ability to manage security risks. This unified view allows security teams to understand how vulnerabilities can be exploited and to prioritise those that pose the greatest risk. Security teams can then allocate resources efficiently and focus on the threats with the most significant impact on business operations.


How GenAI is Exposing the Limits of Data Centre Infrastructure

Energy-intensive Graphics Processing Units (GPUs) that power AI platforms require five to 10 times more energy than Central Processing Units (CPUs) because of their larger number of transistors. This is already impacting data centres. There are also new, cost-effective design methodologies incorporating features such as 3D silicon stacking, which allows GPU manufacturers to pack more components into a smaller footprint. This again increases the power density, meaning data centres need more energy and create more heat. Another trend running in parallel is a steady fall in TCase (or Case Temperature) in the latest chips. TCase is the maximum safe temperature for the surface of chips such as GPUs. It is a limit set by the manufacturer to ensure the chip will run smoothly and not overheat or require throttling, which impacts performance. On newer chips, TCase is coming down from 90 to 100 degrees Celsius to 70 or 80 degrees, or even lower. This is further driving the demand for new ways to cool GPUs. As a result of these factors, air cooling is no longer doing the job when it comes to AI. It is not just the power of the components, but the density of those components in the data centre. Unless servers become three times bigger than they were before, more efficient heat removal is needed.
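A back-of-the-envelope calculation shows why density, not just total power, breaks air cooling; all the figures below are illustrative assumptions rather than vendor specifications.

```python
# Back-of-the-envelope rack heat math; every figure here is an assumption.
cpu_server_watts = 700     # assumed conventional 2-CPU server
gpu_server_watts = 7_000   # assumed 8-GPU AI server (roughly 10x, per the article)
servers_per_rack = 10

cpu_rack_kw = cpu_server_watts * servers_per_rack / 1_000
gpu_rack_kw = gpu_server_watts * servers_per_rack / 1_000

# Essentially all electrical power becomes heat that cooling must remove.
print(f"CPU rack: {cpu_rack_kw:.0f} kW of heat")  # ~7 kW: air cooling copes
print(f"AI rack:  {gpu_rack_kw:.0f} kW of heat")  # ~70 kW: beyond typical air cooling
```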


The Configuration Crisis and Developer Dependency on AI

As our IT infrastructure grows ever more modular, layered and interconnected, we deal with myriad configurable parts — each one governed by a dense thicket of settings. All of our computers — whether in our pockets, on our desks or in the cloud — have a bewildering labyrinth of components with settings to discover and fiddle with, both individually and in combination. ... A couple of strategies I’ve mentioned before bear repeating. One is the use of screenshots, which are now a powerful index in the corpus of synthesized knowledge. Like all forms of web software, the cloud platforms’ GUI consoles present a haphazard mix of UX idioms. A maneuver that is conceptually the same across platforms will often be expressed using very different affordances. AIs are pattern recognizers that can help us see and work with the common underlying patterns.


From project to product: Architecting the future of enterprise technology

Modern enterprise architecture requires thinking like an urban planner rather than a building inspector. This means creating environments that enable innovation while ensuring system integrity and sustainability. ... Just as urban planners need to develop a shared vocabulary with city officials, developers and citizens, enterprise architects must establish a common language that bridges technical and business domains. Complex ideas that remain purely verbal often get lost or misunderstood. Documentation and diagrams transform abstract discussions into something tangible. By articulating fitness functions — automated tests tied to specific quality attributes like reliability, security or performance — teams can visualize and measure system qualities that align with business goals. ... Technology governance alone will often just inform you of capability gaps, tech debt and duplication — this could be too late! Enterprise architects must shift their focus to business enablement. This is much more proactive in understanding the business objectives and planning and mapping the path for delivery. ... Just as cities must evolve while preserving their essential character, modern enterprise architecture requires built-in mechanisms for sustainable change. 
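To make the fitness-function idea concrete, here is a minimal sketch of one such automated test, tied to a performance quality attribute: it fails the build whenever a service's 95th-percentile latency drifts past a budget agreed with the business. The endpoint, sample size, and budget are illustrative assumptions.

```python
"""A minimal fitness-function sketch: an automated test tied to a
performance quality attribute. Endpoint, samples, and budget are assumed."""
import statistics
import time

import requests

SERVICE_URL = "https://orders.example.internal/health"  # assumed endpoint
P95_BUDGET_MS = 300  # latency budget agreed with the business

def p95_latency_ms(samples: int = 20) -> float:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(SERVICE_URL, timeout=5)
        timings.append((time.perf_counter() - start) * 1_000)
    return statistics.quantiles(timings, n=100)[94]  # 95th percentile

def test_latency_fitness() -> None:
    """Fails the build when the system drifts past its latency budget."""
    assert p95_latency_ms() <= P95_BUDGET_MS
```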



Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein