Daily Tech Digest - February 06, 2024

Championing privacy-first security: Harmonizing privacy and security compliance

When security solutions are crafted with privacy as a central consideration, organisations can deploy robust security measures while safeguarding the personal data of their customers and employees. A comprehensive cost-benefit analysis reveals significant advantages in adopting a privacy-first approach to security. For instance, proactively blocking malware before it infiltrates an organisation’s systems can avert a potential data breach. Given that the average data breach cost US$4.45 million in 2023, and factoring in the damage to brand reputation and the legal ramifications, preventing even a single breach is paramount for any company. Hence, the importance of industry-leading security measures is indisputable. Any reputable security company should provide solutions that limit its access to sensitive data and ensure the protection of the personal data entrusted to its care. ... A privacy-first security program assesses the risks associated with both implementing and not implementing security measures. If the advantages of deploying a security solution, such as email scanning, outweigh the drawbacks – which is highly probable – the organisation should proceed with the careful implementation of this capability.
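The cost-benefit reasoning the excerpt describes can be sketched numerically. The control cost and breach probabilities below are illustrative assumptions; only the US$4.45 million figure comes from the 2023 average breach cost cited above.

```python
# Toy expected-cost comparison for deploying a security control such as
# email scanning. All inputs except AVG_BREACH_COST are illustrative
# assumptions, not figures from the article.

AVG_BREACH_COST = 4_450_000  # average cost of a data breach in 2023 (US$)

def expected_annual_loss(breach_probability, breach_cost=AVG_BREACH_COST):
    """Expected annual loss = probability of a breach * cost of a breach."""
    return breach_probability * breach_cost

def should_deploy(control_cost, p_without, p_with):
    """Deploy the control if the reduction in expected loss exceeds its cost."""
    risk_reduction = expected_annual_loss(p_without) - expected_annual_loss(p_with)
    return risk_reduction > control_cost

# Example: a $50k/year control that cuts breach probability from 5% to 1%
print(should_deploy(control_cost=50_000, p_without=0.05, p_with=0.01))  # True
```

Under these assumed numbers the control reduces expected loss by US$178,000 a year, comfortably exceeding its cost – the "advantages outweigh the drawbacks" test the excerpt describes.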

Far Memory Unleashed: What Is Far Memory?

Far memory is a memory tier between DRAM and Flash that has a lower cost per GB than DRAM and higher performance than Flash. Far memory works by disaggregating memory and allowing nodes or machines to access the memory of a remote node/machine via Compute Express Link (CXL). Memory is the most contested and least elastic resource in a data center. Currently, servers can only use local memory, which may be scarce on the local system but abundant on other underutilized servers. With far memory, local machines can use a remote machine’s memory. By introducing far memory into the memory tier and moving less frequently accessed data to it, the system can perform efficiently with less DRAM and reduce the total cost of ownership. Far memory uses a remote machine’s memory as a swap device, either by using idle machines or by building memory appliances whose sole purpose is to provide a pool of memory shared by many servers. This approach optimizes memory usage and reduces over-provisioning. However, far memory also has its own challenges. Swapping out memory pages to remote machines enlarges the failure domain of each machine, which can lead to a catastrophic failure of the entire cluster.
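The demote-cold-pages / swap-in-on-access idea can be illustrated with a toy two-tier store. This is a minimal sketch of the concept, not a real CXL or kernel swap implementation: "local" and "far" are just in-process dictionaries standing in for local DRAM and a remote memory pool.

```python
# Toy two-tier memory: hot pages live in a small "local DRAM" store; when
# local capacity is exceeded, the least recently used page is demoted to
# the "far memory" pool, and accesses to far pages swap them back in.
from collections import OrderedDict

class TieredMemory:
    def __init__(self, local_capacity):
        self.local = OrderedDict()   # fast, scarce local DRAM (LRU-ordered)
        self.far = {}                # slower, cheaper far-memory pool
        self.local_capacity = local_capacity

    def write(self, page, data):
        self.far.pop(page, None)     # a written page becomes hot again
        self.local[page] = data
        self.local.move_to_end(page)
        if len(self.local) > self.local_capacity:
            cold_page, cold_data = self.local.popitem(last=False)  # demote LRU
            self.far[cold_page] = cold_data

    def read(self, page):
        if page in self.local:
            self.local.move_to_end(page)   # keep hot pages local
            return self.local[page]
        data = self.far.pop(page)          # "swap in" from the remote pool
        self.write(page, data)
        return data
```

A real system would add the pieces this sketch omits: remote access latency, failure handling when the far node dies (the enlarged failure domain noted above), and a policy for deciding which pages are cold.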

Four reasons your agency's security infrastructure isn't agile enough

There are four key considerations for integrating security architecture effectively in an agile environment - Cross-Functional Collaboration: Security experts must actively engage with developers, testers, and product owners. Collaborating with experts helps create a shared understanding of security requirements and facilitates quick resolution of security-related issues. Embedding security professionals within Agile teams can enhance real-time collaboration and ensure consistent security controls. Security Training and Awareness: Given the rapid pace of an Agile sprint, all team members should be equipped with the knowledge to write secure code. ... Foster a Security Culture: Foster a culture where security is seen as everyone's responsibility, not just the security team's. Adapt the organizational mindset to value security equally with other business objectives. ... Security Champions within Agile Teams: Identify and nurture 'Security Champions' within each Agile team. These individuals with a keen interest in security act as a bridge between the security team and their respective agile teams. They help promote security best practices, ensuring security is not overlooked amidst other technical considerations.

AI Legislation: Enterprise Architecture Guide to Compliance

Artificial intelligence (AI) tools are so easy to leverage that they can be used by anyone within your organization without technical support. This means that you need to keep a careful eye not just on the authorized applications you leverage, but on what AI tools your colleagues could be using without authorization. In leveraging AI tools to generate content for your organization, your employees could unwittingly input private data into the public instance of ChatGPT. Not only does this share that data with ChatGPT's vendor, OpenAI, but it can also be used to train ChatGPT on that content, meaning the AI tool could potentially output that information to another user outside of your organization. Alternatively, overuse of generative AI tools without proper supervision could lead to factual or textual errors being published to your customers. Gen AI tools need careful supervision to ensure they don't "hallucinate" or produce mistakes, as they are unable to self-edit. It's equally important to be able to report back to legislators on what AI is being used across your company, so they can see you're compliant. This will likely become a regulatory requirement in the near future.

Choosing a disaster recovery site

The first option is to set up your own secondary DR data center in a different location from your primary site. Many large enterprises go this route; they build out DR infrastructure that mirrors what they have in production so that, at least in theory, it can take over instantly. The appeal here lies in control. Since you own and operate the hardware, you dictate compatibility, capacity, security controls and every other aspect. You’re not relying on any third party. The downside, of course, lies in cost. All of that redundant infrastructure sitting idle doesn’t come cheap. ... The second approach is to engage an external DR service provider to furnish and manage a recovery site on your behalf. Companies like SunGard built their business around this model. The appeal lies in offloading responsibility. Rather than build out your own infrastructure, you essentially reserve DR data center capacity with the provider. ... The third option for housing your DR infrastructure is leveraging the public cloud. Market leaders like AWS and Azure offer seemingly limitless capacity that can scale to meet even huge demands when disaster strikes.

How CISOs navigate policies and access across enterprises

Simply speaking, if existing network controls are now being moved to the cloud, the scope of technical controls does not drastically differ from legacy approaches. The technology, however, has massively evolved towards platform-centric controls, and for good reason. Isolated controls cause complexity, and if you are moving your perimeter to a hyperscaler, both your users and their devices will no longer be managed by the corporate on-prem security controls either. A good CASB to broker between user and data is key, as is identity and access management. What’s now new is workload protection requirements à la CSAP technology. In addition to the increasing sophistication and number of security threats and successful breaches, most enterprises further increase risk through “rogue IT” teams leveraging cloud environments without the awareness or management of security teams. Cloud environments are typically deployed faster and with less planning and oversight than data center or on-site deployments. Cloud security tools should be an extension of your other on-premises tools for ease of management, consistency of policy enforcement and cost savings by avoiding duplicate purchase commitments, training, and certification.

What to Know About Machine Customers

In the realm of customer service and support, machine customers are like virtual assistants or smart devices (think of Siri or Alexa) that carry out customer service tasks on behalf of actual human customers. Alok Kulkarni, CEO and Co-founder of Cyara, says the emergence of machine customers introduces a new dynamic, requiring organizations to adapt their existing support strategies. “This might include developing specific interfaces and communication channels tailored for interactions with machine customers,” he explains in an email interview. Organizations must create additional self-service options specifically designed for machine customers. “Unlike traditional customer support approaches, catering to machine customers requires a nuanced understanding of their specific needs and operational dynamics,” Kulkarni explains. This means designing self-service interfaces that are not only user-friendly for machines but also align with the intricacies of autonomous negotiation and purchasing processes. These interfaces should empower machine customers to navigate through various stages of transactions autonomously, from product selection to payment processing, ensuring a streamlined and frictionless experience.
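The kind of machine-consumable self-service flow described above can be sketched as follows. Everything here – the catalog, SKUs, field names, and helper functions – is a hypothetical illustration of a structured selection-to-order flow, not any vendor's actual API.

```python
# Minimal sketch of a self-service flow a machine customer could drive
# autonomously: structured product data in, structured confirmation out.
import json

CATALOG = [
    {"sku": "INK-001", "name": "Printer ink", "price_cents": 2999, "in_stock": True},
    {"sku": "INK-002", "name": "Toner", "price_cents": 8999, "in_stock": False},
]

def select_product(max_price_cents):
    """Autonomous selection: cheapest in-stock item within the budget."""
    candidates = [p for p in CATALOG
                  if p["in_stock"] and p["price_cents"] <= max_price_cents]
    return min(candidates, key=lambda p: p["price_cents"]) if candidates else None

def place_order(sku, quantity):
    """Return a structured confirmation the machine customer can parse."""
    return json.dumps({"status": "confirmed", "sku": sku, "quantity": quantity})

choice = select_product(max_price_cents=5000)
if choice:
    print(place_order(choice["sku"], quantity=1))
```

The design point is the one Kulkarni makes: every step returns structured, parseable data rather than prose, so a device or assistant can complete selection through payment without a human in the loop.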

Google: Govs Drive Sharp Growth of Commercial Spyware Cos

Much of the concern has to do with the explosion in the availability of tools and services that allow governments and law enforcement to break into target devices with impunity, harvest information from them, and spy unchecked on victims. The vendors selling these tools — most of which are designed for mobile devices — have often openly pitched their wares as legitimate tools that aid in law enforcement and counter-terrorism efforts. But the reality is that repressive governments have routinely used spyware tools against journalists, activists, dissidents, and opposition party politicians, said Google. The company's report cites three instances of such misuse: one that targeted a human rights defender working with a Mexico-based rights organization; another against an exiled Russian journalist; and the third against the co-founder and director of a Salvadorian investigative news outlet. Google's researchers attribute much of the recent growth in the commercial spyware vendor (CSV) market to strong demand from governments around the world to outsource their need for spyware tools rather than building such advanced persistent threat capabilities in-house.

How To Build Autonomous Agents – The End-Goal for Generative AI

From a technology perspective, there are five elements that go into autonomous agent designs: the agent itself, for processing; tools, for interaction; prompt recipes, for prompting and planning; memory and context, for training and storing data; and APIs / user interfaces, for interaction. The agent at the center of this infrastructure leverages one or more LLMs and the integrations with other services. You can build this integration framework yourself, or you can bring in one of the existing orchestration frameworks that have been created, such as LangChain or LlamaIndex. The framework should provide the low-level foundational model APIs that your service will support. It connects your agent to the resources that you will use as part of your overall agent, including everything from existing databases and external APIs, to other elements over time. It also has to take into account what use cases you intend to deliver with your agent, from chatbots to more complex autonomous tasks. Existing orchestration frameworks can take care of a lot of the heavy lifting involved in managing LLMs, which makes it much easier and faster to build applications or services that use GenAI.
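The five elements listed above can be wired together in a minimal agent loop. This is a sketch only: `fake_llm` is a hypothetical stand-in for a real model call, and a production build would swap it for an LLM integration via a framework such as LangChain or LlamaIndex.

```python
# Minimal agent loop showing the five elements: the agent (run_agent),
# tools (TOOLS), a prompt recipe (PROMPT_RECIPE), memory, and the LLM
# (stubbed by fake_llm, a hypothetical stand-in).

def fake_llm(prompt):
    """Stand-in for an LLM call: picks the next action from the prompt."""
    if "Sunny" in prompt:                 # a tool result is already in memory
        return {"tool": "final_answer", "args": {"text": "It is sunny in Paris."}}
    if "weather" in prompt.lower():
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"tool": "final_answer", "args": {"text": "Done."}}

# Tools: the agent's interface to external services (here, a canned lookup).
TOOLS = {"get_weather": lambda city: f"Sunny in {city}"}

# Prompt recipe: template combining the task with accumulated context.
PROMPT_RECIPE = "You are an agent. Task: {task}. History: {memory}"

def run_agent(task, max_steps=3):
    memory = []                            # memory and context across steps
    for _ in range(max_steps):
        decision = fake_llm(PROMPT_RECIPE.format(task=task, memory=memory))
        if decision["tool"] == "final_answer":
            return decision["args"]["text"]
        result = TOOLS[decision["tool"]](**decision["args"])
        memory.append(result)              # feed tool output into the next step
    return memory[-1] if memory else None
```

The loop is the part orchestration frameworks handle for you: formatting prompts, dispatching tool calls, and threading results back into context on each step.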

How Platform and Site Reliability Engineering Are Evolving DevOps

Actually, failure should not just be OK but welcome. Most organizations are averse to failure, but it’s only through our failures in these spaces that we can learn and grow and figure out how to best position, leverage, and continue to imagine the roles of DevOps, platform engineers, and SRE. I’ve seen this play out in large companies that went all in on DevOps and then realized that they needed a team focused on breaking down any barriers that presented themselves to developers. At scale, DevOps - even with the tools provided by the internally focused platform engineering team - didn’t really cut it. These companies then integrated the SRE function, which filled DevOps’ reliability and scalability gaps. That worked until these companies realized that they were reinventing the wheel - dozens of times. Different engineering teams within the organization were doing things just differently enough - different setups, different processes, different expectations - that they needed separate setups to put out a service. The SREs were seeing all of this after the fact, which led them to circle back to the realization that different teams needed to be using the same development building blocks. Frustrating? Yes. The cost of increasing efficiency in the future? Absolutely.

Quote for the day:

“It’s better to look ahead and prepare, than to look back and regret.” -- Jackie Joyner-Kersee
