
Daily Tech Digest - May 13, 2024

Why AI Won’t Take Over The World Anytime Soon

The majority of AI systems we encounter daily are examples of "narrow AI." These systems are masters of specialization, adept at tasks such as recommending your next movie on Netflix, optimizing your route to avoid traffic jams, or even more complex feats like writing essays or generating images. Despite these capabilities, they operate under strict limitations, designed to excel in a particular arena but incapable of stepping beyond those boundaries. Even the generative AI tools dazzling us with their ability to create content across multiple modalities are no exception. They can draft essays, recognize elements in photographs, and even compose music. However, at their core, these advanced AIs are still just making mathematical predictions based on vast datasets; they do not truly "understand" the content they generate or the world around them. Narrow AI operates within a predefined framework of variables and outcomes. It cannot think for itself, learn beyond what it has been programmed to do, or develop any form of intention. Thus, despite the seeming intelligence of these systems, their capabilities remain tightly confined.


Establishing a security baseline for open source projects

Transparency is in the spirit of open source, and enhancing it within the community is a key goal of our organization. Currently, every OpenSSF project is required to have a security policy that provides clear directions on how vulnerabilities should be reported and how they will be responded to. The security baseline also requires this: the OpenSSF Best Practices Badge program and Scorecard report whether a project has a vulnerability disclosure policy. The badge program's passing level has been used by other Linux Foundation open-source projects as a criterion for becoming generally available. Open-source communities have been pushing the boundaries on SBOMs to increase transparency in both open-source and closed-source software. However, there have been challenges with SBOM consumption due to data quality and interoperability issues. Recently, OpenSSF, along with CISA and DHS S&T, took steps to address this challenge by releasing Protobom, an open-source software supply chain tool that enables all organizations, including system administrators and software development communities, to read and generate SBOMs and file data, as well as translate this data across standard industry SBOM formats.
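Protobom itself is a Go tool, but SBOM consumption is easy to picture. Here is a minimal Python sketch, assuming a CycloneDX-style JSON document (real SBOMs carry far more metadata such as hashes, licenses, and dependency graphs), that reads an SBOM and lists its components:

```python
import json

# A minimal CycloneDX-style SBOM document (illustrative only).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.13"},
    {"type": "library", "name": "zlib", "version": "1.3.1"}
  ]
}
"""

def list_components(doc: str) -> list[str]:
    """Return 'name@version' strings for each component in the SBOM."""
    sbom = json.loads(doc)
    return [f"{c['name']}@{c['version']}" for c in sbom.get("components", [])]

print(list_components(sbom_json))  # ['openssl@3.0.13', 'zlib@1.3.1']
```

A real consumer would also validate the document against the spec and resolve the dependency relationships, which is exactly where the data-quality and interoperability issues mentioned above tend to surface.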


Overcoming Resistance to DevOps Adoption

A main challenge in transitioning from traditional software development approaches is establishing a DevOps culture. For years, development teams have worked in silos, leading to bureaucracy and departmental barriers that hindered agility and collaboration. These teams are required to learn new tools and processes as part of adopting agile development methodologies, creating a cultural shift and resistance to change. Most practitioners have cited cultural change as a barrier to DevOps adoption. Soumik Mukherjee, senior manager, platform engineering (global), Ascendion, said he confronted these challenges by starting small with manageable projects, celebrating early wins to build momentum, and fostering open communication and collaboration across teams. "We invest in upskilling our employees and continuously track progress to identify and address any bottlenecks. By breaking down silos and building a shared understanding, we create a collaborative environment where teams work together efficiently and effectively," Mukherjee said. Debashis Singh, CIO at Persistent Systems, said, "Fostering a DevOps and DevSecOps culture and establishing a clear vision is akin to setting the North Star for everyone in the organization."


Charting India’s AI trajectory: Insights from World Economic Forum

The overarching theme for this year’s WEF focused on “Rebuilding Trust”, though the topic extends beyond fighting corruption in public institutions. Trust is critical to AI. Without trust in AI and its outputs, our goal of transforming economies with AI will be hard to achieve. The foundation of this trust starts with high-fidelity, trusted, and secure input data. We must center security and compliance when developing AI applications to combat these concerns. Adoption will naturally accelerate when leaders can trust that AI applications are secure and compliant. No company can risk missing out on the productivity gains that AI offers. As Sam Altman said at Davos, “[GenAI] will change the world much less than we all think and it will change jobs much less than we all think. We will all operate at a… higher level of abstraction… [and] have access to a lot more capability.” India’s AI journey and progress took the spotlight at WEF, taking center stage in various bustling technology discussions. The country’s unwavering commitment to driving innovation and fostering growth is evident in the many success stories and examples shared at the forum.


Don’t overlook the impact of AI on data management

While many organizations already understand the power of having clean data and clean ways of inputting that data, many fail to grasp that tools ready to help them with this process already exist and are already doing wonders for peers in their industry. One emerging, perhaps surprising, tool for inputting data is the generative AI chatbot. With the advent of gen AI, a new breed of chatbots has emerged, ones that can conduct high-level conversations, resembling human interactions more closely than ever before. Not only can they understand customer queries, but they can also collect data and feed it directly into business systems, efficiently handling forms and personalizing client profiles. Integrating such AI-driven chatbots isn’t just about cutting costs; it’s about revolutionizing customer engagement and driving new insights from every interaction. If the first step is automating data capture, chatbots can directly collect and process data from customers without human intervention. The chatbots can not only collect the data but also use it for cross-selling.
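As a rough, hypothetical sketch of the data-capture step (a real gen AI chatbot would use an LLM rather than regexes, and the field names here are invented), here is how structured profile fields might be pulled from a chat transcript before being written into a business system:

```python
import re

# Hypothetical field extractors: map a profile field to a pattern that
# recognizes it in free-form chat text.
FIELD_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def extract_profile(messages: list[str]) -> dict[str, str]:
    """Collect the first occurrence of each known field from a transcript."""
    profile: dict[str, str] = {}
    for msg in messages:
        for field, pattern in FIELD_PATTERNS.items():
            m = pattern.search(msg)
            if m and field not in profile:
                profile[field] = m.group()
    return profile

chat = [
    "Hi, I'd like a quote for home insurance.",
    "Sure! You can reach me at jane@example.com or +1 555-010-1234.",
]
print(extract_profile(chat))
```

In practice the extracted profile would be validated and pushed through the same APIs a human agent's form submission would use, which is what "feeding data directly into business systems" amounts to.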


Linux backdoor threat is a wake-up call for IoT

This hack should serve as a wake-up call that not every device warrants Linux. Basic devices like sensors or monitors – and, yes, even doorbells – usually serve one function at a time. They can therefore benefit from the resource efficiency and focused functionality of RTOS. In Linux and other general-purpose operating systems, programs are loaded dynamically after boot, often with the ability to run in separate memory and file spaces under different user accounts. This isolation is beneficial when running multiple applications concurrently on a shared server, as one user’s programs cannot interfere with another’s, and hardware access is shared equally through the operating system. In contrast, RTOS operates by compiling applications and tasks directly into the system with minimal separation between memory spaces and hardware. Since the primary goal of an IoT device is typically to serve a single application, possibly divided into multiple tasks, this lack of separation is not an issue. Additionally, because the application is compiled into the RTOS, it is ready to run after a very short boot and initialization process.


AI At The Edge Is Different From AI In The Data Center

In manufacturing, locally run AI models can rapidly interpret data from sensors and cameras to perform vital tasks. For example, automakers scan their assembly lines using computer vision to identify potential defects in a vehicle before it leaves the plant. In a use case like this, very low latency and always-on requirements make data movement throughout an extensive network impractical. Even small amounts of lag can impede quality assurance processes. On the other hand, low-power devices are ill-equipped to handle beefy AI workloads, such as training the models that computer vision systems rely on. Therefore, a holistic, edge-to-cloud approach combines the best of both worlds. Backend cloud instances provide the scalability and processing power for complex AI workloads, and front-end devices put data and analysis physically close together to minimize latency. For these reasons, cloud solutions, including those from Amazon, Google, and Microsoft, play a vital role. Flexible and performant instances with purpose-built CPUs, like the Intel Xeon processor family with built-in AI acceleration features, can tackle the heavy lifting for tasks like model creation.
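The edge-to-cloud split described above can be sketched with a toy example: a heavyweight "cloud" step fits an anomaly threshold from historical sensor data (standing in for model training), and a lightweight "edge" check runs per reading with no network round-trip. The numbers and function names are illustrative only:

```python
import statistics

# "Cloud" side: the heavyweight step. Fit a simple anomaly threshold
# from historical sensor readings (a stand-in for model training).
def train_threshold(history: list[float], k: float = 3.0) -> float:
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    return mean + k * std

# "Edge" side: a cheap, low-latency check applied per reading, with no
# round-trip to the data center.
def is_defect(reading: float, threshold: float) -> bool:
    return reading > threshold

history = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02]
threshold = train_threshold(history)          # computed once, "in the cloud"
print(is_defect(1.01, threshold), is_defect(5.0, threshold))
```

The design point is the division of labor: the expensive computation happens once on scalable backend hardware, and only its small result (here, a single number) ships to the device that needs sub-millisecond decisions.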


Ask a Data Ethicist: What Happens When Language Becomes Data?

Natural language processing involves turning language into formats a machine can understand (numbers) before turning it back into our desired human output (text, code, etc.). One of the first steps in the process of “datafying” language is to break it down into tokens. A token is typically a single word, at least in English – more on that in a minute. ... Tokens are important because they not only drive the performance of the model but also drive training costs. AI companies charge developers by the token. English tends to be the most token-efficient language, making it economically advantageous to train on English-language “data” versus, say, Burmese. This blog post by data scientist Yennie Jun goes into further detail about how the process works in a very accessible way, and this tool she built allows you to select different languages along with different tokenizers to see exactly how many tokens are needed for each of the languages selected. The NLP training techniques used in LLMs privilege the English language when they turn it into data for training, and penalize other languages, particularly low-resource languages.
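A rough stdlib illustration of why some languages cost more to "datafy": byte-level tokenizers pay per UTF-8 byte, and non-Latin scripts such as Burmese need several bytes per character. Real subword tokenizers like BPE compound this, because their vocabularies are trained mostly on English text:

```python
# Compare character vs. UTF-8 byte counts for the same greeting.
# English letters are 1 byte each; Myanmar-script characters are 3.
samples = {
    "English": "Hello, how are you?",
    "Burmese": "မင်္ဂလာပါ၊ နေကောင်းလား။",
}
for lang, text in samples.items():
    print(f"{lang}: {len(text)} chars, {len(text.encode('utf-8'))} bytes")
```

The gap widens further with subword vocabularies: an English word is often a single token, while a Burmese word may be split into many rare byte sequences, so the same sentence bills for several times as many tokens.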


AI-powered XDR: The Answer to Network Outages and Security Threats

To overcome the limitations of standard XDR, organizations can choose XDR capabilities integrated within a SASE architecture. SASE consolidates all networking and security functions into a cohesive whole with single-pane-of-glass visibility. SASE-based, next-gen XDR can leverage SASE’s telemetry to inform an organization’s incident detection and response workflows. By leveraging native sensors, like NGFW, advanced threat prevention, SWG, and ZTNA (zero trust network access), that feed data into a unified data lake, SASE eliminates the need for data integration and normalization. It allows XDR to analyze raw data, which eliminates inaccuracies and gaps. ... AI and machine learning play a pivotal role in XDR capabilities. Advanced algorithms trained on vast amounts of data enable more accurate incident detection and correlation. However, only comprehensive, consistent, and high-quality data and events can train AI/ML algorithms to create quality XDR incidents and perform root-cause analysis. SASE converges petabytes of data from various native sensors into a single data lake for training advanced AI/ML models.
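The unified-data-lake idea can be sketched as follows; the schema and sensor names are hypothetical, but the point is that when every native sensor writes the same event shape, correlation becomes a plain query rather than a normalization project:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical unified event schema: every native sensor (NGFW, SWG,
# ZTNA, ...) writes the same shape into one data lake.
@dataclass
class Event:
    sensor: str      # e.g. "ngfw", "swg", "ztna"
    user: str
    action: str
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

lake: list[Event] = [
    Event("ngfw", "alice", "blocked-outbound"),
    Event("swg", "alice", "malicious-url"),
    Event("ztna", "bob", "login-ok"),
]

# Correlation across control points is a simple filter over one schema.
def events_for(user: str) -> list[Event]:
    return [e for e in lake if e.user == user]

print([e.sensor for e in events_for("alice")])  # ['ngfw', 'swg']
```

Contrast this with standalone XDR, where each product exports its own format and the vendor (or customer) must maintain brittle mapping code before any cross-sensor analytics can run.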


Is an AI Bubble Inevitable?

Forward-looking enterprise AI adopters are already hedging their bets by ensuring they have interpretable AI and traditional analytics on hand while they explore newer AI technologies with appropriate caution, Zoldi says. He notes that many financial services organizations have already pulled back from using GenAI, both internally and for customer-facing applications. "The fact that ChatGPT, for example, doesn't give the same answer twice is a big roadblock for banks, which operate on the principle of consistency." ... In the event of a market pullback, AI customers may revert to less sophisticated approaches instead of reevaluating their AI strategies, Amorim warns. "This could result in a setback for businesses that have invested heavily in AI, since they may be less inclined to explore its full potential or adapt to changing market dynamics." Just as the dot-com failure didn't permanently destroy the web, an AI industry collapse won't mark the end of AI. Zoldi believes there will eventually be a return to normal. "Companies that had a mature, responsible AI practice will come back to investing in continuing that journey," he notes.
 


Quote for the day:

"Without continual growth and progress, such words as improvement, achievement, and success have no meaning." -- Benjamin Franklin

Daily Tech Digest - September 21, 2021

Cybersecurity Priorities in 2021: How Can CISOs Re-Analyze and Shift Focus?

The level of sophistication of attacks has increased manifold in the past couple of years. Attackers are leveraging advanced technology to infiltrate company networks and gain access to mission-critical assets. Given this scenario, organizations too need to leverage futuristic technology such as next-gen WAF, intelligent automation, behavior analytics, deep learning, security analytics, and so on to prevent even the most complex and sophisticated attacks. Automation also enables organizations to gain speed and scalability in the broader IT environment with ramped-up attack activity. Security solutions like Indusface's AppTrana enable all this and more. ... Remote work is here to stay, and the concept of the network perimeter is blurring. For business continuity, organizations have to enable employees to access mission-critical assets wherever they are. Employees are probably accessing these resources from personal, shared devices and unsecured networks. CISOs need to think strategically and implement borderless security based on a zero-trust architecture.


Benefits of cloud computing: The pros and cons

Cloud computing management raises many information systems management issues, including ethical (security, availability, confidentiality, and privacy) issues, legal and jurisdictional issues, data lock-in, lack of standardized service level agreements (SLAs), customization and technological bottlenecks, and others. Sharing a cloud provider has some associated risks. The most common cloud security issues include unauthorized access through improper access controls and the misuse of employee credentials. According to industry surveys, unauthorized access and insecure APIs are tied for the No. 1 spot as the single biggest perceived security vulnerability in the cloud. Others include internet protocol vulnerabilities, data recovery vulnerability, metering and billing evasion, vendor security risks, compliance and legal risks, and availability risks. When you store files and data in someone else's server, you're trusting the provider with your crown jewels. Whether in a cloud or on a private server, data loss refers to the unwanted removal of sensitive information, either due to an information system error or theft by cybercriminals.


Progressing from a beginner to intermediate developer

In all your programming, you should aim to have a single source of truth for everything. This is the core idea behind DRY - Don't Repeat Yourself - programming. In order to not repeat yourself, you need to define everything only once. This plays out in different ways depending on the context. In CSS, you want to store all the values that appear time and time again in variables. Colors, fonts, max-widths, and even spacing such as padding or margins are all properties that tend to be consistent across an entire project. You can often define variables for a stylesheet based on the brand guidelines, if you have access to them. Otherwise, it's a good idea to go through the site designs and define your variables before starting. In JavaScript, every function you write should appear only once. If you need to reuse it in a different place, isolate it from the context you're working in by putting it into its own file. You'll often see a util folder in JavaScript file structures - generally this is where you'll find more generic functions used across the app. Variables can also be sources of truth.
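The article's examples are CSS and JavaScript, but the same single-source-of-truth idea can be sketched in Python (names here are hypothetical): shared values live in one constants module, and a generic helper is defined once and imported wherever it is needed:

```python
# Single source of truth: define shared values and helpers once, then
# import them everywhere (mirroring CSS variables and a JS util folder).
BRAND_COLOR = "#0057b8"   # change it here, and every consumer updates
MAX_WIDTH_PX = 1200

def slugify(title: str) -> str:
    """Generic helper defined once and reused across the app."""
    return "-".join(title.lower().split())

print(slugify("Progressing From Beginner To Intermediate"))
```

The payoff is the same in every language: when the brand color or the slug rules change, there is exactly one place to edit, and no risk of two copies drifting apart.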


SRE vs. DevOps: What are the Differences?

Site Reliability Engineering, or SRE, is a strategy that uses principles rooted in software engineering to make systems as reliable as possible. In this respect, SRE, which was made popular by Google starting in the mid-2000s, facilitates a shared mindset and shared tooling between software development and IT operations. Instead of writing software using one set of strategies and tools, then managing it using an entirely different set, SRE helps to integrate each practice together by orienting both around concepts rooted in software engineering. Meanwhile, DevOps is a philosophy that, at its core, encourages developers and IT operations teams to work closely together. The driving idea behind DevOps is that when developers have visibility into the problems IT operations teams experience in production, and IT operations teams have visibility into what developers are building as they push new application releases down the development pipeline, the end result is greater efficiency and fewer problems for everyone.


Distributed transaction patterns for microservices compared

The technical requirements for two-phase commit are that you need a distributed transaction manager such as Narayana and a reliable storage layer for the transaction logs. You also need DTP XA-compatible data sources with associated XA drivers that are capable of participating in distributed transactions, such as RDBMSs, message brokers, and caches. If you are lucky enough to have the right data sources but run in a dynamic environment, such as Kubernetes, you also need an operator-like mechanism to ensure there is only a single instance of the distributed transaction manager. The transaction manager must be highly available and must always have access to the transaction log. For implementation, you could explore a Snowdrop Recovery Controller that uses the Kubernetes StatefulSet pattern for singleton purposes and persistent volumes to store transaction logs. In this category, I also include specifications such as Web Services Atomic Transaction (WS-AtomicTransaction) for SOAP web services.
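The protocol itself can be sketched in a few lines of Python. This is a toy model only: real transaction managers such as Narayana also persist a transaction log for crash recovery and speak XA to the participants, both of which are omitted here:

```python
# Minimal two-phase commit sketch: the coordinator asks every
# participant to prepare (phase 1); only if all vote yes does it
# commit, otherwise everyone rolls back (phase 2).
class Participant:
    def __init__(self, name: str, can_commit: bool = True):
        self.name, self.can_commit, self.state = name, can_commit, "idle"

    def prepare(self) -> bool:
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self) -> None:
        self.state = "committed"

    def rollback(self) -> None:
        self.state = "rolled-back"

def two_phase_commit(participants: list[Participant]) -> str:
    if all(p.prepare() for p in participants):   # phase 1: voting
        for p in participants:
            p.commit()                           # phase 2: commit
        return "committed"
    for p in participants:
        p.rollback()                             # phase 2: abort
    return "rolled-back"

db = Participant("rdbms")
mq = Participant("broker", can_commit=False)     # simulate a "no" vote
print(two_phase_commit([db, mq]))  # rolled-back
```

The sketch also shows why the transaction log matters: if the coordinator crashes between the two phases, prepared participants are left in limbo, and only a durable record of the decision lets recovery finish the protocol correctly.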


5 observations about XDR

Today’s threat detection solutions use a combination of signatures, heuristics, and machine learning for anomaly detection. The problem is that they do this on a tactical basis by focusing on endpoints, networks, or cloud workloads alone. XDR solutions will include these tried-and-true detection methods, only in a more correlated way on layers of control points across hybrid IT. XDR will go further than existing solutions with new uses of artificial intelligence and machine learning (AI/ML). Think “nested algorithms” a la Russian dolls where there are layered algorithms to analyze aberrant behavior across endpoints, networks, clouds, and threat intelligence. Oh, and it kind of doesn’t matter which security telemetry sources XDR vendors use to build these nested algorithms, as long as they produce accurate high-fidelity alerts. This means that some vendors will anchor XDR to endpoint data, some to network data, some to logs, and so on. To be clear, this won’t be easy: Many vendors won’t have the engineering chops to pull this off, leading to some XDR solutions that produce a cacophony of false positive alerts.
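A toy sketch of the "nested algorithms" idea, with invented scores and thresholds: one simple detector per control point, and an outer layer that raises an alert only when multiple layers agree, which is what cuts down on false positives:

```python
# Hypothetical per-layer detectors, each scoring one control point.
def endpoint_score(ev: dict) -> float:
    return 0.9 if ev.get("new_binary") else 0.1

def network_score(ev: dict) -> float:
    return 0.8 if ev.get("rare_domain") else 0.2

def cloud_score(ev: dict) -> float:
    return 0.7 if ev.get("priv_escalation") else 0.1

# Outer "nesting" layer: alert only on cross-layer agreement.
def correlated_alert(ev: dict, threshold: float = 0.6) -> bool:
    scores = [endpoint_score(ev), network_score(ev), cloud_score(ev)]
    high = [s for s in scores if s >= threshold]
    return len(high) >= 2

event = {"new_binary": True, "rare_domain": True, "priv_escalation": False}
print(correlated_alert(event))  # True
```

A single noisy detector firing on its own stays below the alert bar, which is the behavioral difference between this layered approach and the tactical, single-control-point tools the article criticizes.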


Why quantum computing is a security threat and how to defend against it

First, public key cryptography was not designed for a hyper-connected world; it wasn't designed for an Internet of Things, and it's unsuitable for the nature of the world that we're building. The need to constantly refer to certification providers for authentication or verification is fundamentally unsuitable. And of course the mathematical primitives at the heart of it are definitely compromised by quantum attacks, so you have a system which is crumbling and is certainly dead in a few years' time. A lot of the attacks we've seen result from certifications being compromised, certificates expiring, and certificates being stolen and abused. But with the sort of computational power available from a quantum computer, blockchain is also at risk. If you make a signature bigger to guard against it being cracked, the block size becomes huge and the whole blockchain grinds to a halt. ... Think of the data centers as buckets: three times a day the satellites throw some random numbers into the buckets, and all data centers end up with an identical bucket full of identical sets of random information.


Government data management for the digital age

Despite the complexity and lengthy time horizon of a holistic effort to modernize the data landscape, governments can establish and sustain a focus on rapid, tangible impact. A failure to deliver results from the outset can undermine stakeholder support. In addition, implementing use cases early on helps governments identify gaps in their data landscapes (for example, useful information that is not stored in any register) and missing functionalities in the central data-exchange infrastructure. To deliver impact quickly, governments may deploy “data labs”—agile implementation units with cross-functional expertise that focus on specific use cases. Solutions are rapidly developed, tested, iterated and, once successful, rolled out at scale. The German government is pursuing this approach in its effort to modernize key registers and capture more value. ... Organizations such as Estonia’s Information System Authority or Singapore’s Government Data Office have played a critical role in transforming the data landscape of their respective countries. 


Abductive inference: The blind spot of artificial intelligence

AI researchers base their systems on two types of inference machines: deductive and inductive. Deductive inference uses prior knowledge to reason about the world. This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI. Engineers create symbolic systems by endowing them with a predefined set of rules and facts, and the AI uses this knowledge to reason about the data it receives. Inductive inference, which has gained more traction among AI researchers and tech companies in the past decade, is the acquisition of knowledge through experience. Machine learning algorithms are inductive inference engines. An ML model trained on relevant examples will find patterns that map inputs to outputs. In recent years, AI researchers have used machine learning, big data, and advanced processors to train models on tasks that were beyond the capacity of symbolic systems. A third type of reasoning, abductive inference, was first introduced by American scientist Charles Sanders Peirce in the 19th century. 
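The contrast between the first two inference types can be made concrete with a toy Python example (illustrative only): deduction applies a rule given up front, while induction recovers the rule from input/output examples:

```python
# Deduction (symbolic AI): the rule y = 2x + 1 is programmed in.
def deduce(x: int) -> int:
    return 2 * x + 1

# Induction (machine learning, in miniature): recover a linear rule
# y = a*x + b from two labeled examples, then apply it to new inputs.
def induce(examples: list[tuple[int, int]]):
    (x1, y1), (x2, y2) = examples[:2]
    a = (y2 - y1) // (x2 - x1)
    b = y1 - a * x1
    return lambda x: a * x + b

learned = induce([(1, 3), (4, 9)])
print(deduce(5), learned(5))  # 11 11
```

Abduction, the third type, has no tidy analogue here: it is the leap from an observation to the *best explanation* for it, which is precisely the step neither the hand-written rule nor the fitted one performs.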


Software Engineering is a Loser’s Game

Nothing is more frustrating as a code reviewer than reviewing code from someone who clearly didn’t do these checks themselves. It wastes the reviewer’s time to catch simple mistakes like commented-out code, bad formatting, failing unit tests, or broken functionality in the code. All of these mistakes can easily be caught by the code author or by a CI pipeline. When merge requests are frequently full of errors, it turns the code review process into a gatekeeping process in which a handful of more senior engineers serve as the gatekeepers. This is an unfavorable scenario that creates bottlenecks and slows down the team’s velocity. It also detracts from the higher purpose of code reviews, which is knowledge sharing. We can use checklists and merge request templates to serve as reminders to ourselves of things to double-check. Have you reviewed your own code? Have you written unit tests? Have you updated any documentation as needed? For frontend code, have you validated your changes in each browser your company supports?
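Such a checklist can even be encoded as data, so it can be rendered into a merge request template or checked before requesting review. The items below are a hypothetical sketch following the questions above:

```python
# A self-review checklist encoded as data, rendered into the familiar
# markdown checkbox format used in merge request templates.
CHECKLIST = [
    "Reviewed my own diff",
    "Unit tests written and passing",
    "Documentation updated as needed",
    "Validated in every supported browser (frontend changes)",
]

def render_template(done: set[str]) -> str:
    """Render the checklist, marking completed items with [x]."""
    lines = [f"- [{'x' if item in done else ' '}] {item}" for item in CHECKLIST]
    return "\n".join(lines)

print(render_template({"Reviewed my own diff"}))
```

Pasting the rendered output into the MR description makes the self-review visible to the reviewer, and an unchecked box is a polite, automatic signal that the request isn't ready for their time yet.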



Quote for the day:

"Effective leadership is not about making speeches or being liked; leadership is defined by results not attributes." -- Peter Drucker