
Daily Tech Digest - January 26, 2024

Why a Chief Cyber Resilience Officer is Essential in 2024

“We'll see the role popping up more and more as an operational outcome within security programs and more of a focus in business. In the wake of the pandemic and macroeconomic conditions and everything, what business leader isn’t thinking about business resilience? So, cyber resilience tucks nicely into that.” On the surface, the standalone CISO role isn’t much different, because it serves as the linchpin for securing the enterprise. There are many different flavors of CISO, says Hopkins: some are business-focused, with teams that take on more compliance tasks as opposed to technical security operations; others are more technical, monitoring threats in the environment and responding accordingly, while compliance sits in a separate function. The stark difference between the two roles lies in the mindset, approach, and target outcome. The CCRO’s mindset is “it’s not a matter of if, but when.” So the CCRO’s approach is to anticipate cyber incidents and make incident-response preparations that will mitigate material damage to the business. They act as a lifeline. This approach is arguably the role’s most quintessential attribute.


How To Sell Enterprise Architecture To The Business

The best way to win buy-in for your enterprise architecture (EA) practice is to know who your stakeholders are and which of them will be the most receptive to your ideas. EA has a broad scope that impacts your entire business strategy beyond just your application portfolio, so you need to adapt your presentations to your audience. Defining the specific parts of your EA practice that matter to each stakeholder will keep your discussion relevant and impactful. Put your processes in the context of the stakeholder's business area and show the immediate value you will create and the structure that you have in place to do so. You can even offer to help install EA processes into other teams' workflows to help improve synergy with their toolsets. Just ensure that you highlight the benefits for them. Explaining to your marketing team how you plan to optimize your organization's finance software is not going to engage them. However, showcasing the information you have on your content management systems and MQL trackers will catch their interest. Once a group of key stakeholders is on board with your EA practice, you will have a group of EA evangelists and a selection of case studies that you can use to win over more and more stakeholders.


Quantum Breakthrough: Unveiling the Mysteries of Electron Tunneling

Tunneling is a fundamental process in quantum mechanics, involving the ability of a wave packet to cross an energy barrier that would be impossible to overcome by classical means. At the atomic level, this tunneling phenomenon significantly influences molecular biology: it speeds up enzyme reactions, causes spontaneous DNA mutations, and initiates the sequences of events that lead to the sense of smell. Photoelectron tunneling is a key process in light-induced chemical reactions, charge and energy transfer, and radiation emission. As optoelectronic chips and other devices approach the sub-nanometer atomic scale, quantum tunneling effects between different channels become significantly enhanced. ... This work reveals the critical role of neighboring atoms in electron tunneling in sub-nanometer complex systems. The discovery provides a new way to understand the key role of the Coulomb effect under the potential barrier in electron tunneling dynamics and solid-state high-harmonic generation, and lays a solid foundation for probing and controlling the tunneling dynamics of complex biomolecules.
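
For readers who want the underlying math: the probability of tunneling through a simple rectangular barrier falls off exponentially with barrier width, a standard textbook result (not taken from the article itself):

```latex
T \;\approx\; 16\,\frac{E}{V_0}\left(1-\frac{E}{V_0}\right) e^{-2\kappa L},
\qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}
```

where E is the particle's energy, V_0 the barrier height, L the barrier width, and m the particle mass. The e^{-2κL} factor is why shrinking devices toward sub-nanometer scales makes tunneling between channels dramatically stronger: halving L can raise T by orders of magnitude.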


UK Intelligence Fears AI Will Fuel Ransomware, Exacerbate Cybercrime

“AI will primarily offer threat actors capability uplift in social engineering,” the NCSC said. “Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing. This will highly likely increase over the next two years as models evolve and uptake increases.” The other worry deals with hackers using today’s AI models to quickly sift through the gigabytes or even terabytes of data they loot from a target. For a human it could take weeks to analyze the information, but an AI model could be programmed to quickly pluck out important details within minutes to help hackers launch new attacks or schemes against victims. ... Despite the potential risks, the NCSC's report did find one positive: “The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design.” So it’s possible the cybersecurity industry could develop AI smart enough to counter next-generation attacks. But time will tell. Meanwhile, other cybersecurity firms including Kaspersky say they've also spotted cybercriminals "exploring" using AI programs.


Machine learning for Java developers: Algorithms for machine learning

In supervised learning, a machine learning algorithm is trained to correctly respond to questions related to feature vectors. To train an algorithm, the machine is fed a set of feature vectors and an associated label. Labels are typically provided by a human annotator and represent the right answer to a given question. The learning algorithm analyzes feature vectors and their correct labels to find internal structures and relationships between them. Thus, the machine learns to correctly respond to queries. ... In unsupervised learning, the algorithm is programmed to predict answers without human labeling, or even questions. Rather than predetermine labels or what the results should be, unsupervised learning harnesses massive data sets and processing power to discover previously unknown correlations. In consumer product marketing, for instance, unsupervised learning could be used to identify hidden relationships or consumer grouping, eventually leading to new or improved marketing strategies. ... The challenge of machine learning is to define a target function that will work as accurately as possible for unknown, unseen data instances. 
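
The supervised workflow above — feature vectors in, labels out — can be sketched with a toy nearest-neighbor learner. All data and names here are invented for illustration; real systems would use a proper ML library:

```java
import java.util.List;

// Toy supervised learner: 1-nearest-neighbor over labeled feature vectors.
// "Training" is just storing examples; prediction returns the label of the
// closest stored example, mirroring the feature-vector/label pairing above.
public class NearestNeighbor {
    public record Example(double[] features, String label) {}

    // Predict the label of the closest training example (squared Euclidean distance).
    public static String predict(List<Example> training, double[] query) {
        Example best = null;
        double bestDist = Double.MAX_VALUE;
        for (Example ex : training) {
            double d = 0;
            for (int i = 0; i < query.length; i++) {
                double diff = ex.features()[i] - query[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = ex; }
        }
        return best.label();
    }

    public static void main(String[] args) {
        List<Example> training = List.of(
            new Example(new double[]{1.0, 1.0}, "spam"),
            new Example(new double[]{9.0, 9.0}, "ham"));
        // Query (1.2, 0.8) is closest to (1.0, 1.0), so this prints "spam".
        System.out.println(predict(training, new double[]{1.2, 0.8}));
    }
}
```

The "target function" mentioned at the end of the excerpt is exactly what this learner approximates: a mapping from unseen feature vectors to labels that generalizes beyond the training set.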


How to protect your data privacy: A digital media expert provides steps you can take and explains why you can’t go it alone

The dangers you face online take very different forms, and they require different kinds of responses. The kind of threat you hear about most in the news is the straightforwardly criminal sort: hackers and scammers. The perpetrators typically want to steal victims’ identities or money, or both. These attacks take advantage of varying legal and cultural norms around the world. Businesses and governments often offer to defend people from these kinds of threats, without mentioning that they can pose threats of their own. A second kind of threat comes from businesses that lurk in the cracks of the online economy. Lax protections allow them to scoop up vast quantities of data about people and sell it to abusive advertisers, police forces and others willing to pay. Private data brokers most people have never heard of gather data from apps, transactions and more, and they sell what they learn about you without needing your approval. A third kind of threat comes from established institutions themselves, such as the large tech companies and government agencies. These institutions promise a kind of safety if people trust them – protection from everyone but themselves, as they liberally collect your data.


Pwn2Own 2024: Tesla Hacks, Dozens of Zero-Days in Electric Vehicles

"The attack surface of the car is growing, and it's getting more and more interesting, because manufacturers are adding wireless connectivity and applications that allow you to access the car remotely over the Internet," Feil says. Ken Tindell, chief technology officer of Canis Automotive Labs, seconds the point. "What is really interesting is how so much reuse of mainstream computing in cars brings along all the security problems of mainstream computing into cars." "Cars have had this two worlds thing for at least 20 years," he explains. First, "you've got mainstream computing (done not very well) in the infotainment system. We've had this in cars for a while, and it's been the source of a huge number of vulnerabilities — in Bluetooth, Wi-Fi, and so on. And then you've got the control electronics, and the two are very separate domains. Of course, you get problems when that infotainment then starts to touch the CAN bus that's talking to the brakes, headlights, and stuff like that." It's a conundrum that should be familiar to OT practitioners: managing IT equipment alongside safety-critical machinery, in such a way that the two can work together without spreading the former's nuisances to the latter.


Does AI give InfiniBand a moment to shine? Or will Ethernet hold the line?

Ethernet’s strengths include its openness and its ability to do a more than decent job for most workloads, a factor appreciated by cloud providers and hyperscalers who either don't want to manage a dual-stack network or become dependent on the small pool of InfiniBand vendors. Nvidia's SpectrumX portfolio uses a combination of Nvidia's 51.2 Tb/s Spectrum-4 Ethernet switches and BlueField-3 SuperNICs to provide InfiniBand-like network performance, reliability, and latencies using 400 Gb/s RDMA over converged Ethernet (RoCE). Broadcom has made similar claims across its Tomahawk and Jericho switch lines, which either use data processing units to manage congestion or handle it in the top-of-rack switch with its Jericho3-AI platform, announced last year. To Broadcom's point, hyperscalers and cloud providers such as AWS have done just that, Boujelbene said. The analyst noted that what Nvidia has done with SpectrumX is compress this work into a platform that makes it easier to achieve low-loss Ethernet. And while Microsoft has favored InfiniBand for its AI cloud infrastructure, AWS is taking advantage of improving congestion management techniques in its own Elastic Fabric Adapter 2 (EFA2) network.


The Evolution & Outlook of the Chief Information Security Officer

Beyond mere implementation, the CISO also carries the mantle of education, nurturing a cybersecurity-conscious environment by making every employee cognizant of potential cyber threats and effective preventive measures. As the digital landscape shifts beneath our feet, the roles and responsibilities of the CISO have significantly evolved, casting a larger shadow over the organization’s operations and extending far beyond the traditional confines of IT risk management. No longer confined to the realms of technology alone, the CISO has become an integral component of the broader business matrix. They stand at the intersection of business and technology, needing to balance the demands of both spheres in order to effectively steer the organization towards a secure digital future. ... The increasingly digitalized and interconnected world of today has thrust the role of the Chief Information Security Officer (CISO) into the limelight. Their duties have become crucial as organizations navigate a complex and ever-evolving cybersecurity landscape. Customer data protection, adherence to intricate regulations, and ensuring seamless business operations in the face of potential cyber threats are top priorities that necessitate the presence of a CISO.


To Address Security Data Challenges, Decouple Your Data

Why is this a good thing? It can ultimately help you gain a holistic perspective of all the security tools you have in your organization to ensure you’re leveraging the intrinsic value of each one. Most organizations have dozens of security tools, if not more, but most lack a solid understanding or mapping of what data should go into the SIEM solution, what should come out, and what data is used for security analytics, compliance, or reporting. As data becomes more complex, extracting value and aggregating insights become more difficult. When you decide to decouple the data from the SIEM system, you have an opportunity to evaluate your data. As you move towards an integrated data layer where disparate data is consolidated, you can clean, deduplicate, and enrich it. Then you have the chance to merge that data not only with other security data but with enterprise IT and business data, too. Decoupling the data into a layer where disparate data is woven together and normalized for multidomain data use cases allows your organization to easily take HR data, organizational data, and business logic and transform it all into ready-to-use business data where security is a use case. 
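
The clean/deduplicate/normalize step described above can be sketched in a few lines. The field names and canonical form here are invented for illustration, not taken from any SIEM product:

```java
import java.util.*;

// Sketch of a decoupled data layer: normalize events arriving from
// disparate sources (which name fields differently) into one schema,
// then deduplicate. First-seen order is preserved via LinkedHashSet.
public class DataLayer {
    // Normalize a raw event map into a canonical "user|action|ts" key.
    public static String normalize(Map<String, String> raw) {
        String user = raw.getOrDefault("user", raw.getOrDefault("username", "?")).toLowerCase();
        String action = raw.getOrDefault("action", raw.getOrDefault("event", "?")).toLowerCase();
        String ts = raw.getOrDefault("ts", raw.getOrDefault("timestamp", "?"));
        return user + "|" + action + "|" + ts;
    }

    // Consolidate: normalize every event, then drop exact duplicates.
    public static List<String> consolidate(List<Map<String, String>> events) {
        return new ArrayList<>(new LinkedHashSet<>(
            events.stream().map(DataLayer::normalize).toList()));
    }
}
```

Once events from every tool share one canonical shape, joining them against HR or business data becomes an ordinary key lookup rather than a per-tool integration.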



Quote for the day:

“If my mind can conceive it, my heart can believe it, I know I can achieve it!” -- Jesse Jackson

Daily Tech Digest - September 17, 2020

Outbound Email Errors Cause 93% Increase in Breaches

Egress CEO Tony Pepper said the problem is only going to get worse with increased remote working and higher email volumes, which create prime conditions for outbound email data breaches of a type that traditional DLP tools simply cannot handle. “Instead, organizations need intelligent technologies, like machine learning, to create a contextual understanding of individual users that spots errors such as wrong recipients, incorrect file attachments or responses to phishing emails, and alerts the user before they make a mistake,” he said. The most common breach types were replying to spear-phishing emails (80%), emails sent to the wrong recipients (80%) and sending the incorrect file attachment (80%). Speaking to Infosecurity, Egress VP of corporate marketing Dan Hoy said businesses reported an increase in outbound emails since lockdown, “and more emails mean more risk.” He called it a numbers game: risk increases because remote workers, removed from security and IT teams, are more susceptible and more likely to make mistakes. According to the research, 76% of breaches were caused by “intentional exfiltration.” Hoy confirmed this combines employees innocently trying to do their jobs without causing harm, such as by sending files to webmail accounts, with genuinely malicious activity; both increase risk, “and you cannot ignore the malicious intent.”
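
The "contextual understanding" Pepper describes can be illustrated with the simplest possible check: warn when a recipient has never appeared in the sender's prior correspondence. This is only the core idea; real products weigh many ML-derived signals:

```java
import java.util.Set;

// Sketch of a contextual outbound-email check: flag a recipient address
// absent from the sender's historical contact set, catching typos like
// "b0b@corp.com" before the message leaves. Addresses are illustrative.
public class RecipientCheck {
    public static boolean looksRisky(Set<String> knownContacts, String recipient) {
        return !knownContacts.contains(recipient.toLowerCase());
    }
}
```

A DLP rule scanning message bodies could never catch this class of error; only the sender's own history makes the typo visible.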


‘The demand for cloud computing & cybersecurity professionals is on the rise’

The COVID-19 pandemic undoubtedly has disrupted the normalcy of every company across every sector. At Clumio, our primary focus continues to be the health and well-being of our people. While tackling the situation, we also need to keep pace with our professional duties. We made the transition to remote work immediately and are in constant touch with our employees to ensure they don’t feel isolated and remain focused on their work. We are encouraging employees to follow the best practices of remote work and motivating them to spend time on their emotional, mental and physical wellbeing during this time. We conduct Zoom happy hours frequently to stay connected and have fun. As part of the session, we also celebrated a virtual baby shower for one of our colleagues recently. We had our annual summer picnic and created wonderful memories while maintaining social distance, but staying together. During this time, we have also launched the India Research and Development center in Bangalore. Our India Center will drive front-end innovation and research to build cloud solutions. India has a huge talent pool in technology, and it is only growing. We have also started virtual hiring and onboarding during the pandemic.


AI investment to increase but challenges remain around delivering ROI

ROI on AI is still a work in progress that requires a focus on strategic change. As companies progress in AI use, they often shift their focus from automating internal employee and customer processes to delivering on strategic goals. For example, 31% of AI leaders report increased revenue, 22% greater market share, 22% new products and services, 21% faster time-to-market, 21% global expansion, 19% creation of new business models, and 14% higher shareholder value. In fact, the AI-enabled functions showing the highest returns are all fundamental to rethinking business strategies for a digital-first world: strategic planning, supply chain management, product development, and distribution and logistics. The study found that automakers are at the forefront of AI excellence, as they accelerate AI adoption to deliver on every part of their business strategy, from upgrading production processes and improving safety features to developing self-driving cars. Of the 12 industries benchmarked in the study, automotive employs the largest AI teams. With the government actively supporting AI under its Society 5.0 program, Japanese companies lead the pack in AI adoption. 


The future of .NET Standard

.NET 5 and all future versions will always support .NET Standard 2.1 and earlier. The only reason to retarget from .NET Standard to .NET 5 is to gain access to more runtime features, language features, or APIs. So, you can think of .NET 5 as .NET Standard vNext. What about new code? Should you still start with .NET Standard 2.0 or should you go straight to .NET 5? It depends. App components: If you’re using libraries to break down your application into several components, my recommendation is to use netX.Y where X.Y is the lowest number of .NET that your application (or applications) are targeting. For simplicity, you probably want all projects that make up your application to be on the same version of .NET because it means you can assume the same BCL features everywhere. Reusable libraries: If you’re building reusable libraries that you plan on shipping on NuGet, you’ll want to consider the trade-off between reach and available feature set. .NET Standard 2.0 is the highest version of .NET Standard that is supported by .NET Framework, so it will give you the most reach, while also giving you a fairly large feature set to work with. We’d generally recommend against targeting .NET Standard 1.x as it’s not worth the hassle anymore. 
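
The targeting advice above boils down to a single property in the project file. A minimal sketch (the version numbers are illustrative, matching the article's examples):

```xml
<!-- App component: target the lowest .NET version your applications use -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>
</Project>

<!-- Reusable NuGet library: .NET Standard 2.0 for maximum reach,
     including .NET Framework consumers -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

Switching a library from `netstandard2.0` to `net5.0` is therefore a one-line change, but it silently drops every .NET Framework consumer, which is the reach/feature-set trade-off the author describes.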


Fintech sector faces "existential crisis" says McKinsey

After growing more than 25% a year since 2014, investment into the sector dropped by 11% globally and 30% in Europe in the first half of 2020, says McKinsey, citing figures from Dealroom. In July 2020, after months of Covid-19-related lockdowns in most European countries, the drop was even steeper, 18% globally and 44% in Europe, versus the previous year. "This constitutes a significant challenge for fintechs, many of which are still not profitable and have a continuous need for capital as they complete their innovation cycle: attracting new customers, refining propositions and ultimately monetizing their scale to turn a profit," states the McKinsey paper. "The Covid-19 crisis has in effect shortened the runway for many fintechs, posing an existential threat to the sector." Analyzing fundraising data for the last three years from Dealroom, the consultancy found that as much as €5.7 billion will be needed to sustain the EU fintech sector through the second half of 2021 — a point at which some sort of economic normalcy might begin to emerge. It is not clear where these funds will come from, however. Fintechs are largely unable to access loan bailout schemes due to their pre-profit status.


Artificial Intuition: A New Generation of AI

Artificial intuition is an easy term to misread because it sounds like artificial emotion and artificial empathy. Nonetheless, it differs fundamentally. Researchers are working on artificial emotions so machines can mimic human behavior more accurately. Artificial empathy aims to discern a human’s state of mind in real time, so that, for instance, chatbots, virtual assistants and care robots can respond to people more appropriately in context. Artificial intuition is closer to human instinct, since it can rapidly assess the entirety of a situation, including extremely subtle markers of specific activity. The fourth generation of AI is artificial intuition, which enables computers to discover threats and opportunities without being told what to search for, just as human instinct lets us make decisions without being explicitly told how to do so. It’s like a seasoned detective who can enter a crime scene and know immediately that something doesn’t seem right, or an experienced investor who can spot a coming trend before anyone else.


Attacked by ransomware? Five steps to recovery

Arguably the most challenging step for recovering from a ransomware attack is the initial awareness that something is wrong. It’s also one of the most crucial. The sooner you can detect the ransomware attack, the less data may be affected. This directly impacts how much time it will take to recover your environment. Ransomware is designed to be very hard to detect. When you see the ransom note, it may have already inflicted damage across the entire environment. Having a cybersecurity solution that can identify unusual behavior, such as abnormal file sharing, can help quickly isolate a ransomware infection and stop it before it spreads further. Abnormal file behavior detection is one of the most effective means of detecting a ransomware attack and produces the fewest false positives when compared to signature-based or network-traffic-based detection. One additional method to detect a ransomware attack is to use a “signature-based” approach. The issue with this method is that it requires the ransomware to be known: if the code is available, software can be trained to look for it. This is not recommended, however, because sophisticated attacks use new, previously unknown forms of ransomware.
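
Abnormal-file-behavior detection can be boiled down to a rate check: ransomware encrypting a disk touches files orders of magnitude faster than a person does. The 10x multiplier below is invented for illustration; real products model per-user behavior statistically rather than with a fixed cutoff:

```java
import java.util.List;

// Minimal sketch of abnormal-file-behavior detection: flag a burst of
// file modifications far above the historical baseline rate.
public class RansomwareHeuristic {
    // eventsPerMinute: observed file-change counts for recent minutes.
    // baseline: the user's typical changes-per-minute rate.
    public static boolean abnormal(List<Integer> eventsPerMinute, double baseline) {
        // Flag if the latest minute exceeds 10x the historical baseline.
        int latest = eventsPerMinute.get(eventsPerMinute.size() - 1);
        return latest > 10 * baseline;
    }
}
```

Unlike a signature check, this fires on never-before-seen ransomware families, because it keys on what the malware does rather than what its code looks like.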


Struggling to Secure Remote IT? 3 Lessons from the Office

To prepare for the arrival of CCPA, business leaders told us they spent an average of $81.9 million on compliance during the last 12 months. Yet despite making investments in hiring (93%), workforce training (89%), and purchasing new software or services to ensure compliance (95%), 40% still felt unprepared for the evolving regulatory landscape. Why? Because the root causes were not addressed. Perhaps their IT operations and security teams worked in silos, creating complexity and narrowing their visibility into their IT estates. Maybe their teams were completely unaware that other departments introduced their own software into the environment. Or more commonly, the organization used legacy tooling that wasn't plugged into the endpoint management or security systems of the IT teams. These are just some of the root causes that keep organizations in the dark and prone to exploits. While the transition to remote work was swift, it has presented businesses with an opportunity to face these issues head-on. As workforces continue to work remotely, CISOs and CIOs now have the chance to evaluate how they effectively manage risk in the long term, which includes running continuous risk assessments and investing in solutions that deliver rapid incident response and improved decision-making.


CTO challenges around the return to the workplace

Every CTO tells us that the digital transformation and change management programmes designed to address the relentless regulatory, competitor, innovation and customer challenges must go ahead as planned, regardless of the pandemic. You may be tackling automating end-to-end electronic trading workflows or creating mobile framework applications. Whatever the focus, firms stumble over the limitations of legacy systems that hamper the journey towards electronification: trading desks still depend on quotes, orders and trades processed from a multitude of external trading platforms, and inconsistency, lag and gaps all result in costly errors, which are missed opportunities at best, and regulatory reporting breaches and huge fines at worst. In the quest for efficiencies, mitigation of risk, and a seamless, future-proofed IT architecture, firms must automate to meet their regulatory obligations and deliver client, management and regulatory transparency. And this hasn’t even touched on achieving the ambition to create end-to-end, freely flowing models of perfectly clean, ordered and well-governed data. Every CTO needs to apply extraction and visualisation layers, and mine the data for valuable insights that can be fed further upstream.


The Case for Explainable AI (XAI)

Despite the numerous benefits to developing XAI, many formidable challenges persist. A significant hurdle, particularly for those attempting to establish standards and regulations, is the fact that different users will require different levels of explainability in different contexts. Models that are deployed to effectuate decisions that directly impact human life, such as those in hospitals or military environments, will produce different needs and constraints than ones utilized in low-risk situations. There are also nuances within the performance-explainability trade-off. Infrastructure and systems designers are constantly balancing the demands of competing interests. ... There are also a number of risks associated with explainable AI. Systems that produce seemingly credible but actually incorrect results would be difficult to detect for most consumers. Trust in AI systems can enable deception by way of those very AI systems, especially when stakeholders provide features that purport to offer explainability where they actually do not. Engineers also worry that explainability could give rise to greater opportunities for exploitation by malicious actors. Simply put, if it is easier to understand how a model converts input into output, it is likely also easier to craft adversarial inputs that are designed to achieve specific outputs.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - August 16, 2020

When to use Java as a Data Scientist

When you are responsible for building an end-to-end data product, you are essentially building a data pipeline where data is fetched from a source, features are calculated based on the retrieved data, a model is applied to the resulting feature vector or tensor, and the model results are stored or streamed to another system. While Python is great for model training and there are tools for model serving, it only covers a subset of the steps in this pipeline. This is where Java really shines, because it is the language used to implement many of the most commonly used tools for building data pipelines, including Apache Hadoop, Apache Kafka, Apache Beam, and Apache Flink. If you are responsible for building the data retrieval and data aggregating portions of a data product, then Java provides a wide range of tools. Also, getting hands-on with Java means that you will build experience with the programming language used by many big data projects. My preferred tool for implementing these steps in a data workflow is Cloud Dataflow, which is based on Apache Beam. While many tools for data pipelines support multiple runtime languages, there may be significant performance differences between the Java and Python options.
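
The fetch → features → model → store shape described above can be sketched with plain `java.util.function` composition. Production pipelines would use Beam, Flink, or Kafka; the stage functions here are invented stand-ins:

```java
import java.util.List;
import java.util.function.Function;

// Sketch of a data-product pipeline: a raw record is turned into a
// feature vector, then a model maps features to a score. Each stage is
// a Function, so stages compose with andThen(), just as pipeline
// frameworks chain transforms.
public class MiniPipeline {
    public static double run(Function<String, List<Double>> featurize,
                             Function<List<Double>, Double> model,
                             String rawRecord) {
        return featurize.andThen(model).apply(rawRecord);
    }

    public static void main(String[] args) {
        // Feature step: token count; model step: a trivial scoring rule.
        double score = run(s -> List.of((double) s.split(" ").length),
                           f -> f.get(0) * 2.0,
                           "a b c");
        System.out.println(score); // 6.0
    }
}
```

Frameworks like Beam generalize exactly this composition: each stage becomes a distributed transform, but the type discipline — each stage's output feeding the next stage's input — is the same.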


Alert: Russian Hackers Deploying Linux Malware

Analysts have linked Drovorub to the Russian hackers working for the GRU, the alert states, noting that the command-and-control infrastructure associated with this campaign had previously been used by the Fancy Bear group. An IP address linked to a 2019 Fancy Bear campaign is also associated with the Drovorub malware activity, according to the report. The Drovorub toolkit has several components, including a toolset consisting of an implant module coupled with a kernel module rootkit, a file transfer and port forwarding tool as well as a command-and-control server. All this is designed to gain a foothold in the network to create the backdoor and exfiltrate data, according to the alert. "When deployed on a victim machine, the Drovorub implant (client) provides the capability for direct communications with actor-controlled [command-and-control] infrastructure; file download and upload capabilities; execution of arbitrary commands as 'root'; and port forwarding of network traffic to other hosts on the network," according to the alert. Steve Grobman, CTO at the security firm McAfee, notes that the rootkit associated with Drovorub can allow hackers to plant the malware within a system and avoid detection, making it a useful tool for cyberespionage or election interference.


How Community-Driven Analytics Promotes Data Literacy in Enterprises

Data is deeply integrated into the business processes of nearly every company precisely because it is helping us make better decisions, not because of its ability to hasten lofty goals such as digital transformation. The C-suite sees the advantages data insights provide, and as a result, non-technical employees are increasingly expected to be more technically adept at extraction and interpretation of data. Successful organizations foster a community of data-curious teams and empower them with a single platform that enables everyone, regardless of technical ability, to explore, analyze and share data. Furthermore, domain experts and business leaders must be able to generate their own content, build off of content created by others and promote high-value, trustworthy content, while also demoting old, inaccurate, or unused content. This should resemble an active peer review process where helpful content is promoted and bad content is flagged as such by the community, while simultaneously being managed and governed by the data team.


The Anatomy of a SaaS Attack: Catching and Investigating Threats with AI

SaaS solutions have been an entry point for cyber-attackers for some time – but little attention is given to how the tactics, techniques and procedures (TTPs) in SaaS attacks differ significantly from traditional TTPs seen in network and endpoint attacks. This raises a number of questions for security experts: how do you create meaningful detections in SaaS environments that don’t have endpoint or network data? How can you investigate threats in a SaaS environment? What does a ‘good’ SaaS environment look like as opposed to one that’s threatening? A global shortage in cyber skills already creates problems for finding security analysts able to work in traditional IT environments – hiring security experts with SaaS domain knowledge is all the more challenging. ... A more intricate and effective approach to SaaS security requires an understanding of the dynamic individual behind the account. SaaS applications are fundamentally platforms for humans to communicate – allowing them to exchange and store ideas and information. Abnormal, threatening behavior is therefore impossible to detect without a nuanced understanding of those unique individuals: where and when do they typically access a SaaS account, which files are they likely to access, and who do they typically connect with?
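
The "when do they typically access" signal can be illustrated with a frequency baseline over login hours. The 5% cutoff is invented for illustration; a real system would weigh many signals (location, device, peer group) per user:

```java
import java.util.List;

// Sketch of per-user behavioral baselining for SaaS access: a login
// hour counts as "unusual" if it appears rarely in this user's own
// access history, with no endpoint or network data required.
public class SaasBaseline {
    public static boolean unusual(List<Integer> historyHours, int loginHour) {
        long seen = historyHours.stream().filter(h -> h == loginHour).count();
        // Flag hours making up under 5% of the user's observed logins.
        return (double) seen / historyHours.size() < 0.05;
    }
}
```

The same account activity that looks normal for one user (a 3 a.m. login for a night-shift analyst) is anomalous for another, which is why the baseline must be per-individual rather than global.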


How to maximise your cloud computing investment

“At the core of the issue is that with a conventional, router-centric approach, access to applications residing in the cloud means traversing unnecessary hops through the HQ data centre, resulting in inefficient use of bandwidth, additional cost, added latency and potentially lower productivity,” said Pamplin. “To fully realise the potential of cloud, organisations must look to a business-driven networking model to achieve greater agility and substantial CAPEX and OPEX savings. “When it comes to cloud usage, a business-driven network model should also give clear application visibility through a single pane of glass, or else organisations will be in the dark regarding their application performance and, ultimately, their return on investment. “Only through utilisation of advanced networking solutions, where application policies are centrally defined based on business intent, and users are connected securely and directly to applications wherever they reside, can the benefits of the cloud be truly realised. “A business-driven approach eliminates the extra hops and risk of security compromises. This ensures optimal and cost-efficient cloud usage, as applications will be able to run smoothly while fully supported by the network. ..."


AI Needs To Learn Multi-Intent For Computers To Show Empathy

Wael ElRifai, VP for solution engineering at Hitachi Vantara, reminds us that teaching a chatbot multi-intent is a more manual process than we’d like to believe. At its core, he says, this means telling the software to search for keywords such as “end” or “and”, which act as connectors for independent clauses, breaking a multi-intent query down into multiple single-intent queries and then applying traditional techniques. “Deciphering intent is far more complex than just language interpretation. As humans, we know language is imbued with all kinds of nuances and contextual inferences. And actually, humans aren’t that great at expressing intent, either. Therein lies the real challenge for developers,” said ElRifai.  ... “In many cases, that’s what you need, but when we look more broadly at the kinds of problems that businesses face, across many different industries, the vast majority of problems actually don’t follow that ‘one thing well’ model all that well. Many of the things we’d like to automate are more like puzzles to be solved, where we need to take in lots of different kinds of data, reason about them and then test out potential solutions,” said IBM’s Cox.
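ElRifai's keyword-connector approach can be sketched in a few lines; the connector list below is an illustrative assumption, and a production system would need far richer clause detection:

```python
import re

# Hypothetical connector words that join independent clauses in a query.
CONNECTORS = r"\b(?:and|then|also)\b"

def split_intents(query: str):
    """Break a multi-intent query into candidate single-intent clauses."""
    clauses = re.split(CONNECTORS, query, flags=re.IGNORECASE)
    # Discard empty fragments and trim stray punctuation.
    return [c.strip(" ,.") for c in clauses if c.strip(" ,.")]

print(split_intents("Check my balance and transfer $50 to savings"))
# → ['Check my balance', 'transfer $50 to savings']
```

Each resulting clause can then be routed through a conventional single-intent classifier, which is the "traditional techniques" step ElRifai describes.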


Code Obfuscation: A Comprehensive Guide Towards Securing Your Code

Since code obfuscation brings about deep changes in the code structure, it may significantly change the performance of the application as well. In general, rename obfuscation hardly impacts performance, since only variables, methods, and classes are renamed. Control-flow obfuscation, on the other hand, does have an impact on code performance. Adding meaningless control loops to make the code hard to follow adds overhead to the existing codebase, which makes it a technique worth implementing, but with abundant caution. A rule of thumb in code obfuscation is that the more techniques applied to the original code, the more time deobfuscation will consume. Depending on the techniques and the context, the impact on code performance usually varies from 10 percent to 80 percent. Hence, potency and resilience, the factors discussed above, should become the guiding principles in code obfuscation, as any kind of obfuscation (except rename obfuscation) carries an opportunity cost. Most of the obfuscation techniques discussed above do impose a cost on code performance, and it is up to the development and security professionals to pick and choose the techniques best suited to their applications.
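As a toy illustration of rename obfuscation – the cheapest technique performance-wise – here is a sketch using Python's `ast` module (requires Python 3.9+ for `ast.unparse`); the name mapping is arbitrary and a real obfuscator would generate it automatically and handle scoping:

```python
import ast

class Renamer(ast.NodeTransformer):
    """Rewrite selected identifiers to meaningless names."""

    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Swap the identifier if it appears in the rename map;
        # names outside the map (e.g. builtins) are left untouched.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

src = "total = price * quantity\nprint(total)"
tree = Renamer({"total": "a1", "price": "a2", "quantity": "a3"}).visit(ast.parse(src))
print(ast.unparse(tree))
# → a1 = a2 * a3
#   print(a1)
```

The program's behavior is unchanged, which is why rename obfuscation has near-zero runtime cost: only the symbol table loses meaning, not the control flow.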


Designing a High-throughput, Real-time Network Traffic Analyzer

Run-to-completion is a design concept that aims to finish processing an element as soon as possible, avoiding infrastructure-related interference such as passing data over queues or obtaining and releasing locks. As a latency-sensitive data-plane component, the Behemoth (along with some supplementary components) is designed around that concept. This means that once a packet is diverted into the app, its entire processing is done in a single thread (worker), on a dedicated CPU core. Each worker is responsible for the entire mitigation flow – pulling the traffic from a NIC, matching it to a policy, analyzing it, enforcing the policy on it, and, assuming it’s a legitimate packet, returning it to the very same NIC. This design yields great performance and negligible latency, but has the obvious disadvantage of a somewhat messy architecture, since each worker is responsible for multiple tasks. Once we’d decided that AnalyticsRT would not be an integral “station” in the traffic data-plane, we gained the luxury of using a pipeline model, in which the real-time objects “travel” between different threads (in parallel), each one responsible for different tasks.
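The pipeline model described above can be sketched with a toy two-stage example, where items travel between stage threads over queues; the stage names and payloads are illustrative, and a real data-plane would use lock-free rings rather than Python queues:

```python
import queue
import threading

def stage(inbox, outbox, transform):
    """Run one pipeline stage: consume items, transform, pass downstream."""
    while True:
        item = inbox.get()
        if item is None:              # sentinel: propagate shutdown downstream
            if outbox is not None:
                outbox.put(None)
            break
        result = transform(item)
        if outbox is not None:
            outbox.put(result)

q1, q2 = queue.Queue(), queue.Queue()
results = []

# Stage 1 "analyzes" each object; stage 2 collects the output.
t1 = threading.Thread(target=stage, args=(q1, q2, lambda p: {**p, "analyzed": True}))
t2 = threading.Thread(target=stage, args=(q2, None, results.append))
t1.start(); t2.start()

for i in range(3):
    q1.put({"packet": i})
q1.put(None)                           # start the shutdown cascade
t1.join(); t2.join()
print(len(results))  # → 3
```

Compared with run-to-completion, each thread now has a single, clean responsibility – at the cost of queue hand-offs, which is acceptable precisely because AnalyticsRT sits off the latency-critical data-plane.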


RASP: A Must-Have to Protect Mobile Applications

The concept of RASP has proven very effective because it addresses application-layer attacks directly. It also supports custom triggers, so that critical components of the business are never compromised. The development team should still take a skeptical approach when implementing security solutions, so that their impact is never adverse. Well-implemented RASP solutions consume minimal resources, ensuring that overall goals are met with the least negative impact on the performance of the application. Convincing stakeholders used to be a significant hurdle for organizations, but RASP solutions have made it much easier by being mobile-friendly: they provide clear visibility into the applications while handling security threats in the background. The concept has proven to be a game-changer, helping companies satisfy their consumers on several fronts. To implement it, companies can draw on several approaches, including binary instrumentation, virtualization, and others.
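As a loose illustration of the runtime self-protection idea – not any vendor's API, and all function names here are hypothetical – an application can gate sensitive logic on signs that its runtime is being instrumented:

```python
import sys

def runtime_environment_suspicious() -> bool:
    """Crude runtime check: sys.gettrace() returns a non-None hook
    when a tracer or debugger is instrumenting the interpreter."""
    return sys.gettrace() is not None

def handle_payment(amount):
    # A RASP-style guard: refuse to run critical logic in an
    # untrusted runtime rather than rely solely on perimeter defenses.
    if runtime_environment_suspicious():
        raise RuntimeError("blocked: untrusted runtime environment")
    return f"processed {amount}"
```

Real RASP products go much further – binary instrumentation, integrity checks, tamper detection – but the pattern is the same: the protection lives inside the running application, invisible to the end user.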


Cyber Adversaries Are Exploiting the Global Pandemic at Enormous Scale

For cyber adversaries, developing exploits at scale and distributing them via legitimate and malicious hacking tools continues to take time. Even though 2020 looks to be on pace to shatter the record for published vulnerabilities in a single year, this year's vulnerabilities also show the lowest rate of exploitation ever recorded in the 20-year history of the CVE List. Interestingly, vulnerabilities from 2018 claim the highest exploitation prevalence (65%), yet more than a quarter of firms registered attempts to exploit CVEs dating back 15 years, to 2004. Exploit attempts against several consumer-grade routers and IoT devices topped the list of IPS detections. While some of these exploits target newer vulnerabilities, a surprising number targeted vulnerabilities first discovered in 2014 – an indication that criminals are looking for flaws that still exist in home networks to use as a springboard into the corporate network. In addition, Mirai (2016) and Gh0st (2009) dominated the most prevalent botnet detections, driven by an apparent growing interest among attackers in targeting older vulnerabilities in consumer IoT products.



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent