Daily Tech Digest - June 17, 2021

A Deep Dive Into Efinity: Next-Generation Blockchain for NFTs

Efinity will be a hub for all fungible and non-fungible tokens, meant to serve and benefit all participants in the digital asset space—collectors, creators, artists, decentralized app (dApp) developers, enterprises, sports teams, and more. The Enjin ecosystem is robust, with a wide range of projects and developers using our products to create, distribute, and integrate NFTs with their projects. Over 1.14 billion digital assets have already been created with Enjin, and all of these tokens can benefit from the cost efficiency, speed, and next-generation features of Efinity—and that's only the existing Enjin ecosystem. We believe Efinity will do for the wider NFT ecosystem what ERC-1155 did for Ethereum: make NFTs even more accessible to everyone. We expect end-users to create NFTs as easily and intuitively as they take a picture with a smartphone today; trade NFTs faster than they can purchase something from Amazon; and, most importantly, use those tokens in a myriad of futuristic ways. It's up to companies and developers across the world to give that next-gen utility to NFTs and truly unlock their power for the masses.


A Look at a Zero Trust Strategy for the Remote Workforce

If you are new to the security world, it is fair to ask yourself, "Isn't access to data and systems always conditional? Isn't it always granted to someone who has access to the credentials (ID and password)?" True enough, but in totality, the approach to managing access encompasses a broader spectrum of access-management policies. These policies mix different strategies that can be applied based on an organization's security vulnerabilities. Conditional access is one such security management practice that many companies have adopted. The shift to smart mobile devices and the cloud has made conditional access necessary, and it has become imperative now that remote working is here to stay. With several companies announcing permanent work-from-home policies, a zero-trust model of conditional access has become crucial. IT security teams must be prepared to both validate and verify devices and users with a set of automated policies. IT teams could once simply monitor incoming IP addresses as a first step in verifying credentials, but the growing use of VPNs in remote working environments is making that impossible, rendering organizations more vulnerable to threats.
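To make "a set of automated policies" concrete, here is a minimal sketch of a conditional-access check. Every field name and rule below is a hypothetical illustration, not any vendor's actual API; the point is that a zero-trust model evaluates the user and the device on every request rather than trusting a source IP or network location.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_mfa_verified: bool    # did the user pass multi-factor auth?
    device_managed: bool       # is the device enrolled in device management?
    device_patched: bool       # does it meet the patch-level baseline?
    resource_sensitivity: str  # "low" or "high"

def evaluate(request: AccessRequest) -> str:
    # Identity is always verified first; there is no implicit trust.
    if not request.user_mfa_verified:
        return "deny"
    # Unmanaged or unpatched devices never reach sensitive resources.
    if request.resource_sensitivity == "high" and not (
        request.device_managed and request.device_patched
    ):
        return "deny"
    # A managed but unpatched device gets restricted access to low-risk data.
    if not request.device_patched:
        return "allow-restricted"
    return "allow"

# An MFA-verified user on a managed but unpatched device requesting sensitive data:
print(evaluate(AccessRequest(True, True, False, "high")))  # -> deny
```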


Most firms face second ransomware attack after paying off first

The majority of businesses that choose to pay to regain access to their encrypted systems experience a subsequent ransomware attack, and almost half of those that pay up say some or all of the data they retrieved was corrupted. Some 80% of organisations that paid ransom demands experienced a second attack, and 46% of those believed the subsequent attack was caused by the same hackers. Among those that paid to regain access to their systems, 46% said at least some of their data was corrupted, according to a Cybereason survey released Wednesday. Conducted by Censuswide, the study polled 1,263 security professionals in seven markets worldwide, including 100 in Singapore, as well as respondents in Germany, France, the US, and the UK. Globally, 51% retrieved their encrypted systems without any data loss, while 3% said they did not regain access to any encrypted data. The report revealed that one organisation reportedly paid a ransom in the millions of dollars, only to be targeted with a second attack by the same attackers within a fortnight.


Top 10 Security Risks in Web Applications

Injection, or SQL injection, is a type of security attack in which a malicious attacker inserts or injects a query via input data (as simply as by filling in a form on the website) sent from the client side to the server. If it is successful, the attacker can read data from the database, add new data, update or delete existing data, issue administrator commands to carry out privileged database tasks, or, in some cases, even issue commands to the operating system. ... Broken authentication is the case where the authentication system of the web application fails, and it can result in a series of security threats. This is possible if the application permits users to choose weak passwords that are either dictionary words or common passwords like "12345678" or "password", enabling an adversary to carry out a brute-force attack and disguise itself as a user. This is so common because, shockingly, 59% of people reuse the same password on every website they use. Moreover, 90% of passwords can be cracked in close to 6 hours! It is therefore important to require users to set strong passwords with a combination of alphanumeric and special characters. Broken authentication can also arise through credential stuffing, URL rewriting, or failing to rotate session IDs.
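As a concrete illustration of the injection risk and its standard fix, here is a minimal sketch using Python's built-in sqlite3 module; the table and payload are invented for the example, and the principle, binding user input as a parameter instead of splicing it into the query string, applies to any SQL database and driver.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# UNSAFE: the payload rewrites the WHERE clause so it matches every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("string-spliced query matched:", rows)   # -> [('alice', 0)]

# SAFE: the input is bound as a parameter, compared as a literal string,
# and matches nothing; it can never be interpreted as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query matched:", rows)    # -> []
```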


A Google AI Designed a Computer Chip as Well as a Human Engineer—But Much Faster

Human designers thought “there was no way that this is going to be high quality. They almost didn’t want to evaluate them,” said Goldie. But the team pushed the project from theory to practice. In January, Google integrated some AI-designed elements into its next-generation AI processors. While specifics are being kept under wraps, the solutions were intriguing enough for millions of copies to be physically manufactured. The team plans to release its code for the broader community to further optimize—and understand—the machine’s brain for chip design. What seems like magic today could provide insights into even better floorplan designs, extending the gradually slowing (or dying) Moore’s Law to further bolster our computational hardware. Even tiny improvements in speed or power consumption in computing could make a massive difference. “We can…expect the semiconductor industry to redouble its interest in replicating the authors’ work, and to pursue a host of similar applications throughout the chip-design process,” said Kahng.

Jensen Huang On Metaverse, Proof Of Stake And Ethereum

For a long time now, proof of stake has been baffling people interested in crypto and its application in various platforms like Twitter and Project Bluesky. Jensen’s views on the matter have also been favourable towards the concept, which may replace proof of work in blockchains before long. He said that demand for Ethereum had reached such a level that it would be nice to have another method of confirming transactions. “Ethereum has established itself. It now has an opportunity to implement a second generation that carries on that platform approach and all of the services that are built on top of it,” he added. Jensen also explained that the reason behind the development of Nvidia’s CMP was the expectation that a lot of Ethereum coins would be mined; CMP has enough functionality that it can be used for crypto mining. ... Addressing the question of how long the chip shortage will last, Jensen said that demand has been growing consistently, and Nvidia in particular has had pent-up demand since it reset and reinvented computer graphics, a driving factor in skyrocketing demand.


Prioritizing and Microservices

Microservices frequently need to communicate with one another in order to accomplish their tasks. One obvious way for them to do so is via direct, synchronous calls using HTTP or gRPC. However, using such calls introduces dependencies between the two services involved and reduces the availability of the calling service (because when the destination service is unavailable, the calling service typically becomes unavailable as well). This relationship is described by the CAP theorem (and PACELC), which I've described previously. ... If any response is necessary, the processing service publishes an event, which the initiating service can subscribe to and consume. ... The issue with this approach is that the prioritization is only applied at the entrance to the system and is not enforced within it. This is exacerbated by the fact that the report orchestrator has no FIFO expectation and in fact can begin work on an arbitrary number of commands at the same time, potentially resulting in a very large amount of work in process (WIP). We can use Little's Law to understand how WIP impacts the time it takes for requests to move through a system, which can in turn impact high-priority SLAs. Constraining total WIP on the system, or at least on the orchestrator, would mitigate the issue.
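Little's Law states that the average time a request spends in the system is W = L / λ, where L is the average work in process and λ the throughput, so capping L directly bounds W. Here is a minimal sketch, not the article's actual design, of an orchestrator that enforces both a WIP cap and prioritization inside the system by draining a priority queue with a fixed pool of workers:

```python
import asyncio

MAX_WIP = 3  # hypothetical cap: WIP can never exceed the worker count

async def worker(queue: asyncio.PriorityQueue) -> None:
    while True:
        priority, command_id = await queue.get()
        print(f"priority {priority}: processing command {command_id}")
        await asyncio.sleep(0.1)  # stand-in for real report generation
        queue.task_done()

async def main() -> None:
    queue: asyncio.PriorityQueue = asyncio.PriorityQueue()
    # A burst of mixed-priority commands; lower number = higher priority.
    for command_id in range(10):
        await queue.put((command_id % 3, command_id))
    workers = [asyncio.create_task(worker(queue)) for _ in range(MAX_WIP)]
    await queue.join()  # wait until every queued command is processed
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)

asyncio.run(main())
```

Because high-priority commands are pulled first at every dequeue, not just at the entrance, they jump ahead of queued low-priority work, and the fixed pool keeps total WIP bounded regardless of arrival rate.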


Cloud Outage Fallout: Should You Brace for Future Disruption?

The outage also put other topics in focus that might not have received consistent attention in the past. Though DevOps is frequently talked about in enterprise development circles, Bates questions to what degree it is being implemented. “If we can truly get to a DevOps world, securing development and operations, it’s going to help a lot,” he says. “We talk very glibly about DevOps, but we don’t ask the really hard questions about if anyone is really doing this.” Taken in the context of sudden moves to the cloud in response to the pandemic, the Fastly outage was a relatively quick blip, says Drew Firment, senior vice president of transformation with cloud training platform A Cloud Guru. Still, the incident offers organizations a moment for reflection. “Folks are looking at their cloud architecture,” he says. “Architecture equals operations.” As organizations build in the cloud, decisions on cloud providers and services can have a dramatic effect on resiliency, Firment says. “That’s why cloud architects are in such demand, especially if they can take those things into consideration.”


Proactive and reactive: symbiotic sides of the same AI coin

Artificial Intelligence (AI) as a phrase is bandied about to refer to any number of technologies currently in use. And it’s not that this is wrong per se, but it’s like referring to rustic Italian cuisine and molecular gastronomy simply as “food”. The world would be a poorer place without either, but they serve entirely separate purposes for the palate. According to Gartner, “By 2025, proactive (outbound) customer engagement interactions will outnumber reactive (inbound) customer engagement interactions.” The distinction being made here is between AI as designed for the reactive realm (think chatbots) and AI applied to the proactive engagement use case. While the core technology underlying both may be similar, and both have specific use cases, proactive engagement is a more focused utilisation. If you have ever played the game ‘Twenty Questions’, you have an inkling of what a chatbot attempts to do: ask a series of questions of an individual in an effort to narrow down an answer. Except in the case of chatbots, you are usually playing the game with an irate customer in a negative frame of mind.
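The ‘Twenty Questions’ analogy maps directly onto code: a reactive bot walks a decision tree, narrowing the space of possible intents one answer at a time. A toy sketch, with invented questions and routing targets:

```python
# Each node is either a question with "yes"/"no" branches or a final route.
decision_tree = {
    "question": "Is this about billing?",
    "yes": {
        "question": "Is a charge missing or incorrect?",
        "yes": "route to: billing disputes",
        "no": "route to: invoices and receipts",
    },
    "no": {
        "question": "Is the product failing to start?",
        "yes": "route to: technical support",
        "no": "route to: general enquiries",
    },
}

def chat(node, answers):
    """Walk the tree using scripted customer answers; return the route."""
    if isinstance(node, str):      # reached a leaf: an intent to route to
        return node
    reply = answers.pop(0)         # "yes" or "no" from the customer
    print(f"bot: {node['question']}  customer: {reply}")
    return chat(node[reply], answers)

print(chat(decision_tree, ["yes", "no"]))  # -> route to: invoices and receipts
```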


Are your cryptographic keys truly safe? Root of Trust redefined for the cloud era

When you are working with cloud infrastructure, the hardware (and in many cases also the software) is not under your control. This is also true of cloud-based HSMs provided by cloud service providers (CSPs). You need look no further than the CLOUD Act to realize that your CSPs have immediate access to your keys and data. This is not theoretical access – this report published by Amazon details the law enforcement data requests with which Amazon complied over a six-month period in 2020. It’s not a big jump to imagine an insider at your CSP exploiting this ability to expose your keys. While CSPs make genuine efforts to secure their hardware under the Shared Responsibility Model, the nature of the beast is that using third-party infrastructure also leaves you vulnerable to supply chain attacks. Consider the attack on SolarWinds and imagine the repercussions of your CSP – and by extension you – falling victim to such a large-scale supply chain attack. It’s clear that the implementation of Root of Trust as a purely hardware solution deployed in a single location needs to move with the times.
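One direction this argument points is keeping key material entirely outside the CSP's reach, for example via client-side encryption: the provider stores only ciphertext it cannot decrypt or be compelled to hand over. A minimal sketch, assuming the third-party `cryptography` package is installed; the data and workflow are invented for illustration:

```python
from cryptography.fernet import Fernet

# The key is generated and kept locally; it is never sent to the CSP.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer record: account 42"
ciphertext = cipher.encrypt(plaintext)  # this is all the CSP ever stores

# Only a holder of the locally kept key can recover the data.
assert cipher.decrypt(ciphertext) == plaintext
print("CSP stores only:", ciphertext[:24], b"...")
```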



Quote for the day:

"No person can be a great leader unless he takes genuine joy in the successes of those under him. -- W. A. Nance
