Daily Tech Digest - May 29, 2020

Cases dealt with by AI courts rely heavily on blockchain evidence. For the uninitiated, blockchain is literally a chain of digital blocks: a system for storing digital information (the block) in a public database (the chain). Blockchain preserves information about transactions such as the date, time and purchase amount. A classic illustration would be a purchase on Amazon, which comprises a series of transactions that are recorded and kept on a digital platform. Each ‘block’ added to the ‘chain’ enters the public domain, where it remains preserved. The critical questions are: is blockchain tamper-proof? Is it impossible for human intervention to alter its data? Is blockchain data immutable and time-stamped, and can it safely be used as an auditable trail? The judges in China think so. China’s Supreme People’s Court has put the matter to rest, ruling that evidence authenticated with blockchain technology is binding in legal disputes: "...internet courts shall recognize digital data that are submitted as evidence if relevant parties collected and stored these data via blockchain with digital signatures, reliable timestamps and hash value verification or via a digital deposition platform, and can prove the authenticity of such technology used."
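The ruling hinges on exactly the properties a hash chain provides: hash value verification, reliable timestamps and tamper evidence. The sketch below is a simplified illustration (not any court's or platform's actual system; the helper names and sample transactions are hypothetical) of why editing a stored transaction is detectable: changing any field breaks the hashes that link the blocks.

import hashlib
import json
import time


def block_hash(block: dict) -> str:
    # Hash everything in the block except its own hash field.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


def add_block(chain: list, data: dict) -> None:
    block = {
        "index": len(chain),
        "timestamp": time.time(),  # the "reliable timestamp"
        "data": data,              # e.g. the purchase record
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)


def verify(chain: list) -> bool:
    # Re-compute every hash; an edited block or a broken link fails the audit.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


chain: list = []
add_block(chain, {"item": "book", "amount": 12.99})
add_block(chain, {"item": "lamp", "amount": 34.50})
print(verify(chain))                 # True
chain[0]["data"]["amount"] = 0.01    # tamper with the first transaction
print(verify(chain))                 # False -- the audit trail exposes the edit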


GitHub Supply Chain Attack Uses Octopus Scanner Malware

When Octopus Scanner lands on a machine, it looks for signs indicating the NetBeans IDE is in use on a developer's system, GitHub security researcher Alvaro Muñoz explains in a blog post on the findings. If it doesn't find anything, the malware takes no action. If it does, it ensures that every time a project is built, any resulting JAR files are infected with a dropper. When executed, the payload ensures persistence and spreads a remote access Trojan (RAT), which connects to command-and-control (C2) servers. The malware continues to spread by infecting NetBeans projects, or JAR files. This way, it backdoors healthy projects so that when developers release code to the public, it contains malware. The goal of Octopus Scanner is to insert backdoors into artifacts built by NetBeans so the attacker can use these resources as part of the C2 server, says GitHub's Nico Waisman. "When the end user deploys the workload, they have given the attacker access via the backdoor to their resources for use as part of a command-and-control server," he adds.


How the coronavirus pandemic is affecting developers' mental health

Working from home has always been a source of controversy. While two-thirds of employees prefer to do so--more than a third would choose this perk over a pay raise and another 37% would take a 10% pay cut to stay home--management has traditionally been less than thrilled with the idea. Executives have often viewed it as a way for workers to underperform in their roles or fly under the radar. As a result, now that many organizations have no choice but to support work-from-home arrangements, these are being doled out with increased expectations and heftier accountability requirements. The economic downturn and the threat of looming layoffs don't help the situation. I can say I've put in more hours than ever before proving my value in my role to ensure that the systems and services for which I am responsible stay up and running. ... Without commutes it can seem like there are more hours in the day, but at the same time there aren't clear breaks between home and work time, nor the regular breaks for mentally recharging, like going out for coffee or even just visiting the snack area and talking to coworkers.


Create Deepfakes in 5 Minutes with First Order Model Method

The basis of deepfakes, or image animation in general, is to combine the appearance extracted from a source image with motion patterns derived from a driving video. For these purposes deepfakes use deep learning, which is where their name comes from (deep learning + fake). To be more precise, they are created using a combination of autoencoders and GANs. An autoencoder is a simple neural network that uses unsupervised learning (or self-supervised learning, to be more accurate). Autoencoders get their name because they automatically encode information, and they are usually used for dimensionality reduction. An autoencoder consists of three parts: the encoder, the code and the decoder. The encoder processes the input, in our case an input video frame, and encodes it. This means it transforms the information gathered from the frame into some lower-dimensional latent space – the code. This latent representation contains information about key features of the video frame, such as facial features and body posture. In layman's terms, it captures what the face is doing – whether it is smiling, blinking, and so on.
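To make the encoder → code → decoder structure concrete, here is a minimal autoencoder sketch in PyTorch. This is not the First Order Model itself; the layer sizes and the flattened 64x64 frame are illustrative assumptions.

import torch
import torch.nn as nn

FRAME_DIM = 64 * 64 * 3   # a flattened 64x64 RGB video frame (assumed size)
CODE_DIM = 128            # the low-dimensional latent "code"


class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the frame into the latent representation,
        # which is where features like pose and expression end up.
        self.encoder = nn.Sequential(
            nn.Linear(FRAME_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, CODE_DIM),
        )
        # Decoder: reconstruct the frame from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(CODE_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, FRAME_DIM), nn.Sigmoid(),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        code = self.encoder(frame)   # lower-dimensional latent space
        return self.decoder(code)    # reconstruction of the input


# Self-supervised training step: the frame is both the input and the target.
model = AutoEncoder()
frames = torch.rand(8, FRAME_DIM)    # a dummy batch of 8 frames
loss = nn.functional.mse_loss(model(frames), frames)
loss.backward()
print(loss.item())

Training on reconstruction alone is what makes the approach self-supervised: no labels are needed beyond the frames themselves.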


Mobile security forces difficult questions

When it comes to security, compliance and what IT or Security have the right to do, neither is demonstrably better, unless you're willing to put rights and restrictions in writing and — this is the hard part — enforce them. The biggest worry for either mode involves remote wipe. When a device is suspected to have been stolen, remote wipe needs to happen to reduce the chance of enterprise data being stolen or an attack being waged. That question becomes difficult when the device is owned by the employee. Does the enterprise have the right to wipe it and permanently delete any personal data, images, messages, videos, etc.? We'll get back to BYOD deletions in a moment. But for corporate devices, the deletion would seem to be much easier. And yet, it's not. Many companies encourage employees not to use the corporate mobile device for anything other than work, but few put it in writing, stress that the company may have to obliterate everything on the phone in the case of a perceived security emergency, and insist that the policy be signed before the phone is distributed.


Digital Distancing with Microsegmentation

Microsegmentation improves data center security by controlling the network traffic into and out of a network connection. Ultimately, the goal of microsegmentation is to implement Zero Trust. Done properly, microsegmentation is effectively a whitelist for network traffic. This means that systems on any given network can communicate only with the specific systems they need to communicate with, in the manner they are supposed to communicate, and nothing else. With connections and communications so regimented, microsegmentation is among the best protections we have today against lateral compromise. It allows microsegmentation administrators to protect whatever is on the other end of a network connection from whatever else is on the network, and it gives everything else on the network a basic level of protection from whatever might be on the other end of that connection. This is a huge change from the "eggshell computing" model, in which all defenses are concentrated at the perimeter (the eggshell) but everything behind that edge is wide open (the soft insides of the egg).
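As a rough illustration of the whitelist idea (not any vendor's product; the segment names, ports and rules are hypothetical), a flow is permitted only if it matches an explicit rule, and everything else, including lateral traffic between peers, is denied by default:

from typing import NamedTuple


class Rule(NamedTuple):
    src: str     # source segment or host
    dst: str     # destination segment or host
    port: int    # destination port
    proto: str   # protocol


# Hypothetical policy: the web tier may reach the app tier, the app tier
# may reach the database, and nothing else is permitted.
ALLOW_LIST = {
    Rule("web", "app", 8443, "tcp"),
    Rule("app", "db", 5432, "tcp"),
}


def is_allowed(src: str, dst: str, port: int, proto: str) -> bool:
    # Default deny: only flows that exactly match a whitelist rule pass.
    return Rule(src, dst, port, proto) in ALLOW_LIST


print(is_allowed("web", "app", 8443, "tcp"))   # True  -- explicitly whitelisted
print(is_allowed("web", "db", 5432, "tcp"))    # False -- lateral path denied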


Cisco Throws Its Weight Behind SASE

SASE represents an opportunity to put more of Cisco’s networking and security services in the cloud, said Jeff Reed, SVP of product for Cisco’s security business group. Cisco’s SASE offering will tie together elements of its networking, security, and zero-trust product lines. This includes elements of its Viptela and Meraki SD-WAN platforms to address SASE’s WAN and routing requirements. Meanwhile, for security, the vendor will lean on Cisco Umbrella for secure web gateway, domain name system (DNS), firewall, and cloud access security broker (CASB) functionality. Finally, Cisco will integrate core elements of its zero-trust networking portfolio — which includes Duo, SD-Access, and AnyConnect — to verify identity and enhance the overall security of the offering. “We had this opportunity … to basically tie all the strength we have on the network side into the abilities and capabilities we have on the security side,” Reed said. But Reed emphasized that Cisco won’t be “lifting and shifting” existing constructs and running them in the cloud. Cisco is fully embracing the cloud-native underpinnings of SASE, he said. “We’re doing cloud native, so we’re not just lifting and shifting our virtual firewall in the cloud.”


Compare a product vs. project mindset for software development

Enterprises have started to recognize the danger of a project mindset, namely, that everyone focuses less on the product. "A perfect project management system can complete every task ... in a vacuum, with amazing results -- and still fail when it comes time to go to market," said Alexander M. Kehoe, operations director at Caveni Digital Solutions, a web design consultancy. Apple has applied both project and product mindsets. Apple's iPhone innovation enabled it to grow into one of the largest companies in the world. However, critics accuse Apple of releasing a nearly carbon-copy iPhone each year. According to these critics, product quality for these phones has stagnated, as Apple finishes projects with little or no consideration on the product side. Because of this reliance on project-oriented thinking, Kehoe said, the next major mobile phone innovation might not come from Apple. If another company takes the lead in mobile phone innovation, Apple might see its market dominance evaporate overnight, he said.



Report: Debugging Efforts Cost Companies $61B Annually


The report also notes software engineers spend an average of 13 hours fixing a single software failure. According to the report, 41% of respondents identified reproducing a bug as the biggest barrier to finding and fixing bugs faster, followed by writing tests (23%) and actually fixing the bug (23%). More than half (56%) said they could release software one to two days faster if reproducing failures were not an issue. Just over a quarter of developer time (26%) is spent reproducing and fixing failing tests. On the plus side, 88% of respondents said their organizations have adopted continuous integration (CI) practices, with more than 50% of businesses reporting they can deploy new code changes and updates at least daily. Over a third (35%) said they can make hourly deployments. Undo CEO Barry Morris said the report makes it clear organizations need to be able to record software execution to reduce the amount of time it takes to find bugs. Unfortunately, even then, finding a bug is still a labor-intensive process that can involve analyzing millions of lines of code.


Using Cloud AI for Sentiment Analysis

Natural Language Toolkit (NLTK) is a powerful Python library for natural language processing (NLP) and machine learning. Popular cloud services offer some alternative NLP tools that use the same underlying concepts as NLTK. ... If you've followed the NLP sentiment analysis articles we started in Introducing NLTK for Natural Language Processing, you've seen one established approach. The following overviews will show you what the interface and response look like for sentiment analysis on these cloud services. In many cases it's very similar to NLTK, just using the horsepower of someone else's computers. Amazon Web Services (AWS) provides the Amazon Comprehend NLP service, which includes a range of features analogous to some of what you’ll find in NLTK. Similar to NLTK’s pos_tag, the AWS service can identify parts of speech (POS) and tag them as proper names, places, locations and so forth. It can detect 100 languages in unstructured text, and includes text summarization capabilities that identify and extract the key phrases contributing to the overall meaning of a given piece of text.
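As a rough sketch of what calling Comprehend looks like with boto3 (this assumes AWS credentials are already configured; the region and sample text are illustrative):

import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
text = "The new release is fast, stable, and a pleasure to use."

# Sentiment, roughly the cloud counterpart of an NLTK sentiment classifier.
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])

# Parts of speech, analogous to NLTK's pos_tag.
syntax = comprehend.detect_syntax(Text=text, LanguageCode="en")
print([(t["Text"], t["PartOfSpeech"]["Tag"]) for t in syntax["SyntaxTokens"]])

# Key phrases that contribute to the overall meaning of the text.
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
print([p["Text"] for p in phrases["KeyPhrases"]])

# Language detection across the languages Comprehend supports.
print(comprehend.detect_dominant_language(Text=text)["Languages"])

The response shapes mirror what you would build yourself with NLTK – a sentiment label plus per-class scores, token-level POS tags and a ranked list of key phrases – just computed on someone else's hardware.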



Quote for the day:

"If you're not prepared to be wrong, you'll never come up with anything original." -- Sir Ken Robinson

