Daily Tech Digest - June 05, 2018

10 Open Source Security Tools You Should Know

The people, products, technologies, and processes that keep businesses secure all come with a cost — sometimes quite hefty. That is just one of the reasons why so many security professionals spend at least some of their time working with open source security software. Indeed, whether for learning, experimenting, dealing with new or unique situations, or deploying on a production basis, security professionals have long looked at open source software as a valuable part of their toolkits. However, as we all are aware, open source does not simply mean free; globally, open source software is a huge business. With companies of various sizes and types offering open source packages and bundles with support and customization, the argument for or against open source software often comes down to its capabilities and quality. For the tools in this slide show, software quality has been demonstrated by thousands of users who have downloaded and deployed them. The list is broken down, broadly, into categories of visibility, testing, forensics, and compliance. If you don't see your most valuable tool on the list, please add it in the comments.



The growing ties between networking roles and automation

Automation was expected to steal jobs and replace human intelligence. But as network automation use cases have matured, Kerravala said, employees and organizations increasingly see how automating menial network tasks can benefit productivity. To automate, however, network professionals need programming skills to determine the desired network output. They need to be able to tell the network what they want it to do. All of this brings me to an obvious term that's integral to automation and network programming: program, which means to input data into a machine to cause it to do a certain thing. Another definition says to program is "to provide a series of instructions." To give effective instructions, a person must understand the purpose of the instructions being relayed. A person needs the foundation -- or the why of it all -- to get to the actual how. Regarding network automation, the why is to ultimately achieve network readiness for whatever the network needs to handle, whether that's new applications or more traffic, Cisco's Leary said.


5 ways location data is making the world a better place

Photo: A salesperson talks with a visitor in front of a map showing the location of an apartment complex under construction, at a showroom in Seoul, March 18, 2015. REUTERS/Kim Hong-Ji
In the insurance sector, detailed data creates better predictions and more accurate customer quotes. Yet potential purchasers often don’t know the information needed for rigorous risk assessments, such as the distance of their house from water. Furthermore, lengthy and burdensome questionnaires can lose firms business; analysis from HubSpot found that reducing form fields improves customer conversions. PCA Predict uses its Location Intelligence platform to compile free data from the Land Registry and Ordnance Survey, including LiDAR height maps, as well as commercial address data, to determine accurate information on a potential customer’s property, such as its distance from the river network, height, footprint, whether it is listed and its risk of wind damage. The model is also being developed to determine a building’s age using machine learning and road layout. “We take disparate datasets and apply different types of analysis to extract easy-to-use attributes for insurers,” says Dr Ian Hopkinson, senior data scientist at GBG, the parent company of PCA.
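To make the idea concrete, here is a minimal Python sketch (not PCA Predict's actual pipeline) of how a property coordinate might be joined against open geospatial data to derive insurer-friendly attributes such as distance to the nearest river; all coordinates and datasets below are made-up placeholders.

# Illustrative only: derive insurer-friendly attributes for a property by
# comparing its coordinate against sampled open geospatial data.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def distance_to_river_m(property_latlon, river_points):
    """Nearest distance from the property to any sampled point on the river network."""
    lat, lon = property_latlon
    return min(haversine_m(lat, lon, rlat, rlon) for rlat, rlon in river_points)

# Hypothetical inputs: one property and a coarsely sampled river centreline.
property_latlon = (51.4545, -2.5879)
river_points = [(51.4500, -2.5950), (51.4520, -2.5900), (51.4550, -2.5850)]

attributes = {
    "distance_to_river_m": round(distance_to_river_m(property_latlon, river_points)),
    "height_above_sea_m": 12.0,   # would come from a LiDAR height map in practice
    "is_listed": False,           # would come from a listed-buildings register
}
print(attributes)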


Adoption of Augmented Analytics Tools Is Increasing Among Indian Organizations

Indian organizations are increasingly moving from traditional enterprise reporting to augmented analytics tools that accelerate data preparation and data cleansing, said Gartner, Inc. This change is set to positively impact the analytics and business intelligence (BI) software market in India in 2018. Gartner forecasts that analytics and BI software market revenue in India will reach US$304 million in 2018, an 18.1 percent increase year over year. ... "Indian organizations are shifting from traditional, tactical and tool-centric data and analytics projects to strategic, modern and architecture-centric data and analytics programs," said Ehtisham Zaidi, principal research analyst at Gartner. "The 'fast followers' are even looking to make heavy investments in advanced analytics solutions driven by artificial intelligence and machine learning, to reduce the time to market and improve the accuracy of analytics offerings."


Apple’s Core ML 2 vs. Google’s ML Kit: What’s the difference?

A major difference between ML Kit and Core ML is support for both on-device and cloud APIs. Unlike Core ML, which can’t natively deploy models that require internet access, ML Kit leverages the power of Google Cloud Platform’s machine learning technology for “enhanced” accuracy. Google’s on-device image labeling service, for example, features about 400 labels, while the cloud-based version has more than 10,000. ML Kit offers a set of easy-to-use APIs for basic use cases: text recognition, face detection, barcode scanning, image labeling, and landmark recognition. Google says that new APIs, including a smart reply API that supports in-app contextual messaging replies and an enhanced face detection API with high-density face contours, will arrive in late 2018. ML Kit doesn’t restrict developers to prebuilt machine learning models. Custom models trained with TensorFlow Lite, Google’s lightweight offline machine learning framework for mobile devices, can be deployed with ML Kit via the Firebase console, which serves them dynamically.
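As a rough illustration of the custom-model path, the Python sketch below runs a TensorFlow Lite model with the standard TF Lite interpreter; on a device, the same .tflite file would be served to the app through ML Kit via the Firebase console. The model path and dummy input here are placeholders.

# Sketch: load a custom TensorFlow Lite model and run one inference.
# "model.tflite" is a placeholder path for whatever model you trained.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input shaped to whatever the model expects (e.g. a 224x224 RGB image).
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)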


How to evaluate web authentication methods

Two attributes I hadn’t given a lot of thought to are “requiring explicit consent” and “resilient to leaks from other verifiers.” The former ensures that a user’s authentication is not initiated without them knowing about it, and the latter is about preventing related authentication secrets from being used to deduce the original authentication credential. The authors evaluate all the covered authentication solutions across all attributes, and they include a nice matrix chart so you can see how each compares to the others. It’s a genius table that should have been created a long time ago. The authors rate each authentication option as satisfying, not satisfying or partially satisfying each attribute. The attributes aren’t ranked, but anyone could easily take the unweighted framework, add or delete attributes, and weight it with their own needed importance. For example, many authentication evaluators looking for real-world solutions will want to add cost (both initial and ongoing) and vendor product solutions. The authors’ candid conclusions include: “A clear result of our exercise is that no [authentication] scheme we examined is perfect – or even comes close to perfect scores.”
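As a hedged sketch of that do-it-yourself weighting, the Python below scores schemes per attribute (1 = satisfies, 0.5 = partially, 0 = does not) and sums locally chosen weights; the schemes, attributes and ratings are illustrative placeholders, not the authors' actual matrix.

# Turn the unweighted satisfies/partially/does-not matrix into a weighted score.
SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

# Weights reflect what matters to *your* deployment (cost added as a custom attribute).
weights = {
    "resilient_to_phishing": 3.0,
    "requires_explicit_consent": 1.0,
    "resilient_to_leaks_from_other_verifiers": 2.0,
    "low_cost": 2.0,
}

ratings = {
    "passwords":      {"resilient_to_phishing": "no",  "requires_explicit_consent": "yes",
                       "resilient_to_leaks_from_other_verifiers": "no",  "low_cost": "yes"},
    "hardware_token": {"resilient_to_phishing": "yes", "requires_explicit_consent": "yes",
                       "resilient_to_leaks_from_other_verifiers": "yes", "low_cost": "partial"},
}

def weighted_score(scheme_ratings):
    """Sum of attribute weights scaled by how well the scheme satisfies each attribute."""
    return sum(weights[attr] * SCORES[val] for attr, val in scheme_ratings.items())

for scheme, attrs in sorted(ratings.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{scheme}: {weighted_score(attrs):.1f} / {sum(weights.values()):.1f}")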


Advanced Architecture for ASP.NET Core Web API


Before we dig into the architecture of our ASP.NET Core Web API solution, I want to discuss what I believe is a single benefit which makes .NET Core developers’ lives so much better; that is, Dependency Injection (DI). Now, I know you will say that we had DI in .NET Framework and ASP.NET solutions. I will agree, but the DI we used in the past would be from third-party commercial providers or maybe open source libraries. They did a good job, but for a good portion of .NET developers, there was a big learning curve, and all DI libraries had their unique way of handling things. Today with .NET Core, we have DI built right into the framework from the start. Moreover, it is quite simple to work with, and you get it out of the box. The reason we need to use DI in our API is that it allows us to have the best experience decoupling our architecture layers and also to mock the data layer, or have multiple data sources built for our API. To use the .NET Core DI framework, just make sure your project references the Microsoft.AspNetCore.All NuGet package (which contains a dependency on Microsoft.Extensions.DependencyInjection).
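Since the excerpt stops short of code, here is a language-agnostic sketch of the constructor-injection pattern it describes, written in Python for brevity; in a real ASP.NET Core project the wiring happens in C# against the built-in service container, and the class names below are purely illustrative.

# The API layer depends on an abstraction, so the concrete data layer can be
# swapped for a mock or an alternative data source without touching the controller.
from abc import ABC, abstractmethod

class AlbumRepository(ABC):
    @abstractmethod
    def get_all(self) -> list:
        ...

class SqlAlbumRepository(AlbumRepository):
    def get_all(self) -> list:
        return ["(rows loaded from the real database)"]

class InMemoryAlbumRepository(AlbumRepository):
    def get_all(self) -> list:
        return ["Test Album A", "Test Album B"]  # handy for unit tests

class AlbumController:
    # The dependency is injected, not constructed inside the controller.
    def __init__(self, repository: AlbumRepository) -> None:
        self._repository = repository

    def list_albums(self) -> list:
        return self._repository.get_all()

# Production wiring vs. test wiring: only the injected implementation changes.
print(AlbumController(SqlAlbumRepository()).list_albums())
print(AlbumController(InMemoryAlbumRepository()).list_albums())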


Intuitively Understanding Convolutions for Deep Learning


The advent of powerful and versatile deep learning frameworks in recent years has made implementing convolution layers in a deep learning model an extremely simple task, often achievable in a single line of code. However, understanding convolutions, especially for the first time, can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet convolutions as a concept are fascinatingly powerful and highly extensible, and in this post we’ll break down the mechanics of the convolution operation, step by step, relate it to the standard fully connected network, and explore just how convolutions build up a strong visual hierarchy, making them powerful feature extractors for images. The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel.
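A minimal NumPy sketch of exactly that sliding-window operation (technically cross-correlation, as deep learning frameworks implement it) may help make it concrete; the input and kernel values are arbitrary.

# Slide a small kernel of weights over the input; each output pixel is the sum of
# an elementwise product between the kernel and the patch it currently covers.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1  # "valid" padding, stride 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # elementwise multiply, then sum
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [2.0, 0.0, -2.0],
                        [1.0, 0.0, -1.0]])  # a classic Sobel-style edge filter
print(conv2d(image, edge_kernel))  # 3x3 output feature map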


Windows Server 2019 embraces SDN

The new virtual network peering functionality in Windows Server 2019 allows enterprises to peer their own virtual networks in the same cloud region through the backbone network. This provides the ability for virtual networks to appear as a single network. Fundamentally, stretched networks have been around for years and have provided organizations the ability to put server, application and database nodes in different sites. However, the challenge has always been the IP addressing of the nodes in opposing sites. When there were only two static sites in a traditional wide area network, the IP scheme was relatively static: you knew the subnet and addressing of Site A and Site B. However, in the public cloud and multi-cloud world – where your target devices may actually shift between racks, cages, datacenters, regions or even hosting providers – having addresses that may change based on failover, maintenance, elasticity changes, or network changes creates a problem. Network administrators already spend considerable time addressing, readdressing and updating device tables to keep up with the dynamic movement of systems, and that time will only increase.


Managing a hybrid cloud computing environment


Ensuring the security of physical edge networking connections and the connectivity of all communication is equally essential. This requires redundant networking components that utilize built-in failover capabilities. Finally, careful selection of the power infrastructure is vital to supporting all elements of edge computing. Maintaining power at all times through the use of backup power, and integrating remote monitoring of the power infrastructure into the customer’s management system, are paramount. You can do this by seeking UPSs, rackmount power distribution units (PDUs) and power management software with remote capabilities. Being able to remotely reboot UPSs or PDUs can be extremely helpful in edge applications. In addition, solutions like Eaton’s Intelligent Power Manager software can enhance your disaster avoidance plan by allowing you to set power management alerts, configurations and action policies. By creating action policies for remediation, Eaton enables you to automate server power capping, load shedding and/or virtual machine migration should problems occur.
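As a generic illustration of such an action policy (this is not Eaton's API; the event format, thresholds and remediation hooks are all hypothetical), a sketch in Python might look like this:

# If a UPS stays on battery past a grace period, shed non-critical load;
# if remaining runtime gets too low, trigger VM migration.
import time

ON_BATTERY_GRACE_S = 120        # how long running on battery is tolerated
LOW_RUNTIME_THRESHOLD_S = 600   # remaining UPS runtime that triggers VM migration

def shed_noncritical_load():
    print("Shedding non-critical outlets on the rack PDU...")

def migrate_virtual_machines():
    print("Migrating virtual machines to another site before runtime runs out...")

def handle_power_event(event, on_battery_since):
    """Apply the action policy to one UPS status event; returns the updated timer state."""
    now = time.time()
    if event["status"] != "on_battery":
        return None  # back on utility power, reset the on-battery timer
    on_battery_since = on_battery_since or now
    if now - on_battery_since > ON_BATTERY_GRACE_S:
        shed_noncritical_load()
    if event["runtime_remaining_s"] < LOW_RUNTIME_THRESHOLD_S:
        migrate_virtual_machines()
    return on_battery_since

# Hypothetical event from a remotely monitored UPS that has been on battery for 5 minutes.
state = handle_power_event({"status": "on_battery", "runtime_remaining_s": 540},
                           time.time() - 300)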



Quote for the day:


"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell

