Daily Tech Digest - June 25, 2018

What Is A Zero-Day Exploit? A Powerful But Fragile Weapon

A zero-day is a security flaw that has not yet been patched by the vendor and can be exploited and turned into a powerful but fragile weapon. Governments discover, purchase, and use zero-days for military, intelligence, and law enforcement purposes, a controversial practice because it leaves society defenseless against other attackers who discover the same vulnerability. Zero-days command high prices on the black market, but bug bounties aim to encourage discovery and reporting of security flaws to the vendor. The patching crisis means zero-days are becoming less important, and so-called 0ld-days are becoming almost as effective. A zero-day gets its name from the number of days a patch has existed for the flaw: zero. Once the vendor announces a security patch, the bug is no longer a zero-day (or "oh-day," as the cool kids like to say). After that, the security flaw joins the endless legions of patchable but unpatched 0ld-days. In the past, say ten years ago, a single zero-day might have been enough for remote pwnage, which made discovery and possession of any given zero-day extremely powerful.



Address network scalability requirements when selecting SD-WAN


Calculating scalability based on the number of sites can be trickier. Scalability requirements include not only provisioning sufficient bandwidth for all your sites; network architecture also matters when considering the scale needed to support a large number of branches. Some SD-WAN offerings are designed to spin up a virtual pipe from every site to every other site and maintain it perpetually. That option puts a large VPN-management burden on the service, one that grows quadratically with the number of sites. Other SD-WAN services may also depend on VPNs, but without the need to keep each VPN up constantly. For example, the service might allow customers to precalculate some of the necessary operating parameters for the VPNs and instantiate them only when needed for a network session. This option can support far more nodes than the previous design. Still other SD-WAN products take a different approach entirely, without big VPN meshes: they employ architectures in which the work of supporting the N+1st site is the same as the work of supporting the second site. This design could support even more nodes.
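A quick back-of-envelope calculation makes the scaling difference concrete. The sketch below (illustrative only; real SD-WAN products add controllers, redundancy, and overlays on top of this) counts tunnels for a persistent full mesh versus a design where each branch maintains only one link:

```python
def full_mesh_tunnels(n: int) -> int:
    """Persistent full mesh: every site keeps a tunnel to every other site,
    so the tunnel count is n choose 2 -- quadratic growth."""
    return n * (n - 1) // 2

def per_site_tunnels(n: int) -> int:
    """A design where adding the N+1st site costs the same as adding the
    second: one new link per branch -- linear growth."""
    return n - 1

for sites in (10, 100, 1000):
    print(f"{sites:>5} sites: full mesh {full_mesh_tunnels(sites):>7} tunnels, "
          f"one-per-site {per_site_tunnels(sites):>4} tunnels")
```

At 1,000 branches the full mesh maintains 499,500 tunnels versus 999 for the linear design, which is why the architecture matters more than raw bandwidth as branch counts grow.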


Ex-Treasury Official Touts the Promise of Fintech ‘Sandboxes’

As it stands now, "there's nothing that calls itself a sandbox" in the U.S., Crane said. But comments by a Treasury official on Thursday at a SIFMA event about an upcoming Treasury report on such regulatory tools signal the promise of movement. What exactly is a "regulatory sandbox"? As the RegTechLab report explains, it's a new tool allowing companies "to test new products, services, delivery channels or business models in a live environment, subject to appropriate conditions and safeguards." Regulators, the report continues, "have also taken other proactive steps to engage with industry directly, and in some cases pursued mechanisms less formal than sandboxes to facilitate testing or piloting of new innovations." Craig Phillips, counselor to the Treasury secretary, noted on Thursday that the financial services landscape "has over 3,300 new fintech companies" and that "over 20% of all personal loans" originate in the fintech marketplace. "We need a new approach by regulators that permits experimentation for services and processes," said Phillips, adding that it could include regulatory sandboxes, aka innovation facilitators.


Adapting to the rise of the holistic application


A shift in mindset is needed. McFadin says it is much harder to call a project "done," as each element can be changed or updated at any time. While the services can be more flexible, it is necessary to think differently about the role of software developers. Companies that have implemented agile development properly should be equipped to manage this change more effectively. However, those that just namecheck agile, or don't engage in the process fully, may well struggle. Eric Mizell, vice-president of solution engineering at software analytics company OverOps, claims the new composable, containerised, compartmentalised world of software is creating headaches for those tasked with maintaining the reliability of these complex applications. "Even within the context of monolithic applications, our dependence on 30-year-old technology, such as logging frameworks to identify functional issues in production code, is sub-standard at best – within the context of microservices and holistic applications, it is nearly worthless," says Mizell.


Blockchain Watchers Say Decentralized Apps Are Around The Corner

More than a decade ago, Apple had to deal with the perennial chicken-and-egg problem: finding killer apps that made people want to buy an iPhone. Developers building apps on blockchain technology face the same dilemma. Not enough people are using browsers and tokens that run on a blockchain network, so it's hard to amass the number of users needed to propel a new app to success. But that hasn't stopped people from trying, or researchers from divining that decentralized apps, or "dapps," really are just around the corner. One recent report from Juniper Research, a market intelligence firm in the U.K., states that in the coming year we'll see a "significant expansion" in the deployment of dapps built on blockchain technology. Regular iPhone and Android users should be able to download a dapp on their smartphones "by the end of the year," Juniper's head of forecasting, Windsor Holden, told Forbes, adding that the dapps most likely to gain mass adoption first would deal with verifying identity or tracking the provenance of products or food in the supply chain.


IoT could be the killer app for blockchain

The rise of edge computing is critical in scaling up tech deployments, owing to reduced bandwidth requirements, faster application response times and improvements in data security, according to Juniper Research. Blockchain experts from the IEEE believe that when blockchain and IoT are combined, they can transform vertical industries. While financial services and insurance companies are currently at the forefront of blockchain development and deployment, the transportation, government and utilities sectors are now engaging more because of their heavy focus on process efficiency, supply chain and logistics opportunities, said David Furlonger, a Gartner vice president and research fellow. For example, pharmaceuticals are required by law to be shipped and stored in temperature-controlled conditions, and data about that process is required for regulatory compliance. The process for tracking drug shipments, however, is highly fragmented. Many pharmaceutical companies pay supply chain aggregators to collect the data along the journey to meet the regulatory standards.


Serverless Native Java Functions using GraalVM and Fn Project

The Fn Project is an open-source container-native serverless platform that you can run anywhere: in any cloud or on-premises. It's easy to use, supports every programming language, and is extensible and performant. It is an evolution of the IronFunctions project from iron.io and is maintained mainly by Oracle, so you can expect an enterprise-grade solution with first-class support for building and testing. It leverages container technology to run, and you can get started very quickly; the only prerequisite is having Docker installed. ... Java is often blamed for being heavy and unsuitable for running as a serverless function. That reputation does not come from nothing: Java traditionally needs a full JRE to run, with slow startup times and high memory consumption compared to native executables such as those produced by Go. Fortunately, this is no longer true: with newer versions of Java you can create modular applications, compile ahead-of-time, and use new and improved garbage collectors in both the OpenJDK and OpenJ9 implementations. GraalVM is a new flavor that delivers a JVM supporting multiple languages and compilation into a native executable or shared library.


Data Science for Startups: Deep Learning


Deep learning provides an elegant solution to handling these types of problems: instead of writing a custom likelihood function and optimizer, you can explore different built-in and custom loss functions that can be used with the optimizers provided. This post will show how to write custom loss functions in Python when using Keras, and show how different approaches can be beneficial for different types of data sets. I'll first present a classification example using Keras, and then show how to use custom loss functions for regression. The image below is a preview of what I'll cover in this post. It shows the training history of four different Keras models trained on the Boston housing prices data set. Each of the models uses a different loss function, but all are evaluated on the same performance metric, mean absolute error. For the original data set, the custom loss functions do not improve the performance of the model, but on a modified data set the results are more promising.
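To make the idea concrete, here is a minimal sketch of the math behind one such custom loss, written in plain Python rather than Keras backend ops. The data values are invented, and mean squared log error is chosen here only as an illustrative custom loss; it is not necessarily one of the four the post trains:

```python
import math

# Invented example targets and predictions for a regression task.
y_true = [100.0, 250.0, 50.0, 400.0]
y_pred = [110.0, 200.0, 55.0, 380.0]

def mean_absolute_error(y_true, y_pred):
    """The shared evaluation metric: average absolute difference."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_squared_log_error(y_true, y_pred):
    """An example custom training loss. Working in log space penalizes
    relative rather than absolute error, which can help when targets
    (like housing prices) span a wide range."""
    return sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_absolute_error(y_true, y_pred))   # 21.25
print(mean_squared_log_error(y_true, y_pred))
```

In Keras, a custom loss is written with the same `(y_true, y_pred)` signature using backend tensor operations and passed to `model.compile(loss=...)`, while the evaluation metric stays fixed so the models remain comparable.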


REST API Error Handling — Problem Details Response

RFC 7807 defines a "problem detail" as a way to carry machine-readable details of errors in an HTTP response, avoiding the need to define new error response formats for HTTP APIs. By providing more specific machine-readable messages with an error response, API clients can react to errors more effectively, which ultimately makes API services more reliable, both from a REST API testing perspective and for the clients themselves. In general, the goal of error responses is to create a source of information that not only informs the user of a problem but also points toward its solution. Simply stating a problem does nothing to fix it, and the same is true of API failures. RFC 7807 provides a standard format for returning problem details from HTTP APIs. ... The advantages of using it include unified interfaces, making APIs easier to build, test and maintain. More advantages will likely follow as more API providers adopt the standard.
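For reference, a problem-details body is ordinary JSON served with a dedicated media type. The sketch below builds one in Python, reusing the "out of credit" example values from RFC 7807 itself (the `example.com` URI and account path are the RFC's illustrative placeholders, not a real API):

```python
import json

# The five fields defined by RFC 7807 for a problem-details object.
problem = {
    "type": "https://example.com/probs/out-of-credit",  # URI identifying the problem type
    "title": "You do not have enough credit.",          # short human-readable summary
    "status": 403,                                      # HTTP status for this occurrence
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",              # URI of this specific occurrence
}

# The RFC registers a dedicated media type for these responses.
headers = {"Content-Type": "application/problem+json"}

body = json.dumps(problem)
print(body)
```

Because `type` and `title` identify the problem class while `detail` and `instance` describe the occurrence, clients can branch on the machine-readable fields and show the human-readable ones, without parsing free-form error strings.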


Protecting IoT components from being physically compromised


Disruption of these industrial devices can cause catastrophic events on an international scale, hence the importance of implementing security solutions against a variety of attack vectors. The sole purpose is to prevent the intrusion of unauthorized (external or internal) actors and avoid disruption of critical control processes. This is not a theory but a disturbing fact: in 2017, a group of researchers from Georgia Tech developed a worm named "LogicLocker" that caused several PLC models to transmit incorrect data to the systems they control, with harmful implications. The common security methods for industrial networks are based mainly on the integration of dedicated network devices connected to the traffic artery at central junctions (usually next to network switches). This security method sniffs the data flow between the PLCs themselves, between the PLCs and the cloud (public or private), and between the user interface (HMI) and the cloud.



Quote for the day:


"Always and never are two words you should always remember never to use." -- Wendell Johnson

