Daily Tech Digest - July 12, 2018

WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10
Organizations invest in extra network capacity, ultimately accommodating fictitious demand generated by bots. Accurately distinguishing human traffic from bot traffic, and "good" bots (such as search engines and price-comparison services) from "bad" ones, can translate into substantial savings and a better customer experience. The bots won't make it easy: they can mimic human behavior and bypass CAPTCHA and other challenges, while dynamic IP attacks render IP-based protection ineffective. Oftentimes, open source dev tools (PhantomJS, for instance) that can process client-side JavaScript are abused to launch brute-force, credential-stuffing, DDoS and other automated bot attacks. Managing bot-generated traffic effectively requires a unique identification of the source, such as a fingerprint. Since bot attacks span multiple transactions, the fingerprint lets organizations track suspicious activity, assign violation scores and make an educated block/allow decision with a minimal false-positive rate.
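
To make the fingerprint-and-score idea concrete, here is a minimal Python sketch of per-fingerprint violation scoring across transactions; the attribute names, weights and threshold are illustrative assumptions, not any vendor's actual scheme:

```python
import hashlib
from collections import defaultdict

# Illustrative violation weights; a real WAF would tune these continuously.
VIOLATION_WEIGHTS = {
    "headless_browser": 40,   # e.g., PhantomJS-style automation markers
    "captcha_bypass": 30,
    "rapid_requests": 20,
    "ip_rotation": 10,
}
BLOCK_THRESHOLD = 60

def fingerprint(client):
    """Combine client-side attributes into a stable ID that survives IP changes."""
    attrs = (client["user_agent"], client["screen"], client["fonts"], client["tz"])
    return hashlib.sha256("|".join(attrs).encode()).hexdigest()[:16]

scores = defaultdict(int)

def record_violation(client, violation):
    """Accumulate a violation score per fingerprint across transactions."""
    fp = fingerprint(client)
    scores[fp] += VIOLATION_WEIGHTS.get(violation, 5)
    return "block" if scores[fp] >= BLOCK_THRESHOLD else "allow"

# The same device seen from two different IPs still shares one score.
bot = {"user_agent": "Mozilla/5.0", "screen": "1920x1080", "fonts": "Arial", "tz": "UTC"}
print(record_violation(bot, "headless_browser"))  # allow (score 40)
print(record_violation(bot, "captcha_bypass"))    # block (score 70)
```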



Why your whole approach to security has to change
New technology philosophies like DevOps and Agile provide the opportunity to build security into the whole lifecycle that surrounds IT use. By embedding proper security processes around cloud resources, companies can weave security into the fabric of this new architecture from the start. Getting this degree of oversight in place involves making security goals and objectives clear to everyone, while also enabling those processes to run smoothly and effectively. It means making security management more than just a blocker for poor software; instead, it is about making services available quickly within those workflows. This process is termed transparent orchestration. Transparent orchestration involves re-wiring security to match how IT infrastructure has been rebuilt. As part of this, security must be automatically provisioned across a complete mix of internal and external networks, spanning everything from legacy data centre IT through to multi-cloud ecosystems and new container-based applications.
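
As a rough illustration of what "automatically provisioned" could look like, the sketch below applies one security baseline across a mix of environments in a single workflow; the environment names and the apply_policy hook are hypothetical placeholders, not a real provisioning API:

```python
from dataclasses import dataclass

@dataclass
class SecurityBaseline:
    require_encryption_at_rest: bool = True
    require_mfa: bool = True
    log_retention_days: int = 90

# Illustrative mix of legacy, multi-cloud and container environments.
ENVIRONMENTS = ["legacy-datacentre", "aws-vpc", "azure-vnet", "k8s-containers"]

def apply_policy(env: str, baseline: SecurityBaseline) -> None:
    # In practice this would call each platform's own tooling (cloud SDKs,
    # Terraform, admission controllers); here we just record the intent.
    print(f"[{env}] encryption={baseline.require_encryption_at_rest} "
          f"mfa={baseline.require_mfa} retention={baseline.log_retention_days}d")

def provision_everywhere(baseline: SecurityBaseline) -> None:
    """Security is provisioned in the same workflow for every environment,
    rather than bolted on per platform after the fact."""
    for env in ENVIRONMENTS:
        apply_policy(env, baseline)

provision_everywhere(SecurityBaseline())
```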



Top 3 practical considerations for AI adoption in the enterprise

Explainable AI centers on the ability to answer the question, "Why?" Why did the machine make a specific decision? The reality is that many of the new AI approaches that have emerged are inherently "black boxes": many inputs go into the box, and out of it comes a decision or recommendation. But when people try to unpack the box and figure out its logic, it becomes a major challenge. This can be tough in regulated markets, which require companies to disclose and explain the reasoning behind specific decisions. Further, the lack of explainable AI can undermine the change management needed throughout the company to make AI implementations succeed. If people cannot trace an answer back to an originating dataset or document, the value proposition becomes a hard sell to staff. Implementing AI with traceability is a way to address this challenge. For example, commercial banks manage risk across a portfolio of loans. A bank may lend money to 5,000 small- to medium-sized businesses and monitor their health through the balance sheets within that portfolio. Those sheets may be in different languages or follow different accounting standards.
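
One way to picture traceability is a model whose output decomposes into per-feature contributions, each tied back to an originating document. The following sketch assumes a simple linear risk score with illustrative weights and a hypothetical source file; real explainability tooling is far richer:

```python
# Illustrative weights for a linear risk score over balance-sheet ratios.
WEIGHTS = {"debt_to_equity": -2.0, "current_ratio": 1.5, "revenue_growth": 1.0}

def score_with_trace(features, provenance):
    """Return the risk score plus a trace of what drove it and where each
    input originated (e.g., which balance sheet)."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    total = sum(contributions.values())
    trace = [
        {"feature": f, "value": features[f], "contribution": c,
         "source": provenance[f]}
        for f, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                           reverse=True)
    ]
    return total, trace

features = {"debt_to_equity": 1.8, "current_ratio": 1.2, "revenue_growth": 0.05}
provenance = {f: "balance_sheet_2017.pdf" for f in features}  # hypothetical document

score, trace = score_with_trace(features, provenance)
print(f"risk score: {score:.2f}")
for step in trace:
    print(step)  # each line answers "why?" and points to an originating document
```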



Blockchain as a Re-invention of the Business
The current model is very much authority-centric, leaving a narrow space for individuals and small legal entities, who are mere spectators or limited contributors. If we take into account the democratization of choice that boosts people's motivation nowadays, we can come to the conclusion that the current approach stands against the right to self-empowerment. Blockchain, a wonderful combination of mathematics and technology, has made it possible to distribute power to where it actually belongs. "With great power comes great responsibility," as the Marvel Comics superheroes used to say; this could be the motto of DLT. Blockchain is a natural service layer that hooks into the Internet as its connectivity layer. Business globalization and economic freedom are the two forces of paramount significance underpinning the evolution of the distributed transactional platform. The centralized system played a vital role in times of corruption, global disorder and war; in the current reality, people deserve to operate within a planetary technological network.


A Look at the Technology Behind Microsoft's AI Surge

Lambda architecture, while a general computing concept, is built into the design of Microsoft's IoT platform. The design pattern manages large volumes of data by splitting them into two paths -- the speed path and the batch path. The speed path offers real-time querying and alerting, while the batch path is designed for larger-scale data analysis. While not all AI scenarios use both paths, this is a very common edge computing pattern. At the speed layer, Azure offers two main options -- Microsoft's own Azure Stream Analytics offering and the open source Apache Kafka, which can be implemented using the HDInsight Hadoop-as-a-Service (HDaaS) offering or on customers' own virtual machines (VMs). Both Stream Analytics and Kafka offer their own streaming query engines (Stream Analytics' engine is based on T-SQL). Additionally, Microsoft offers Azure IoT Hubs and Azure Event Hubs, which connect edge devices (such as sensors) to the rest of the architecture. IoT Hubs offer a more robust solution with better security; Event Hubs are designed specifically for streaming Big Data from system to system.
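
A minimal sketch of the speed/batch split may help: each incoming event fans out to both paths. The threshold and event shape are illustrative, and in Azure terms the ingest step would be IoT Hub or Event Hubs rather than a Python function:

```python
from collections import deque

ALERT_THRESHOLD = 90.0   # e.g., an overheating sensor (illustrative)
batch_store = deque()    # stand-in for a data lake / blob storage

def speed_path(event):
    """Low-latency path: query/alert on the event as it arrives."""
    if event["temperature"] > ALERT_THRESHOLD:
        print(f"ALERT device={event['device']} temp={event['temperature']}")

def batch_path(event):
    """High-throughput path: persist raw events for offline analysis."""
    batch_store.append(event)

def ingest(event):
    # The ingestion layer fans the same event out to both paths.
    speed_path(event)
    batch_path(event)

for e in [{"device": "s1", "temperature": 72.0},
          {"device": "s2", "temperature": 95.5}]:
    ingest(e)

print(f"{len(batch_store)} events stored for batch analysis")
```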




U.S. regulators grappling with self-driving vehicle security
U.S. Transportation Secretary Elaine Chao said in San Francisco on Tuesday that "one thing is certain — the autonomous revolution is coming. And as government regulators, it is our responsibility to understand it and help prepare for it." She said "experts believe AVs can self-report crashes and provide data that could improve response to emergency situations." One issue is whether self-driving vehicles should be required to be accessible to all disabled individuals, including the blind, the report noted. The Transportation Department is expected to release updated autonomous vehicle guidance later this summer that could address some of the issues raised during the meetings. Automakers, Alphabet Inc's Waymo unit and other participants in the nascent autonomous vehicle industry have called for federal rules to avoid a patchwork of state regulation. However, the process of developing a federal legal framework for such vehicles is slow-moving.


The rise of artificial intelligence DDoS attacks
The major turning point in the evolution of DDoS came with the automatic spreading of malware. Malware, a term you hear a lot, simply means malicious software. Its automatic spreading represented the major route to automation and marked the first phase of fully automated DDoS attacks: attackers could now increase distribution and schedule attacks without human intervention. Malware could automatically infect thousands of hosts and apply lateral movement techniques, spreading from one network segment to another. Moving between network segments is known as beachheading, and malware could beachhead from one part of the world to another. There was still one drawback, and for the bad actor it was a major one. The environment was static, never dynamically changing signatures based on responses from the defense side. The botnets did not vary their behavior; they were ordered by the C&C servers to sleep and wake up, with no mind of their own. And, as I said, there is only so much bandwidth out there, so these types of network attacks started to become less effective.
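
To see why that static behavior was a drawback, consider this minimal sketch: a bot that sleeps and wakes only on C&C orders beacons at a nearly fixed interval, which a defender can flag statistically. The timings and tolerance value are illustrative:

```python
from statistics import pstdev

def looks_like_fixed_beacon(timestamps, jitter_tolerance=0.5):
    """Flag a host whose inter-request gaps are nearly constant,
    i.e., a signature that never changes in response to the defense."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return len(gaps) >= 3 and pstdev(gaps) < jitter_tolerance

bot_times = [0.0, 60.0, 120.1, 179.9, 240.0]   # C&C wake-ups every ~60s
human_times = [0.0, 4.2, 31.0, 95.5, 97.1]     # irregular browsing

print(looks_like_fixed_beacon(bot_times))      # True: static behavior stands out
print(looks_like_fixed_beacon(human_times))    # False
```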



Automation could lift insurance revenue by $243 billion
First, explaining the vision clearly and securing leadership buy-in. "By establishing a clear and compelling vision, organizations demonstrate that intelligent automation is a strategic imperative and are able to answer critical questions," the report says. Second, developing a clear pilot process. "The automation business case will need to assess the impact on transaction processing time and employee time saved and consider variables such as the volume of transactions or the number of exceptions in a specific process," according to Capgemini; a rough sketch of such a calculation appears below. Firms should also consider starting with "low-hanging fruit" and engaging talent through hackathons and accelerators, the report says. Third, scaling up with an automation center of excellence (CoE). "To promote effective collaboration with the CoE, organizations should consider incentivizing functions based on business benefits derived from implementation of intelligent automation," Capgemini suggests. Fourth, industrializing automation.
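
As a rough illustration of the pilot business case described in the second step, the sketch below weighs transaction time saved against volume and exception rates; all figures are hypothetical placeholders, not numbers from the Capgemini report:

```python
def automation_case(volume_per_year, minutes_saved_per_txn,
                    exception_rate, hourly_cost):
    """Estimate annual savings for one candidate process."""
    automated = volume_per_year * (1 - exception_rate)  # exceptions stay manual
    hours_saved = automated * minutes_saved_per_txn / 60
    return hours_saved * hourly_cost

# A high-volume, low-exception process is the "low-hanging fruit".
claims_intake = automation_case(120_000, 8, 0.05, 35.0)
underwriting = automation_case(9_000, 25, 0.30, 35.0)
print(f"claims intake: ${claims_intake:,.0f}/yr")
print(f"underwriting:  ${underwriting:,.0f}/yr")
```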


In-memory computing: enabling continuous learning for the digital enterprise
Today’s in-memory computing platforms are deployed on a cluster of servers that can be on-premises, in the cloud, or in a hybrid environment. The platforms leverage the cluster’s total available memory and CPU power to accelerate data processing while providing horizontal scalability, high availability, and ACID transactions with distributed SQL. When implemented as an in-memory data grid, the platform can be easily inserted between the application and data layers of existing applications. In-memory databases are also available for new applications or when initiating a complete rearchitecting of an existing application. The in-memory computing platform also includes streaming analytics to manage the complexity around dataflow and event processing. This allows users to query active data without impacting transactional performance. This design also reduces infrastructure costs by eliminating the need to maintain separate OLTP and OLAP systems.
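
To make the data-grid placement concrete, here is a minimal sketch of an in-memory grid sitting between the application and its data layer, serving reads from partitioned memory and falling back to the database on a miss; the two-node setup and dict-backed store are illustrative stand-ins:

```python
class InMemoryGrid:
    def __init__(self, nodes, backing_store):
        self.nodes = [{} for _ in range(nodes)]  # one partition per server
        self.backing_store = backing_store       # existing data layer

    def _partition(self, key):
        return self.nodes[hash(key) % len(self.nodes)]

    def get(self, key):
        part = self._partition(key)
        if key not in part:                      # cache miss: read-through
            part[key] = self.backing_store[key]
        return part[key]

    def put(self, key, value):
        self._partition(key)[key] = value        # write to memory first
        self.backing_store[key] = value          # then persist (write-through)

db = {"order:1": {"total": 120.0}}               # stand-in for the database
grid = InMemoryGrid(nodes=2, backing_store=db)
print(grid.get("order:1"))                       # first read hits the database
grid.put("order:2", {"total": 75.5})
print(grid.get("order:2"))                       # served from memory
```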



Hospital Diverts Ambulances Due to Ransomware Attack
The ransomware attack Monday impacted the enterprise IT infrastructure, including the electronic health records system, at Harrisonville, Mo.-based Cass Regional Medical Center, which has 35 inpatient beds and several outpatient clinics, a spokeswoman tells Information Security Media Group. As of Wednesday morning, about 70 percent of Cass' affected systems had been restored, she says. Except for diverting urgent stroke and trauma patients to other hospitals "out of precaution," Cass Regional has continued to provide inpatient and outpatient services for less urgent cases as it recovers from the attack. "We've gone to our downtime processes," she says; these include resorting to paper records while the hospital's Meditech EHR system is offline during the restoration and forensics investigation. The hospital is working with an unnamed international computer forensics firm to decrypt data in its systems, she adds, declining to disclose the type of ransomware involved in the attack or whether the hospital paid a ransom to obtain a decryption key from the extortionists.


Quote for the day:

"The mediocre leader tells. The good leader explains. The superior leader demonstrates. The great leader inspires." -- Gary Patton
