Daily Tech Digest - October 29, 2018

OpenStack Foundation releases software platform for edge computing

StarlingX is based on key technologies Wind River developed for its Titanium Cloud product. In May of this year, Intel, which owns Wind River, announced plans to turn over Titanium Cloud to OpenStack and deliver the StarlingX platform. StarlingX is controlled through RESTful APIs and has been integrated with a number of popular open-source projects, including OpenStack, Ceph, and Kubernetes. The software handles everything from hardware configuration to host recovery, including configuration, host, service, and inventory management services, along with live migration of workloads. “When it comes to edge, the debates on applicable technologies are endless. And to give answers, it is crucial to be able to blend together and manage all the virtual machine (VM) and container-based workloads and bare-metal environments, which is exactly what you get from StarlingX,” wrote Glenn Seiler, vice president of product management and strategy at Wind River, in a blog post announcing StarlingX’s availability.



3 best practices for improving and maintaining data quality

Poor data quality has an extensive impact on business, including wrong product deliveries, off-the-mark forecasts, inadequate planning, rework, poor customer experience and loss of reputation. Most of the factors affecting data quality are its defining elements, such as accuracy, completeness and consistency. In healthcare services, for example, inaccurate patient information and health records lead to adverse health outcomes. For a retail business, inconsistent customer contact details not only create delivery issues and customer complaints but also missed marketing opportunities. For all data, validity is always crucial. If data is not validated against defined parameters such as format, range, and source, it is as good as absent. Depending on the urgency and critical nature of the operations, other factors specific to particular industries may become equally important. ... Finally, with no ambiguity, overlap or duplication, reliability of data across all sources is absolutely essential for high data quality.
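Validity checks of the kind described above (format, range, known source) can be expressed as a simple table of per-field rules. A minimal sketch in Python, with hypothetical field names and rules:

```python
import re

# Hypothetical validation rules for a customer-contact record:
# each rule returns True when the field passes.
RULES = {
    "email":  lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,  # format
    "age":    lambda v: isinstance(v, int) and 0 <= v <= 120,                      # range
    "source": lambda v: v in {"crm", "web_form", "import"},                        # known origin
}

def validate(record):
    """Return the list of field names that fail their rule."""
    return [field for field, rule in RULES.items()
            if field in record and not rule(record[field])]

good = {"email": "a@example.com", "age": 34, "source": "crm"}
bad  = {"email": "not-an-email", "age": 240, "source": "crm"}
```

A record that fails any rule can then be quarantined for correction rather than treated as absent.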


British Airways data breach worse than thought


“It demonstrates that enterprises still do not have in place robust enough security to protect their back-end systems and databases, or the measures in place to identify these attacks in real time and cut them off as soon as abnormal activity is detected. It is not beyond the means of organisations, especially those that process and manage such sensitive and critical information, to put in place tools that can analyse and detect threats or the exfiltration of data over a significant period of time.” This was especially important, said Carter, because failure to do so would put the onus on affected customers to notify their financial services providers of any fraud they may fall victim to. LogRhythm vice-president and Europe, Middle East and Africa (Emea) managing director, Ross Brewer, added: “If I were BA, I would be very worried about the impact both breaches will have on the company’s reputation. The fact that both data breaches have taken place in the past six months is extremely worrying – and very embarrassing for the airline.”


3 Keys to Reducing the Threat of Ransomware

Wouldn't it be more sensible to pay for a third-party review of security hygiene and posture, and bolster it wherever it's lacking, including penetration testing? Why rebuild? Maybe there was something wrong in the IT architecture, or the systems were outdated and needed replacement. Maybe the fear of something being left behind that might cause reinfection was too much to bear. We may never get the full story, but we do know the enormous cost of rebuilding these systems. As a CIO, I experienced numerous attempted ransomware attacks and several instances of server encryption, or attempted encryption, where we were able to take servers out of rotation. Fortunately, ransomware then was not what it is now, and though we were attacked, our backups were not affected. Luck wasn't the only reason we were able to recover so quickly. We used good cyber hygiene and best practices to reduce the hacking threat. We also took snapshots of our infrastructure every 30 minutes, with full backups nightly. We always recovered with minimal data loss.


China has been 'hijacking the vital internet backbone of western countries'

The research duo says they've built "a route tracing system monitoring the BGP announcements and distinguishing patterns suggesting accidental or deliberate hijacking." Using this system, they tracked down long-lived BGP hijacks to the ten PoPs -- eight in the US and two in Canada -- that China Telecom has been silently and slowly setting up in North America since the early 2000s. "Using these numerous PoPs, [China Telecom] has already relatively seamlessly hijacked the domestic US and cross-US traffic and redirected it to China over days, weeks, and months," researchers said. "While one may argue such attacks can always be explained by 'normal' BGP behavior, these, in particular, suggest malicious intent, precisely because of their unusual transit characteristics -- namely, the lengthened routes and the abnormal durations." In their paper, the duo lists several long-lived BGP hijacks that have hijacked traffic for a particular network, and have made it take a long detour through China Telecom's network in mainland China, before letting it reach its intended and final destination.
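The two signals the researchers cite -- lengthened AS paths and abnormal route durations -- lend themselves to a simple heuristic. A minimal sketch, with invented thresholds and example ASNs (this is not the researchers' actual route tracing system):

```python
# Illustrative heuristic only: a route whose AS path is much longer than the
# prefix's baseline AND which persists unusually long is flagged for review.
# Thresholds and all route data below are invented for the example.
def flag_suspicious(routes, baseline_len, max_extra_hops=3, max_hours=24):
    """routes: list of dicts with 'as_path' (list of ASNs) and 'hours' observed."""
    flagged = []
    for r in routes:
        lengthened = len(r["as_path"]) > baseline_len + max_extra_hops
        long_lived = r["hours"] > max_hours
        if lengthened and long_lived:
            flagged.append(r)
    return flagged

observed = [
    {"as_path": [64500, 64501, 64502], "hours": 2},                              # normal route
    {"as_path": [64500, 64509, 64510, 64511, 64512, 64513, 64502], "hours": 72}, # long detour
]
suspects = flag_suspicious(observed, baseline_len=3)
```

Requiring both conditions at once is what separates a deliberate, persistent detour from routine BGP churn.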


How to protect your organization from insider threats

Modern DLP solutions are intelligent data loss prevention systems, combining multiple disciplines including user activity monitoring, behavior analytics, and forensics in order to increase the effectiveness of a DLP implementation. These comprehensive DLP solutions allow for broader and more capable oversight that can analyze user behavior, assign risk scores, and take action based on a complex set of user activities and data access. With human behavior-driven data loss prevention, organizations place emphasis on user activity monitoring and the ability to define and then dynamically update risk scores for different types of users. Leveraging machine learning and artificial intelligence to identify anomalies, DLP can take action based on users’ behavior. Insider threats and DLP are a hot topic of conversation at board meetings. This is a positive trend, as it ensures visibility at the board level into the risks associated with insider threats and the urgency of a comprehensive DLP strategy to minimize data exfiltration risk.
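The score-then-act loop described above can be sketched in a few lines; the event weights and thresholds below are hypothetical, not those of any particular DLP product:

```python
# Hypothetical per-event risk weights: riskier activities add more points.
WEIGHTS = {
    "bulk_download": 40,
    "usb_copy": 30,
    "off_hours_login": 15,
    "normal_access": 0,
}

def risk_score(events):
    """Sum the weights of a user's observed activities."""
    return sum(WEIGHTS.get(e, 0) for e in events)

def action(score, block_at=70, alert_at=30):
    """Escalate as the score crosses (invented) thresholds."""
    if score >= block_at:
        return "block"
    if score >= alert_at:
        return "alert"
    return "allow"

score = risk_score(["off_hours_login", "bulk_download", "usb_copy"])
```

In a real deployment the weights themselves would be updated dynamically per user, which is the behavior-analytics piece the excerpt describes.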


PoC Attack Leverages Microsoft Office and YouTube to Deliver Malware


According to a Cymulate analysis posted on Thursday, the team found that it’s possible to edit that HTML code to point to malware instead of the real YouTube video. “A file called ‘document.xml’ is a default XML file used by Word that you can extract and edit,” Avihai Ben-Yossef, CTO at Cymulate, explained to Threatpost. “The embedded video configuration will be available there, with a parameter called ’embeddedHtml’ and an iFrame for the YouTube video, which can be replaced with your own HTML.” In the PoC, the replacement HTML contains a Base64-encoded malware binary that opens the download manager for Internet Explorer, which installs the malware. The video will seem to be legitimate to the user, but the malware will unpack silently in the background. “Successful exploitation can allow any code execution – ransomware, a trojan,” Ben-Yossef said, adding that detection by antivirus would depend on the specific payload’s other evasion features. Obviously, the attack would work best with a zero-day payload.
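Because a .docx file is a ZIP archive, the document.xml the researchers mention can be inspected without opening the file in Word. A defensive sketch that flags documents carrying an embeddedHtml attribute -- the sample XML below is a stand-in, not real Word markup:

```python
import io
import zipfile

def has_embedded_html(docx_bytes):
    """Return True if word/document.xml contains an 'embeddedHtml' attribute."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        if "word/document.xml" not in z.namelist():
            return False
        xml = z.read("word/document.xml").decode("utf-8", errors="replace")
        return "embeddedHtml" in xml

def make_docx(xml):
    # Build a minimal in-memory archive standing in for a real .docx.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("word/document.xml", xml)
    return buf.getvalue()

clean  = make_docx("<doc/>")
rigged = make_docx('<doc embeddedHtml="&lt;iframe ...&gt;"/>')
```

A mail gateway or endpoint scanner could quarantine any document where this check fires, regardless of the payload the attacker substituted.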


Machine Learning Becomes Mainstream: How to Increase Your Competitive Advantage

Machine learning is a part of predictive analytics, and it is made up of deep learning and statistical/other machine learning. For deep learning, algorithms are applied that allow multiple layers of learning to build more and more complex representations of data. For statistical/other machine learning, statistical algorithms and algorithms based on other techniques are applied to help machines estimate functions from learned examples. Essentially, machine learning allows computers to train by building a mathematical model based on one or more data sets. Those models are then scored on the predictions they make against new data. So when should you apply machine learning? ... With the right machine learning strategy, the barriers to adoption are actually fairly low. And, when you consider the reduced TCO and increased efficiency throughout your business, you can see how the transition can pay for itself in very little time. In addition, Intel is dedicated to establishing a developer and data science community to exchange thought leadership ideas across disciplines of advanced analytics.
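The train-then-score loop can be illustrated with a toy one-variable model; the data is synthetic and the model deliberately simple:

```python
# Toy illustration of train-then-score: fit a one-variable linear model on a
# training set, then score it on held-out points. Pure stdlib; synthetic data.
def fit_line(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def score(model, xs, ys):
    """Mean squared error of the model's predictions on held-out data."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [0, 1, 2, 3], [1, 3, 5, 7]   # exactly y = 2x + 1
model = fit_line(train_x, train_y)
mse = score(model, [4, 5], [9, 11])             # unseen points on the same line
```

Real deployments swap the hand-rolled fit for a library model, but the split between a training step and a scoring step on new data is the same.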


The US NVD is slow; the median gap between a vulnerability becoming public and appearing on the list is seven days. China’s NVD is quicker to upload public vulnerabilities, but has been accused of altering data to hide government influence. The Russian NVD, run by the country’s Federal Service for Technical and Export Control of Russia, misses many vulnerabilities and is slow with what it does publish. Good threat intelligence is more than a list of vulnerabilities. Instead of relying on NVDs alone to power their vulnerability scanning, companies should look to other sources to supplement their threat intelligence operations. According to a study by Tenable, over a third of vulnerabilities have a working exploit available on the day of disclosure, giving hackers days or more of unfettered opportunity to attack. By broadening the scope of your intelligence gathering, you can close the window of opportunity for cybercriminals and gain a richer set of data with which to defend yourself.
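The seven-day figure is a median lag between public disclosure and NVD listing; given two feeds of timestamps, computing it is straightforward. The CVE IDs and dates below are invented for illustration:

```python
from datetime import date
from statistics import median

# Invented example data: when each issue became public vs. when the NVD listed it.
disclosed = {"CVE-X-0001": date(2018, 10, 1),
             "CVE-X-0002": date(2018, 10, 2),
             "CVE-X-0003": date(2018, 10, 3)}
listed    = {"CVE-X-0001": date(2018, 10, 1),
             "CVE-X-0002": date(2018, 10, 9),
             "CVE-X-0003": date(2018, 10, 13)}

lags = [(listed[c] - disclosed[c]).days for c in disclosed]
median_lag = median(lags)   # days of exposure before the NVD catches up
```

Tracking this lag against your own supplementary feeds shows exactly how much of the attackers' head start each extra source removes.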


Services are everywhere, if we only have the lens to see them. Regrettably, we often notice them only when they are dissatisfying. Not long ago, I “discovered” an internal service in my organization: my team created a presentation to give to leadership, so we wanted it to look polished. Unfortunately, none of us had visual-design chops, so we asked someone from our design team to help. The reply was: “Is there a due date?” We didn’t have a deadline (yet), but we also had no idea when our understandably busy colleagues would be able to turn it around. This is clearly a (design) service for internal customers who have an idea of what makes it fit for their purpose -- in this case, a reliable turnaround time. We all make requests of individuals and teams all the time. But without a mutual exchange of information -- for example, expected delivery speed -- we’re going to pad our requests with extra time or fake deadlines.



Quote for the day:


"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer