For any business, decisions about what security actions to take should be based on risk, rather than on an ad hoc approach to prioritizing fixes. For example, TLS 1.0, a web cryptography protocol, has a vulnerability that can be exploited by the POODLE attack; even so, it is not considered a critical exposure for most organizations. The PCI Security Standards Council, for instance, is not requiring the removal of TLS 1.0 from existing installations until June 2016. Were I assessing risks for an organization, this would probably not be the top item on my list. When using a risk-based approach to vulnerability management, the challenge is in properly assessing the business risk of a given vulnerability. This is where a CISO with knowledge of the business side as well as the technology side comes in.
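The prioritization idea above can be sketched as a simple scoring exercise. This is a minimal illustration, not any standard formula: the weights, the scores, and both vulnerability entries are made-up assumptions, chosen to show why a well-known flaw like POODLE on TLS 1.0 can still rank below a less famous issue with higher business impact.

```python
# Hypothetical sketch of risk-based vulnerability prioritization.
# All scores and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vuln:
    name: str
    severity: float           # technical severity, 0-10 (CVSS-like)
    business_impact: float    # 0-10 judgment of impact to this business
    exploit_likelihood: float # 0-1 estimate of realistic exploitation

def risk_score(v: Vuln) -> float:
    # Risk = likelihood x weighted impact; business impact is weighted
    # more heavily than raw technical severity in this sketch.
    return v.exploit_likelihood * (0.4 * v.severity + 0.6 * v.business_impact)

vulns = [
    Vuln("TLS 1.0 / POODLE", severity=4.3, business_impact=2.0,
         exploit_likelihood=0.2),
    Vuln("Unpatched RCE on payment server", severity=9.8, business_impact=9.0,
         exploit_likelihood=0.7),
]

# Highest-risk items first: the remote-code-execution flaw outranks POODLE.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(f"{v.name}: {risk_score(v):.2f}")
```

The exact weights matter less than the discipline: every vulnerability gets a business-impact estimate, and fixes are ordered by the combined score rather than by headline severity alone.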
Ransomware is obviously analogous to kidnapping, and dealing with the perpetrators can feel much like negotiating with a jumper standing on the edge of a high-rise roof. The Institute for Critical Infrastructure Technology (ICIT) recently released a report that in part describes how to deal with criminals when they are holding your data hostage, including what to do once a breach has been found. ICIT says the proper response will depend on the organization's risk tolerance, the potential impact of the hostage data, the impact on business continuity, whether a redundant system is available, and regulatory requirements.
The single most common mistake users of public cloud make is failing to read their contracts and understand where their responsibilities truly lie. People are often unclear about when and how responsibility for a server created in the cloud passes from the provider to them. I’ve run into folks who mistakenly thought their cloud provider was patching servers through some back door for them. They weren’t, and the servers went unpatched for months. Organizations also often forget that the management layer the cloud provider gives them needs security of its own: the administrative users and rights used to configure and control cloud systems must be treated just as carefully as any other privileged users in their systems.
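One practical consequence of the "nobody is patching through a back door" point is that patch freshness becomes your job to track. A minimal sketch, assuming you can pull last-patch dates from your own inventory or configuration-management tooling (the hostnames and dates below are hypothetical):

```python
# Sketch: flag cloud servers whose last patch date is too old.
# The inventory dict is a hypothetical stand-in for data you would
# gather yourself; no cloud provider supplies it automatically.
from datetime import date

MAX_PATCH_AGE_DAYS = 30

inventory = {
    "web-01": date(2016, 1, 10),  # last known patch date
    "db-01":  date(2016, 4, 2),
}

def unpatched(inventory, today, max_age=MAX_PATCH_AGE_DAYS):
    """Return hosts whose last patch is older than max_age days."""
    return [host for host, patched in inventory.items()
            if (today - patched).days > max_age]

print(unpatched(inventory, today=date(2016, 4, 20)))  # web-01 is overdue
```

The same discipline applies to the management layer itself: administrative cloud accounts deserve the same review cadence as the servers they control.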
Variety speaks to how open source used to be confined to software – programming that could be improved or adjusted to fit different business needs – but has now expanded into hardware IP, such as specs, servers, and data center designs. Volume speaks to the amount of open source content available, which has grown astronomically in the past few years; much of that growth stems from the fact that open source IP is no longer created only by individuals – huge corporations create it, too. Open source must also be viewed in terms of velocity, or how quickly it develops everyday use cases. Duet says that open source has now fully permeated technology, and points to the rise of the Internet of Things – made possible by the ability to analyze disparate data sets at massive scale – as a triumph of the open source philosophy.
If moving to public cloud or hosted services seems intimidating, adding another factor into the mix alongside (or in place of) on-premises infrastructure, a paradigm shift is in order: one necessary to stay competitive and lean in a world shifting to accommodate more outsourced options and the agility found in the cloud. Complicating things further, those now making the move are presented with options that improve the quality of cloud offerings overall but create an initial dilemma, as leaders weigh service providers against products and debate which areas of their business to migrate, when, and where. Meanwhile, more and more applications build upon one another within increasingly complex and intricately interdependent environments.
Some high-profile ISPs were not pleased after the FCC proposed rules (pdf) to give broadband consumers more privacy. To dispute the notion that ISPs are “somehow uniquely positioned in the Internet ecosystem,” AT&T wants you to read Georgia Institute of Technology professor Peter Swire’s paper titled “Online Privacy and ISPs: ISP Access to Consumer Data is Limited and Often Less than Access by Others.” Although Swire’s paper may be used to assist the FCC as it decides how to handle broadband privacy, Princeton professor Nick Feamster criticized it for technical inaccuracies before revising his statement to say that the paper skips over “important additional facts that should be considered by policymakers.”
The project management field spans 10 interconnected knowledge areas and incorporates 47 processes organized into five process groups (initiating, planning, executing, monitoring and controlling, and closing) -- making it a complex field to understand and navigate. Because project management is applied in organizations from small businesses to large multinationals, and in virtually every industry in some form, anyone from the CEO of a large international organization to an employee of a small business can benefit from understanding these PM terms. Since project management involves careful planning, execution and management of people, processes, timelines, deliverables, technologies and other resources in a way that aligns with overall strategic objectives, successfully executing a project can be almost impossible without an understanding of these PM terms.
The unwillingness of manufacturers to address security issues, he said, is illustrated by Trane, which was alerted to serious security flaws in its ComfortLink II thermostat in April 2014, including hard-coded SSH passwords, yet fixed this particular issue only a year later and took a further eight months to address the remaining vulnerabilities. “When [Trane] eventually did fix the vulnerabilities it did not alert customers, so this is a classic example of the problems people are facing, where they have these devices, they don’t know they are insecure, and they are not made aware when [there] is a software update to make them secure,” said Alexander. He also pointed out that consumers should be aware there is money to be made from data: electronics manufacturers have found a way to make consumers pay to put devices in their homes that feed the device makers data they can monetize.
If your customers in Berlin are experiencing performance problems with your service, it could be an issue at a local ISP or a CDN you are using. It could be a more general problem in Berlin. It could be a lot of things. You can then use Dyn's information on where the problems actually are to direct your traffic through alternatives until the trouble passes. Such problems obviously occur all the time: some from mistakes, some from equipment failure, some from malicious action like a DDoS attack. In all cases, the first action to take is to route around the problem. Very often, existing services and practices use geolocation and hop counts as a proxy for latency in order to determine the best route. But what if you actually had the latency numbers?
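The closing question can be made concrete with a small sketch: instead of guessing from geolocation or hop counts, measure the latency to each candidate endpoint and route to the fastest. Everything here is an assumption for illustration: the edge hostnames are hypothetical, and a real probe might use an HTTP health check rather than raw TCP connect time.

```python
# Sketch: choose an endpoint by measured latency instead of a
# geolocation/hop-count proxy. Hostnames below are hypothetical.
import socket
import time

def tcp_latency(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return TCP connect time in seconds, or infinity on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def best_endpoint(hosts, probe=tcp_latency):
    """Pick the host with the lowest measured latency."""
    return min(hosts, key=probe)

# Hypothetical CDN edges that could serve Berlin users:
edges = ["edge-fra.example.com", "edge-ams.example.com", "edge-lon.example.com"]
# best = best_endpoint(edges)  # would probe each edge and pick the fastest
```

In practice you would probe periodically and smooth the measurements (a single connect time is noisy), but the principle stands: real latency numbers replace geography as the routing signal.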
Quote for the day:
“Presence emerges when we feel personally powerful, which allows us to be acutely attuned...” -- Amy Cuddy