An analytics firm called ExtraHop examined the networks of its customers and found that their security and analytic software was quietly uploading information to servers outside the customers' networks. The company issued a report and warning last week. ExtraHop deliberately chose not to name names in its four examples of enterprise security tools that were sending out data without warning the customer or user. A spokesperson for the company told me via email, “ExtraHop wants the focus of the report to be the trend, which we have observed on multiple occasions and find alarming. Focusing on a specific group would detract from the broader point that this important issue requires more attention from enterprises.” ... In every case, ExtraHop provided evidence that the software was transmitting data offsite. In one case, a company noticed that approximately every 30 minutes, a network-connected device was sending UDP traffic out to a known bad IP address. The device in question was a Chinese-made security camera that was phoning home to a malicious IP address with ties to China.
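The 30-minute cadence is the tell-tale part of that finding: traffic to a blocklisted address on a near-fixed period is classic beaconing. A minimal sketch of that kind of check, with an invented flow-record format and a TEST-NET address standing in for the real destination (neither is ExtraHop's actual data model):

```python
from datetime import datetime, timedelta

# Hypothetical blocklist; 203.0.113.66 is a TEST-NET address used as a stand-in.
KNOWN_BAD_IPS = {"203.0.113.66"}

def beacon_intervals(timestamps):
    """Return the gaps (in seconds) between consecutive outbound flows."""
    ts = sorted(timestamps)
    return [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]

def looks_like_beacon(flow, period=1800, tolerance=60):
    """Flag a flow whose destination is blocklisted and whose outbound
    traffic recurs on a near-fixed period (here ~30 minutes)."""
    if flow["dst_ip"] not in KNOWN_BAD_IPS:
        return False
    gaps = beacon_intervals(flow["timestamps"])
    return bool(gaps) and all(abs(g - period) <= tolerance for g in gaps)

# Six outbound UDP flows, exactly 30 minutes apart, like the camera's traffic.
camera_flow = {
    "dst_ip": "203.0.113.66",
    "proto": "UDP",
    "timestamps": [datetime(2019, 8, 1, 9, 0) + timedelta(minutes=30 * i)
                   for i in range(6)],
}
print(looks_like_beacon(camera_flow))  # True: periodic traffic to a bad IP
```

Real network-detection products score periodicity statistically rather than with a fixed tolerance, but the underlying signal is the same.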
Continuous Testing is commonly lumped together with “shift left.” However, to deliver the right feedback to the right stakeholder at the right time, Continuous Testing needs to occur throughout the software delivery lifecycle – and even beyond that to production (e.g., monitoring information from production and feeding that back from the quality perspective). Just as the name indicates, Continuous Testing involves testing continuously. Simply starting and finishing testing earlier is not, by definition, Continuous Testing. How do you reach this level of continuous quality and Continuous Testing? The path forward is different for every team. Some might focus on automating traditionally manual processes while others might wrestle with orchestrating and correlating all the various test automation tools they’ve come to master. The challenge is getting to the point where you can report on whether an overarching application or project involving all these different teams – with different cadences, architectures, tool stacks, structures, and challenges – has an acceptable level of risk.
Setting DMARC policies to “reject” is the only guaranteed way of preventing email spoofing, which has long been blamed for fraud victims being duped by social engineering techniques. Opting to set the policy to “none” will merely alert the domain owner to potentially suspicious activity, but will not warn the recipient of fraudulent emails. Setting the policy to “quarantine” also notifies the domain owner and potentially offers some protection by sending the email to “spam” or “junk” folders, but the result depends on the delivery policy of the email provider and therefore does not provide guaranteed protection. This means that in the run-up to the announcement of A-level results on 15 August 2019, and immediately thereafter, the majority of those communicating with universities about course placements could be targeted by fraudsters with emails that appear to come from universities.
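A DMARC policy is published as a DNS TXT record of `tag=value` pairs, with the `p` tag carrying the policy level described above. A minimal sketch contrasting the three levels, using a hand-rolled parser and an illustrative record (the domain and report address are invented, not any real university's policy):

```python
def parse_dmarc(txt_record):
    """Parse a DMARC TXT record ("v=DMARC1; p=...; ...") into a tag dict."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# How a receiving mail server treats spoofed mail under each policy level.
EFFECT = {
    "none": "report only: domain owner is alerted, recipient is not protected",
    "quarantine": "deliver to spam/junk, subject to the receiver's own policy",
    "reject": "refuse delivery outright: the only guaranteed protection",
}

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.ac.uk"
policy = parse_dmarc(record)["p"]
print(policy, "->", EFFECT[policy])  # reject -> refuse delivery outright...
```

In practice the record is fetched from `_dmarc.<domain>` via a DNS lookup; the parsing and the policy semantics are as shown.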
The GAO undertook the study to not only determine to what extent the agencies had instituted key elements of a risk management program, but also to find out what challenges these agencies were facing in putting those elements in place. The study also reviewed steps the Office of Management and Budget and the Department of Homeland Security have taken to address their risk management responsibilities. Investigators found that while all but one agency - the General Services Administration - had designated a cybersecurity risk executive, 16 agencies had not fully established a cybersecurity risk management strategy that outlined boundaries for risk-based decisions. "The risks to IT systems supporting the federal government and the nation's critical infrastructure are increasing as security threats continue to evolve and become more sophisticated," according to the GAO report. "These risks include insider threats from witting or unwitting employees, escalating and emerging threats from around the globe, steady advances in the sophistication of attack technology, and the emergence of new and more destructive attacks. ..."
To prepare data you need to hold an analytic purpose in mind, however tentatively formed. Otherwise you’re not even experimenting or exploring, you’re just playing. Equally, to analyze data is to investigate not only its aggregations and patterns, but its structure too. And the more you learn about the structure of data, the more you might tweak it, reshape it, indeed wrangle it to reveal more patterns. Whether you are comparing start and end dates of a process to analyze an elapsed time, or arranging demographic data sets into appropriate age-groups to find useful correlations, or simply concatenating name fields to create a more useful identifier, the distinction between analyzing and wrangling is a weak one; especially so with self-service technologies, because rather than being a cumbersome exchange of requirements between business and IT, this new, empowered analysis typically happens on the desktop of one savvy, and satisfied, business user.
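The three wrangling steps named above can be sketched in a few lines of plain Python; the record fields and values are invented for illustration:

```python
from datetime import datetime

# An illustrative record with date, demographic, and name fields.
record = {
    "first_name": "Ada", "last_name": "Lovelace",
    "age": 29,
    "start": datetime(2019, 8, 1, 9, 0), "end": datetime(2019, 8, 1, 17, 30),
}

# 1. Compare start and end dates of a process to derive an elapsed time.
elapsed_hours = (record["end"] - record["start"]).total_seconds() / 3600

# 2. Arrange demographic data into age groups (decade-wide bins here).
def age_group(age):
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

# 3. Concatenate name fields into a more useful identifier.
identifier = f'{record["last_name"]}, {record["first_name"]}'

print(elapsed_hours, age_group(record["age"]), identifier)
# 8.5 20-29 Lovelace, Ada
```

Each step is simultaneously preparation and analysis, which is exactly the blurred boundary the paragraph describes.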
After a lack of resources, respondents cited a lack of experience as their top challenge (37%), followed by a lack of skills (31%). Ultimately, security professionals feel their budgets are not giving them what they need, the survey report said, with only 11% saying security budgets were rising in line with, or ahead of, the cyber security threat level, while the majority (52%) said budgets were rising, but not fast enough. Asked about the source of cyber security threats, 75% said people are the biggest challenge they face in cyber security, followed by technology (13%) and processes (12%). This may explain the need for more resources even as budgets increase, the report said, noting that the people issue is a far more complex one to deal with. Yet at the same time, the report said there are signs of improvement, with more than 60% of IT professionals saying that the profession is getting better – or much better – at dealing with security incidents when they occur, and only 7% saying the profession is getting worse.
The vulnerabilities affect all devices running VxWorks version 6.5 and later with the exception of VxWorks 7, issued July 19, which patches the flaws. That means the attack windows may have been open for more than 13 years. Armis Labs said that affected devices included SCADA controllers, patient monitors, MRI machines, VOIP phones and even network firewalls, specifying that users in the medical and industrial fields should be particularly quick about patching the software. Thanks to remote-code-execution vulnerabilities, unpatched devices can be compromised by a maliciously crafted IP packet that doesn’t need device-specific tailoring, and every vulnerable device on a given network can be targeted more or less simultaneously. The Armis researchers said that, because the most severe of the issues targets “esoteric parts of the TCP/IP stack that are almost never used by legitimate applications,” specific rules for the open source Snort security framework can be imposed to detect exploits. VxWorks, which has been in use since the 1980s, is a popular real-time OS, used in industrial, medical and many other applications that require extremely low latency and response time.
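One example of such an “esoteric” mechanism is the TCP urgent pointer, which several of these flaws are reported to abuse and which legitimate traffic almost never uses. As an illustration only (this is not Armis's actual Snort rule), a detector along those lines just has to parse the raw TCP header and alert on segments with the URG flag set:

```python
import struct

def urg_flag_set(tcp_header):
    """Return True if the URG control bit is set in a raw TCP header."""
    # Byte 13 holds the flag bits: CWR ECE URG ACK PSH RST SYN FIN.
    return bool(tcp_header[13] & 0x20)  # 0x20 is the URG bit

def build_header(flags, urgent_ptr=0):
    """Pack a minimal 20-byte TCP header for testing the check above."""
    return struct.pack("!HHIIBBHHH",
                       1234, 80,        # source / destination ports
                       0, 0,            # sequence / acknowledgement numbers
                       5 << 4,          # data offset (5 words) + reserved bits
                       flags, 8192,     # flags byte, window size
                       0, urgent_ptr)   # checksum, urgent pointer

print(urg_flag_set(build_header(0x20 | 0x10, urgent_ptr=1)))  # URG|ACK: True
print(urg_flag_set(build_header(0x10)))                       # plain ACK: False
```

Because legitimate applications rarely set URG, such a rule can flag exploit attempts with few false positives; production rules would of course match more of the malformed-packet details than this sketch does.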
Scrum is a rhythmic planning method. It stands in contrast to the traditional batch approach, which holds that building a computer system requires completing its analysis before moving on to development and then testing. That cumbersome, costly approach has left many projects deadlocked. Scrum breaks this model by cutting the construction of the product into small batches called sprints. During a sprint, the team analyzes, develops and tests whatever the client considers most valuable. A sprint lasts between one and four weeks. At the end of the sprint, during the sprint review, an increment of the product is presented to the customer, who can thus quickly provide feedback. The team corrects and adapts the product sprint after sprint, according to that feedback, and the product gradually takes shape. Beyond this ongoing adaptation to customer needs, Scrum provides a formal structure for improving team practices by introducing the notion of the retrospective: a dedicated moment at the end of each sprint during which the team looks back on its practices in order to improve them in the next sprint.
“This shift to product-centric delivery models entails co-locating CDOs into business units and strives for constant improvement rather than siloed project metrics,” continued Faria. Bill Swanton, distinguished research VP at Gartner, added that this shift towards product-centric application models didn’t come about randomly. It goes hand-in-hand with the adoption of agile development methodologies and DevOps. “Business leaders are generally unhappy with the speed with which they get application improvements and how they work. Given that no IT organisation gets anywhere near enough funding to do everything everyone wants when they want it, product-centric approaches allow faster delivery of the most important capabilities needed,” he said. According to Gartner, in a product line management model, product lines are funded based on the business capabilities they support. Common or shared capabilities — such as infrastructure, technology, D&A — are funded based on the anticipated and aggregated needs of the product lines they support.
Quote for the day:
"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford