The researchers used an internet scanning engine called Censys to identify public FTP servers that allow anonymous access with write privileges. They found 7,263 such servers and determined that 5,137 of them had been contaminated with Mal/Miner-C. Another interesting discovery was that many of those FTP servers were running on Seagate Central NAS devices. While this malware threat does not specifically target such devices, it turns out that Seagate Central's configuration makes it easier for users to expose insecure FTP servers to the Internet. By default, the Seagate Central NAS system provides a public folder for sharing data, the Sophos researchers said in a paper published Friday.
When you split your systems landscape into small components, testing is required at many different levels. First, unit tests will likely cover each component’s internals. Next, the service interface needs to be tested to verify that it produces the right output or document, such as JSON objects or PDFs. Then, applications or other components consuming a component’s services need to test that those services still provide the expected output. Because the services in a microservices architecture are loosely coupled, usually via REST and JSON over HTTP, a change to a service interface is not always picked up immediately by the teams involved. By running automated tests at every change of a component, the pipeline signals breaking changes as fast as possible.
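One common way to catch the interface drift described above is a consumer-side contract check: the consuming team pins down the JSON fields it depends on and fails fast when the provider's output shape changes. A minimal sketch, assuming a hypothetical order service whose response fields (`orderId`, `status`, `total`) are illustrative, not from any real API:

```python
# Minimal consumer-driven contract check. The consumer declares the fields
# and types it relies on; any drift in the provider's JSON shape is reported.
import json

# Hypothetical contract for an order service's response.
EXPECTED_CONTRACT = {
    "orderId": str,
    "status": str,
    "total": float,
}

def check_contract(payload, contract=EXPECTED_CONTRACT):
    """Return a list of violations: missing fields or wrong types."""
    doc = json.loads(payload)
    violations = []
    for field, expected_type in contract.items():
        if field not in doc:
            violations.append(f"missing field: {field}")
        elif not isinstance(doc[field], expected_type):
            violations.append(f"wrong type for {field}: {type(doc[field]).__name__}")
    return violations

# A provider that silently renames 'total' to 'amount' is caught immediately:
ok = check_contract('{"orderId": "A-1", "status": "shipped", "total": 9.99}')
bad = check_contract('{"orderId": "A-1", "status": "shipped", "amount": 9.99}')
```

Run at every change of the provider component, a check like this turns a silent interface break into a red build the same day.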
Some people tell me we should focus on flexibility and forget about control. Others say we have to control information and forget about flexibility. I say you can and should have both. The only real way to maintain an acceptable level of control is to also offer your workforce flexibility. This is more important than ever in this age of working beyond corporate walls and firewalls. You may have the best information management system and internal governance on the planet, but if you don’t accommodate distributed and mobile staff, you will lose control. People will find a way around your firewall if you don’t provide that flexibility for them. Basement email servers, unauthorized cloud drives, and personal smartphones, oh my! To safely work beyond corporate firewalls, follow the “cloud first, Web first, mobile first” principles of solution design for flexibility and control.
Thus, an all-in-one Unified Communications-as-a-Service (UCaaS) platform is pretty appealing. An extension of Unified Communications (UC), UCaaS wraps a host of business communication and collaboration applications and services into a single experience delivered via the cloud. That encompasses everything from enterprise social messaging and chat apps to online videoconferencing and meeting software to business voice-over-IP (VoIP) services. UCaaS platforms come in all shapes and sizes. There are different types of cloud and on-premises distributions, complicated security protocols, and vendors on all sides of the market—from VoIP and telecom providers to major cloud and enterprise players—pushing their own solutions. Here are a few key considerations to help your CISO choose the UCaaS platform that best suits your organization.
Think of this as a kind of security microcosm: as we go from the cellular to the molecular level, there is a need to drive security deep into the data center, so that it becomes deeply embedded in a system that analyzes the activity of every packet and application traversing the network. The centralized nature of the SDN paradigm makes this a better security model in general. Rather than managing security policies on individual devices or proprietary hardware systems, a centralized SDN controller can analyze and supervise security across an entire data center. Pursuing a zero-trust, stateful security model, in which all applications are monitored in real time, can provide enhanced security for east-west traffic within the data center, implemented closest to VMs and containers.
For security researchers, the fact that malware authors include abusive messages in their code comes as an acknowledgement of their work. Thus, researchers will continue to report on new and updated malware, regardless of whether developers are dissatisfied with how their malware is portrayed or are unhappy that it made headlines. “We believe it’s crucial to inform Internet users, whether home users or people involved in companies, of emerging cyber threats. It’s not only about building awareness, but it’s also an essential tool to help people learn how to get protected,” Andra Zaharia said. “We believe that spreading correct and relevant information about new and improved malware is an important part of helping people become more aware of the issue and its potential impact.”
The key to making the most of any operational or service-based enterprise IoT application is to act on data at its peak point of value: the moment it is created. If your vehicle operating system learns about an imminent traffic hazard, you’ll need it to notify the driver immediately, not minutes later. That means enterprises will need to increase the velocity of data processing. The traditional process, where data is collected in one place and then processed and analyzed in multiple separate phases, slows things down and is not a sustainable model for the volume and velocity of IoT-produced data. Instead, data refinement, processing, and analysis must move closer to the connected device. Sensor manufacturers are already building compute capabilities into the devices themselves, breaking analytics out of the data center and putting it into the real world.
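The shift described above, from batch analysis in a central store to reacting at the moment a reading is created, can be sketched as a small on-device filter. This is a minimal illustration only; the threshold, window size, and alert logic are assumptions, not from any real sensor platform:

```python
# Sketch of on-device processing: instead of shipping every reading to a
# central store for later batch analysis, a small filter running near the
# sensor reacts the instant a hazardous reading appears.
from collections import deque

class EdgeFilter:
    def __init__(self, threshold, window=5):
        self.threshold = threshold
        # Rolling context kept on-device; older readings fall off automatically.
        self.window = deque(maxlen=window)
        self.alerts = []

    def ingest(self, reading):
        """Process one reading at its point of creation."""
        self.window.append(reading)
        if reading > self.threshold:
            # Act immediately rather than waiting for a downstream batch job.
            self.alerts.append(reading)
            return "ALERT"
        return "ok"

# Hypothetical temperature readings; only the out-of-range one triggers action.
f = EdgeFilter(threshold=80.0)
statuses = [f.ingest(r) for r in (72.1, 75.4, 91.3, 78.0)]
```

Only the alert (and perhaps a summary of the rolling window) needs to travel upstream, which is what keeps the model sustainable at IoT volumes.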
When you work for a smaller organization, you don’t have the luxury of a 24/7 SOC. In my company, we compensate by building automation into the monitoring of our logs and cherry-picking events that will generate email notifications. Other events get our attention when we can carve out time to monitor the threat logs generated by our advanced firewalls and the security logs produced by a multitude of other devices: web and database servers, load balancers, proxies, file integrity monitoring software, etc. We collect the logs in a centralized server, and a few filters help identify logs that meet certain criteria. A couple of analysts and I take turns monitoring the filtered logs. We don’t get 24/7 coverage, but it’s pretty close.
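The filtering step described above, scanning centrally collected log lines and keeping only the events worth an email, can be sketched in a few lines. The log format, field names, and notification criteria here are hypothetical examples, not the author's actual rules:

```python
# Minimal log filter: from a centralized collection of log lines, keep only
# the events that should generate a notification. Patterns are illustrative.
import re

NOTIFY_PATTERNS = [
    re.compile(r"severity=(critical|high)"),      # high-severity firewall events
    re.compile(r"auth_failure count=\d{2,}"),     # 10+ failed logins from one source
]

def filter_for_notification(log_lines):
    """Return the subset of lines matching any notify-worthy pattern."""
    return [line for line in log_lines
            if any(p.search(line) for p in NOTIFY_PATTERNS)]

# Hypothetical sample lines from firewalls and web servers.
logs = [
    "2016-09-12 fw01 severity=low action=allow",
    "2016-09-12 web01 auth_failure count=37 src=203.0.113.9",
    "2016-09-12 fw01 severity=critical action=drop",
]
flagged = filter_for_notification(logs)
```

In practice the flagged subset would feed an email or ticketing hook; everything else waits for whoever is on rotation to carve out review time.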
In a peculiar and rare phenomenon, the loud noise created by inert gas being released during a planned test of fire extinguisher systems not only forced the bank's main data center in Bucharest, Romania, offline, but also managed to destroy dozens of hard drives in the process, causing serious and irreversible damage. Inergen is a fire suppression system that relies on gas rather than traditional foam or liquid. Suitable for enclosed spaces, Inergen, stored in cylinders as compressed gas, is dispersed through hoses and nozzles evenly across a small space to wipe out fires. Usually, this kind of fire protection would be best suited for data centers -- especially as foam and liquid would damage valuable and delicate equipment -- but in this case, something went horribly wrong.
Automation gives you a lot of confidence that things are working. It empowers your QA testers to go off and do the most valuable tasks. We've been using TestPlant [eggPlant Functional] to automate routine tasks, for example. This frees up time for exploratory testing or destructive testing, where testers sit down and really pull the product apart, and try to do weird and wonderful things with it. A lot of QA teams spend way too much time doing regression tests, and that's where automation really does help. Without automation, the scope of your testing is incomplete, because you're only going to do regression testing for what you can remember, which is probably three or four sprints back. And then, things drop off.
Quote for the day:
“Trust because you are willing to accept the risk, not because it’s safe or certain.” -- Anonymous