Industrial control systems (ICS) are everywhere. They play a critical role in nearly every industry around the world, including electric power, water and wastewater, oil and natural gas, and transportation, and they drive the smart technology of today and tomorrow. This widespread use and importance, especially in critical infrastructure, also makes ICS a primary target for bad actors, and increasing internet connectivity only magnifies the potential for issues. According to Industrial Control Systems Vulnerabilities Statistics from Kaspersky, only two ICS vulnerabilities were detailed in 1997 (the first year this information was recorded); such vulnerabilities are now far more commonplace, with 189 reported in 2015.
Edge computing is set to grow dramatically in 2018 and beyond. As IoT devices continue to come online by the tens of billions, edge data centers will grow in prevalence too, collecting, processing and managing data when and where it is created. IT departments should expect tremendous growth in demand for reliable computing at the edge over the next few years. As edge data centers play an increasingly important role in both the business and IT landscapes, the standards for their predictability and uptime will grow to match those that enterprises and consumers have come to expect from traditional, large data centers. So, how can you build an edge data center that is reliable and generates value for your company? Here are five important ways IT departments should be building their edge data centers to help ensure end-to-end reliability and resiliency.
A good indication that a technology has reached the plateau of productivity in Gartner's hype cycle is when someone asks "Is MongoDB dead?" on that bastion of, um, sane discussion, Quora. A second good indication is when there are productivity tools and at least a nascent third-party market around the technology. A third indication is when a third party creates an IDE for it: the growing third-party market is a key sign that MongoDB has moved from mere maturity to being one of the dominant players in this market. Enter Studio 3T, a small European firm with its own sea mammal mascot and a reputation for being "the MongoDB GUI." Its eponymous product is the successor to its MongoChef tool. According to Studio 3T marketing chief Richard Collins, the company's direction is to be a full-fledged IDE for MongoDB. Studio 3T lets teams collaborate on MongoDB charts across roles and skill levels, from the developer to the analyst to the DBA.
Botnets and automated attacks include distributed denial of service (DDoS) attacks, ransomware attacks, and computational propaganda campaigns, the report noted. “Traditional DDoS mitigation techniques, such as network providers building in excess capacity to absorb the effects of botnets, are designed to protect against botnets of an anticipated size,” report authors wrote. “With new botnets that capitalize on the sheer number of ‘Internet of Things’ (IoT) devices, DDoS attacks have grown in size to more than one terabit per second, outstripping expectations. As a result, recovery time from these types of attacks may be too slow, particularly when mission-critical services are involved.” Stakeholders in all industries must be willing to coordinate and collaborate to combat these threats. Problems must be proactively addressed “to enhance the resilience of the future Internet and communications ecosystem.”
For smaller companies without a security team, this can help with patching and general security maintenance. However, it’s important to check the security measures they offer, especially in terms of security monitoring, distributed denial of service (DDoS) attack mitigation, their responsiveness to security incidents and their processes for dealing with incidents. If you need to host the website yourself, perhaps because you are delivering a web-based service that requires close coupling to your own systems, then you will need to make sure your own systems are protected and separated from the web server itself, to prevent the web server being used as a Trojan horse to attack your operational systems. Whichever approach you take, supply chain security also needs to be considered. There have been instances over the past few years where an attacker has compromised a website tool developer and modified the code of the tools used to build websites, so that every site built with those tools includes backdoors open to the attacker, or pre-placed malware.
Despite an influx of best-in-breed security technologies, organizations around the world are seeing a continued rise in cyber attacks. There are big implications. Financial consequences include immediate costs of investigating the breach and extend longer-term to include lawsuits and regulatory fines. Loss of customer trust can translate into declines in business. Perhaps most damaging is the impact of shutting down entire systems, which can grind operations to a halt. This is especially dangerous when the target is a critical healthcare, government, or utility provider. From the high-profile Equifax breach to payment compromises at hotel chains and retailers, security teams are increasingly under pressure to not only determine why this is happening but what can be done to fix or prevent these problems. For many companies, getting "back to basics" could be one of the most effective weapons in the war on cyberattacks.
ETA collects metadata about traffic flows using a modified version of NetFlow and searches for characteristics that indicate the traffic could be malicious. It inspects the initial data packet, which is transmitted in the clear, even in encrypted traffic. It also records the size, shape and sequence of packets and how long they take to traverse the network, and it monitors for other suspicious characteristics, such as a self-signed certificate or command-and-control identifiers. All of this data can be collected on traffic even if it's encrypted. “ETA uses network visibility and multi-layer machine learning to look for observable differences between benign and malware traffic,” Cisco explains in a blog post announcing ETA. If characteristics of malicious traffic are identified in any packets, they are flagged for further analysis through deep packet inspection and potential blocking by an existing security appliance like a firewall.
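To make the idea concrete, here is a minimal sketch of scoring a traffic flow from metadata alone, without decrypting any payload. This is an illustration of the general technique, not Cisco's actual ETA implementation; the field names, heuristics, and thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Hypothetical flow metadata, in the spirit of NetFlow-style records."""
    packet_sizes: list           # bytes per packet, in order observed
    inter_arrival_ms: list       # gaps between packets in milliseconds
    self_signed_cert: bool = False
    known_c2_indicator: bool = False

def suspicion_score(flow: FlowRecord) -> float:
    """Combine simple metadata signals into a 0..1 score (toy heuristic)."""
    score = 0.0
    if flow.self_signed_cert:
        score += 0.3
    if flow.known_c2_indicator:
        score += 0.5
    # Many uniformly sized packets can resemble command-and-control
    # beaconing; a real system would use a trained model, not this rule.
    if flow.packet_sizes and max(flow.packet_sizes) - min(flow.packet_sizes) < 16:
        score += 0.2
    return min(score, 1.0)

beacon = FlowRecord(packet_sizes=[120, 118, 121, 119],
                    inter_arrival_ms=[1000, 1001, 999],
                    self_signed_cert=True)
print(suspicion_score(beacon))  # flows above a threshold get deeper inspection
```

In a production system, flows scoring above a threshold would be handed to deep packet inspection or blocked by a firewall, as the article describes.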
Specifically, the research revealed the top five security weaknesses were: code tampering (94% of apps), insecure authorization (59% of apps), reverse engineering (53% of apps), insecure data storage (47% of apps) and insecure communication (38% of apps). “The flaws we found were shocking, and are evidence that mobile applications are being developed and used without any thought to security,” said Bolshev. “It’s important to note that attackers don’t need to have physical access to the smartphone to leverage the vulnerabilities, and they don’t need to directly target ICS control applications either. If a smartphone user downloads a malicious application of any type onto the device, that application can then attack the vulnerable application used for ICS software and hardware. The result is attackers using mobile apps to attack other apps.”
Tags are just text strings attached to devices and infrastructure as part of an ITSM asset management strategy. A tag is a key-value pair treated as metadata. It assigns common attributes to assets so they can be logically defined and grouped. A simple key-value pair example that would prove useful for ITSM asset management is the following: stack = production. In this example, the IT infrastructure team applies the stack tag to all production servers when they are built. When the administrator needs to perform system management, such as an update, they put the tag production into an update query to restrict any operations to those servers tagged with production. Other key-value pair examples include owner = QAteam and location = LosAngeles. Using this set of tags in an update rollout, the global IT manager can apply changes only to servers owned by the quality assurance (QA) team and located in the Los Angeles data center.
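The tag-based filtering described above can be sketched in a few lines. The asset records and the match helper below are hypothetical illustrations; real ITSM and cloud platforms expose equivalent tag-query mechanisms through their own APIs.

```python
# Hypothetical inventory: each asset carries a dict of key-value tags.
servers = [
    {"name": "web-01", "tags": {"stack": "production", "owner": "QAteam",  "location": "LosAngeles"}},
    {"name": "web-02", "tags": {"stack": "staging",    "owner": "QAteam",  "location": "LosAngeles"}},
    {"name": "db-01",  "tags": {"stack": "production", "owner": "DBAteam", "location": "NewYork"}},
]

def match(asset, **criteria):
    """True if every key=value criterion matches the asset's tags."""
    return all(asset["tags"].get(k) == v for k, v in criteria.items())

# Restrict an update rollout to production servers owned by the QA team
# in the Los Angeles data center, as in the article's example.
targets = [s["name"] for s in servers
           if match(s, stack="production", owner="QAteam", location="LosAngeles")]
print(targets)  # ['web-01']
```

Because each criterion is just another key-value comparison, adding a new dimension (say, environment or cost center) requires only a new tag, not a change to the query logic.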
In the near future, human workers and machines will work together seamlessly, each complementing the other’s efforts in a single loop of productivity. And, in turn, HR organizations will begin developing new strategies and tools for recruiting, managing, and training a hybrid human-machine workforce. Notwithstanding sky-is-falling predictions, robotics, cognitive, and artificial intelligence (AI) will probably not displace most human workers. Yes, these tools offer opportunities to automate some repetitive low-level tasks. Perhaps more importantly, intelligent automation solutions may be able to augment human performance by automating certain parts of a task, thus freeing individuals to focus on more “human” aspects that require empathic problem-solving abilities, social skills, and emotional intelligence.
Quote for the day:
"The struggle you're in today is developing the strength you need for tomorrow." -- Robert Tew