Social engineering has proven to be a very successful way for a criminal to "get inside" your organization. Once a social engineer has a trusted employee's password, he can simply log in and snoop around for sensitive data. With an access card or door code that gets him physically inside a facility, the criminal can access data, steal assets or even harm people. In the article "Anatomy of a Hack," a penetration tester walks through how he used current events, public information available on social networking sites, and a $4 Cisco shirt he purchased at a thrift store to prepare for his illegal entry. The shirt helped him convince building reception and other employees that he was a Cisco employee on a technical support visit. Once inside, he was able to give his other team members illegal entry as well. He also managed to drop several malware-laden USB drives and hack into the company's network, all within sight of other employees. You don't need to go thrift store shopping to pull off a social engineering attack, though.
The experience of having been through an attack before provides benefits not only for setting up systems to prevent damaging attacks, but also for the processes required if an organisation does fall victim to hackers. Rather than viewing staff who've worked at organisations that have suffered a cyberattack as having failed to do their job, other organisations should be actively seeking out these people to learn from them – even to the extent of hiring them for their own security teams. "For senior members of security staff who've worked in organisations which have had a major, publicised breach, that can be seen as a negative – somehow individuals can be tarnished with that. That's probably the exact opposite of how the industry should be thinking," Darren Thomson, CTO EMEA at Symantec, told ZDNet. "Someone who has lived through one of these incidents and been through the whole process, recovering from the bad experience then implementing additional security and privacy measures: that knowledge and experience is valuable and it's good to have someone with it," he added.
With AWS OpsWorks, developers can use Puppet or Chef to manage declarative configurations within EC2 instances. Like CloudFormation, OpsWorks can deploy AWS resources. However, OpsWorks also automates the initial deployment of applications, as well as ongoing changes to the operating system and application infrastructure. Both Puppet and Chef can also control the deployment of AWS infrastructure. Use OpsWorks in place of CloudFormation if you need to deploy an application that requires updates to its EC2 instances. If your application uses many AWS resources and services, including EC2, use a combination of CloudFormation and OpsWorks: IT teams can integrate the two so that newly deployed EC2 instances are configured with Chef or Puppet rather than simple shell scripts.
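To make the CloudFormation/OpsWorks combination concrete, here is a minimal sketch of a CloudFormation template, built as a Python dictionary, that declares an OpsWorks (Chef) stack and a layer attached to it. The stack and layer names and the IAM ARNs are invented placeholders; a real template would reference existing IAM roles and add `AWS::OpsWorks::Instance` and `AWS::OpsWorks::App` resources.

```python
import json

# Placeholder ARNs for illustration only; substitute real IAM resources.
SERVICE_ROLE = "arn:aws:iam::123456789012:role/aws-opsworks-service-role"
INSTANCE_PROFILE = "arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role"

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "CloudFormation deploying an OpsWorks (Chef) stack",
    "Resources": {
        "AppStack": {
            "Type": "AWS::OpsWorks::Stack",
            "Properties": {
                "Name": "app-stack",
                "ServiceRoleArn": SERVICE_ROLE,
                "DefaultInstanceProfileArn": INSTANCE_PROFILE,
                # Chef handles ongoing OS and application configuration
                # on the instances, rather than shell scripts.
                "ConfigurationManager": {"Name": "Chef", "Version": "12"},
            },
        },
        "AppLayer": {
            "Type": "AWS::OpsWorks::Layer",
            "Properties": {
                "StackId": {"Ref": "AppStack"},  # ties the layer to the stack
                "Name": "app-server",
                "Shortname": "app",
                "Type": "custom",
                "EnableAutoHealing": True,
                "AutoAssignElasticIps": False,
                "AutoAssignPublicIps": True,
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

The `Ref` from the layer to the stack is what lets CloudFormation own the deployment while OpsWorks (via Chef) owns the ongoing configuration of the instances inside it.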
A large enterprise might have 2,000 applications. Some of those applications are cloud native, and many may be actively worked on right now. Those applications, teams may choose to instrument manually so that they are not only functional but also observable. When they do that, they use open APIs, and that data can go to some other tool. Then they've got applications they don't have time to instrument but want to see in production -- they drop our agent in. They go to New Relic for some of their needs and to other tooling for others. Now they can have it all in one place. “That open telemetry is a big change for us. For people familiar with New Relic it’s a new way to look at us. The second part of that is with all of this telemetry data coming into one place, we’ve believed for some time that dashboards are not enough. If you look at why people love our APM [application performance management] product, for example, it’s more than a dashboard. It’s an interactive application that understands the telemetry data we collect and presents it to our customers in a useful way.”
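The "manually instrument via open APIs" idea above can be sketched in a few lines. This is a deliberately toy illustration, not the real OpenTelemetry API: a context manager records named timing spans into a pluggable sink, which stands in for whatever backend a team exports telemetry to.

```python
import time
from contextlib import contextmanager

# Toy telemetry sink: in a real open-telemetry setup this would be an
# exporter shipping spans to whichever backend the team has chosen.
SPANS = []

@contextmanager
def span(name):
    """Record a named timing span, loosely mimicking tracer APIs."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append({"name": name, "duration_s": time.perf_counter() - start})

def handle_request():
    # Manual instrumentation: the developer decides what is observable.
    with span("handle_request"):
        with span("db_query"):
            time.sleep(0.01)  # stand-in for real work

handle_request()
print([s["name"] for s in SPANS])  # inner span finishes (and is recorded) first
```

Because the sink is just a list here, swapping it for a different exporter would not change the instrumented application code, which is the portability argument behind open telemetry APIs.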
Putting analytics, servers and storage together at the edge to process data from the cameras and IoT sensors on the equipment eliminates the need “to send command and control to the cloud or a centralized data center,” which can take 40 milliseconds to get from one spot to another, Pugh says. “That’s too long to interpret the data and then do something about it without impacting production.” That type of decision-making needs to happen in real time, he says. Edge computing can be taxing on an IT department, though, with resources distributed across sites. In SugarCreek’s case, six manufacturing plants span the Midwest U.S. SugarCreek plans to move from its internally managed Lenovo edge-computing infrastructure to the recently launched VMware Cloud on Dell EMC managed service. SugarCreek beta-tested the service for Dell EMC and VMware when it was code-named Project Dimension. SugarCreek already uses edge computing for local access to file and print services and Microsoft Active Directory; to store video from indoor and outdoor surveillance cameras; and to aggregate data from temperature and humidity sensors to assess how well a machine is running.
"Organizations worldwide are realizing the need to invest in employee training and deploy different security awareness training solutions in the hope of mitigating the risk of data breaches," Gian said. "The problem is that many organizations settle for dated phishing simulation solutions that train employees randomly and require manual effort to operate. The outcome is disappointing: employee behavior doesn't change, and information security teams remain powerless and frustrated in the face of successful phishing attacks. Effective training should not become an IT and financial burden; it should be done autonomously, via a data-science-driven methodology that offers each employee customized, continuous training every single month and significantly changes employee behavior, thereby mitigating the organizational risk of cyber-attacks." "Just like the right technology," Osterman said, "such as firewalls or endpoint detection and response solutions, can protect an organization's data and financial assets from theft or destruction, so can the right employee training."
AI is a broad category that can include supervised and unsupervised machine learning, neural networks and reinforcement learning. "The key to knowing which of these tools to use is predicated on a detailed understanding of the problem you are trying to solve and the types of data -- structured, semi-structured, unstructured -- with which one has to work," Schmarzo explained. A good data scientist, he noted, is like a skilled carpenter in that both will use the best combination of tools to solve the problem at hand. AI may not be new, but AI at scale within complex organizations is still in its early stages. "We still do not yet understand every consequence of integrating AI into larger systems," Gallego said. "Organizations should be ready to take on this risk and should be mature enough to understand the consequences and tradeoffs." Heineken noted that all big data projects, regardless of the approach used, have three basic failure points: understanding the question that needs to be answered, the data architecture and its availability, and the ability to land insights into a business workflow at scale. Addressing these issues effectively, he advised, is "critical" to success.
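The supervised/unsupervised distinction Schmarzo draws can be shown with a toy, stdlib-only sketch. The data, labels and thresholds below are all invented for illustration: the supervised path learns a centroid per known class and classifies new points by nearest centroid; the unsupervised path has no labels and splits the same kind of data by iterating two means (a crude one-dimensional 2-means).

```python
from statistics import mean

# Toy 1-D data: response times in ms. Labels exist only in the
# supervised case; every name and number here is invented.
labeled = [(10, "fast"), (12, "fast"), (95, "slow"), (110, "slow")]
unlabeled = [11, 9, 100, 98, 13]

# Supervised: learn one centroid per class from the labeled examples.
centroids = {}
for cls in {c for _, c in labeled}:
    centroids[cls] = mean(x for x, c in labeled if c == cls)

def classify(x):
    """Assign a new point to the class with the nearest centroid."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

# Unsupervised: no labels -- discover two groups by alternating
# assignment and mean updates until the two "means" stabilise.
lo, hi = min(unlabeled), max(unlabeled)
for _ in range(10):
    a = [x for x in unlabeled if abs(x - lo) <= abs(x - hi)]
    b = [x for x in unlabeled if abs(x - lo) > abs(x - hi)]
    lo, hi = mean(a), mean(b)

print(classify(14))          # -> fast
print(sorted(a), sorted(b))  # -> [9, 11, 13] [98, 100]
```

The point of the toy is Schmarzo's: the right tool follows from whether the problem comes with labeled outcomes (supervised) or only raw structure to discover (unsupervised).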
According to the indictment unsealed at the time, Shalon was the mastermind of the whole operation, which prosecutors dubbed “hacking as a business model.” Shalon was the owner of US-based Bitcoin exchange Coin.mx, which he operated with Orenstein. Both are Israelis. With the help of Aaron, an American, the group allegedly bought up the type of penny stocks so often used in pump-and-dump scams. Then, using the customer data allegedly stolen from JPMorgan, Dow Jones, Scottrade and others, they blasted out emails to dupe the financial organizations’ customers and subscribers into buying the junk. It worked like a charm: they allegedly pocketed $2m from one deal alone. Prosecutors said the scheme generated “tens of millions of dollars in unlawful proceeds.” According to Monday’s indictment, Tyurin took his marching orders from Shalon. The New York Times reports that Tyurin’s lawyer, Florian Miedel, said in a statement that his client was “hired by the originators and brains of the scheme to infiltrate vulnerable computer systems at their direction.”
Amid looming threats, it is important to guard against descending into a spiral of pessimism and hate. Finding the objective middle ground between abandoning technology and resigning ourselves to a total surrender of privacy for instant benefit is the need of the hour. And that begins with acknowledging all the advantages leveraged so far. To put things in perspective, it is necessary to ask three questions integral to this global dilemma. One: where does the buck stop when it comes to data security? Two: what is the role of the user in protecting his data and privacy while continuing to integrate the digital advantage into routine tasks? Three: is it possible to overcome the trust deficit that grows by the day? Before looking at the answers, let’s shed light on the evolution of the smart world we claim to inhabit. From the days of the barter system to paying bills and having food delivered to your doorstep, we have come a long way indeed.
Few organizations currently manage IT and OT with the same staff and tools. After all, these networks evolved with different priorities and operate in inherently different environments. Nevertheless, to address this new, complex threat and to protect the broader attack surface, many industrial organizations have begun to converge their IT and OT groups. The ‘convergence initiative’ is anything but simple: the growing pains of bringing together two substantially different worlds can prove a real challenge. The IT/OT convergence trend is not only driving integration of IT tools with OT solutions; it also requires alignment of strategic goals, collaboration and training, and bridging between two departments whose people have different backgrounds, mindsets and departmental concerns. In general, IT people are used to working with the latest and greatest hardware and software, including the best security available to protect their networks. They tend to spend time patching, upgrading and replacing systems.
Quote for the day:
"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell