One of the biggest problems with data security is that so much of our computing now takes place not on a physical mainframe, but in the cloud. It used to be that data thieves might have to break into a physical space and steal hard drives or mainframes to get at data. Not anymore. With more and more computing of all kinds taking place in the cloud, that data can become extremely vulnerable. In fact, data is at its most vulnerable while moving between parties through the cloud, and the growing use of multi-cloud environments only exacerbates the problem. Imagine a single piece of information that must be transmitted from Company A to Company B. Company A knows that its servers are secure, and Company B is confident its data is also secure. But what about the “space” in between? Data passports allow the data to carry its own encryption with it, so that even if it is intercepted, it is useless. This is extremely valuable for companies and industries that transmit data in multi-cloud environments, and will be especially useful in highly regulated industries like banking and insurance.
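The idea of data carrying its own protection can be sketched in Python. This is a simplified, illustrative envelope only: the names (`seal`, `open_envelope`) and the SHA-256-based toy keystream are assumptions for the sketch, and a real data passport would use a vetted AEAD cipher such as AES-GCM rather than anything hand-rolled.

```python
import base64
import hashlib
import hmac
import json
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream (SHA-256 in counter mode) -- illustration only,
    # NOT a production cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(payload: bytes, key: bytes, policy: dict) -> str:
    """Wrap the payload in a self-describing encrypted envelope."""
    nonce = os.urandom(16)
    cipher = bytes(a ^ b for a, b in
                   zip(payload, _keystream(key, nonce, len(payload))))
    envelope = {
        "policy": policy,  # travels with the data, readable at any hop
        "nonce": base64.b64encode(nonce).decode(),
        "data": base64.b64encode(cipher).decode(),
    }
    body = json.dumps(envelope, sort_keys=True)
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "tag": tag})

def open_envelope(sealed: str, key: bytes) -> bytes:
    """Verify the envelope's integrity tag, then decrypt the payload."""
    outer = json.loads(sealed)
    expected = hmac.new(key, outer["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(outer["tag"], expected):
        raise ValueError("envelope tampered with")
    envelope = json.loads(outer["body"])
    nonce = base64.b64decode(envelope["nonce"])
    cipher = base64.b64decode(envelope["data"])
    return bytes(a ^ b for a, b in
                 zip(cipher, _keystream(key, nonce, len(cipher))))
```

The point of the sketch is that the policy metadata and the protection travel inside the envelope itself, so an interceptor in the “space” between clouds gets only opaque bytes.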
The pace and scale of technological disruption—with its risks of unemployment and growing income inequality—are as much a social and political challenge as a business one. Nonetheless, employers are best placed to be in the vanguard of change and make a positive societal impact—for example, by upgrading the capabilities of their employees and equipping them with new skills. And employers themselves stand to reap the greatest benefit if they can successfully transform the workforce in this way. Many leading businesses are realizing that they cannot hire all the new skills they need. The better solution is to look internally and develop the talent they already have, as this approach is often not only quicker and more financially prudent but also good for morale and the company’s long-term attractiveness to potential recruits. We already know from our executive surveys that most leaders see talent as the largest barrier to the successful implementation of new strategies—notably, those driven by digitization and automation.
IT teams should invest in a number of tactics to optimise performance. However, the number one challenge they face is keeping applications running and resolving problems with extremely thin teams. Visibility solves this critical problem. Visibility into and across the entire application and IT infrastructure is paramount for keeping applications running and for reducing MTTD (Mean Time To Detection) and MTTR (Mean Time To Resolution). Teams will have a better understanding of their current resources and can scale appropriately. For example, they may discover that they have excessive server resources assigned to their application, over and above those necessary to run it safely. Plus, they will have visibility into how cloud resources are performing (how utilised they are, whether they have the proper amount of disk space, memory, etc.) and can easily see what is and is not being used. Teams will benefit from higher morale, key insights, and increased overall ownership.
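The over-provisioning discovery described above can be sketched as a simple check a visibility tool might run. The inventory figures, the `ResourceReport` shape, and the 25% headroom margin are all illustrative assumptions, not values from any particular monitoring product.

```python
from dataclasses import dataclass

@dataclass
class ResourceReport:
    name: str
    used_pct: float         # observed utilisation, 0-100
    provisioned_units: int  # what is currently assigned
    needed_units: int       # what peak demand actually requires

def flag_overprovisioning(reports, headroom=1.25):
    """Return resources provisioned well beyond peak need plus headroom.

    The 25% headroom is an assumed safety margin, not a standard value.
    """
    return [r.name for r in reports
            if r.provisioned_units > r.needed_units * headroom]

# Hypothetical inventory a visibility tool might surface.
inventory = [
    ResourceReport("app-servers", used_pct=22.0,
                   provisioned_units=10, needed_units=4),
    ResourceReport("db-replicas", used_pct=81.0,
                   provisioned_units=3, needed_units=3),
]
print(flag_overprovisioning(inventory))  # flags the over-provisioned tier
```

In this sketch, the app-server tier is flagged because ten units are assigned where five (four plus headroom) would do, while the database replicas are left alone.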
The good news is that robust cybersecurity measures will ward off a state attack — and your garden-variety cybercriminal. “Attacks, whether from criminal or nation-state actors, largely use the same techniques. An organization’s continual vigilance to implement and maintain cybersecurity best practices is critical,” advises Cotton. Cotton suggests that small or medium companies and organizations incorporate a “Red Team” exercise to identify employees who need additional protection or training, lest they become spear-phishing targets. Likewise, increased oversight of activity logs for such individuals would help. “When targeting critical management or operations employees of either a larger nation-state target or even their sub-contractors, the use of a smaller unconnected organization might be an easier way to infect a spear-phishing target’s home computer. Then the attack would move across the corporate VPN to the actual target of the attack,” details Cotton. He adds that some of these smaller, seemingly unconnected organizations might be a local library or health-care system. By those criteria, it’s understandable why MSPs should be concerned about state-sponsored attacks.
Cloud bursting is a deployment topology in which regular traffic is directed to an on-premises deployment by a load balancer. As load increases and traffic crosses a particular threshold, new instances are spun up in the public cloud and the additional traffic is directed there. This model is primarily used for cost optimization. A common scenario is to provision additional infrastructure in the public cloud to handle seasonal spikes and to scale back or dismantle it after traffic returns to normal. This often turns out to be cheaper than maintaining the same infrastructure on-premises, where it would sit unused during the relatively long periods of regular traffic. ... Systems running in organizations' data centers experience unplanned downtime for various reasons, often causing business losses. To mitigate this, organizations plan different levels of disaster recovery strategies depending on the criticality of the system or application. Setting up a disaster recovery site requires building and operating an offsite data center with its associated costs, which often looks like unnecessary overhead.
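The threshold-based routing that defines cloud bursting can be sketched as a toy load-balancer decision. The `Burster` class, its capacity figures, and the 90% trigger are all assumptions for illustration; real deployments make this decision in the load balancer or autoscaler, not in application code.

```python
from dataclasses import dataclass

@dataclass
class Burster:
    """Toy threshold router: fill on-premises first, burst to cloud past it."""
    onprem_capacity: int      # max concurrent requests on-premises
    threshold: float = 0.9    # assumed burst trigger (90% utilisation)
    onprem_load: int = 0
    cloud_load: int = 0

    def route(self) -> str:
        # Regular traffic stays on-premises below the threshold.
        if self.onprem_load < self.onprem_capacity * self.threshold:
            self.onprem_load += 1
            return "on-premises"
        # Past the threshold, spill over to public-cloud instances.
        self.cloud_load += 1
        return "cloud"

b = Burster(onprem_capacity=10)
destinations = [b.route() for _ in range(12)]
# With capacity 10 and a 0.9 trigger, the first nine requests stay
# on-premises and the remaining three burst to the cloud.
```

Scaling back after the spike would simply be the reverse: once `cloud_load` drains to zero, the cloud instances are dismantled and only the on-premises footprint remains.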
Gartner analysts suggest D&A leaders pilot blockchain smart contracts now, and that companies start by deploying them to automate a simple process, such as non-sensitive data distribution or simple contract formation for contract performance or management purposes. D&A leaders must respond immediately to data challenges using cost-benefit analysis, or programs won't mature enough to influence enterprise-level business strategy. D&A leaders should apply master data management (MDM) disciplines and data-quality metrics to improve process efficiency and drive higher overall ROI from D&A strategies. They should adopt emerging technologies — machine learning (ML), blockchain, smart contracts, and graph tech — as cost-effective means to increase data value and drive efficient decision-making. The report warns that if D&A management doesn't meet data challenges with a net-positive business value proposition, or if its influence stagnates, then neither the company nor its enterprise-level D&A strategies can succeed.
Imagine you’re currently working in a NOC (Network Operations Center), where people have to monitor multiple dashboards. They often have to escalate incidents to L2/L3 workers for troubleshooting, hold war-room calls to determine root cause, and then actually take the manual steps to remedy them. What if you could take some of that manual work off the NOC workers and give them a digital colleague to assist? The digital colleague can reduce the number of errors, find root cause, file service tickets, and, wherever possible, automate incident resolution. Not only would it reduce operations costs, but it would also free up these workers to focus on more meaningful endeavors. That’s the concept of the digital colleague. This holds true for ITSM, ESM, or any function in need of assistance. A digital colleague for the Service Desk will converse with end users in natural language through multiple channels, understand the user’s intent, map it to the service catalog, ask clarifying questions, and automate resolution of up to 50% of service requests, taking load off operations staff.
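The intent-to-catalog mapping step can be sketched with a toy keyword matcher. The catalog entries, keywords, and the `map_intent` helper are hypothetical; production digital colleagues use trained NLU models rather than word overlap, but the shape of the decision — match the utterance to a catalog entry, then automate or escalate — is the same.

```python
# Hypothetical service catalog a "digital colleague" might consult.
SERVICE_CATALOG = {
    "password_reset": {"keywords": {"password", "reset", "locked", "login"},
                       "automated": True},
    "vpn_access":     {"keywords": {"vpn", "remote", "connect", "tunnel"},
                       "automated": True},
    "hardware_issue": {"keywords": {"laptop", "screen", "keyboard", "broken"},
                       "automated": False},  # needs a human technician
}

def map_intent(utterance: str):
    """Map a user's request to the best catalog entry.

    Returns (entry_name, can_automate); (None, False) means the request
    should be escalated to a human service-desk agent.
    """
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for name, entry in SERVICE_CATALOG.items():
        score = len(words & entry["keywords"])
        if score > best_score:
            best, best_score = name, score
    if best is None:
        return None, False
    return best, SERVICE_CATALOG[best]["automated"]

print(map_intent("I am locked out and need a password reset"))
# → ('password_reset', True)
```

Requests that match an automatable entry are resolved end-to-end; everything else is filed as a ticket and routed to a person, which is where the "up to 50%" deflection figure would come from in practice.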
The goal of reinforcement learning is to train a machine learning algorithm to achieve a goal by outputting a particular sequence of outputs for a given sequence of inputs. The rule that the algorithm uses to map inputs to outputs is called the policy. The algorithm explores the solution space, initially at random, until it finds a policy that allows it to achieve its intended goal. This can require the algorithm to run for much longer than a supervised algorithm would. Reinforcement learning is also being explored in the industrial robotics sector to help industrial machines handle industrial goods. Handling and moving industrial goods usually involves a large number of individual movements from an industrial robotic arm. These movements are very difficult to pre-program using conventional programming techniques because of the large number of sequential movements required. Research on robotics powered by reinforcement learning is now being seriously explored. Other emerging cutting-edge applications of reinforcement learning include the allocation or subdivision of computing resources between many different industrial machines.
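The explore-then-learn-a-policy loop described above can be made concrete with a minimal tabular Q-learning sketch on a toy five-state corridor. The environment, the reward, and the hyperparameter values (`alpha`, `gamma`, `epsilon`) are illustrative choices, not tuned values from any real robotics system.

```python
import random

random.seed(0)

# Toy corridor: states 0..4, start at 0, reward 1.0 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    """Environment dynamics: clamp to the corridor, reward at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(200):                    # training episodes
    s = 0
    while s != GOAL:
        # The policy: epsilon-greedy over the learned Q-values,
        # i.e. mostly exploit, sometimes explore the solution space.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy: the best action from each state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
```

After training, the greedy policy moves right from every non-goal state — exactly the "sequence of outputs" that achieves the goal, discovered through random exploration rather than pre-programming. A robotic arm learning a grasping sequence follows the same loop, just with a vastly larger state and action space.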
Creating a framework for managing risk that can be understood across the organization, even by non-cybersecurity professionals. It doesn’t need to be a comprehensive measurement of all risks, but it should use risk indicators representative of the main risk areas, so as to provide an overall barometer of cybersecurity risk and to ensure it’s kept part of the business conversation. Making sure cyber is part of the dialogue at the highest levels of the organization: if the CEO talks about phishing awareness, there’s a good chance this will become a priority at all levels. Creating a security instruction and awareness function, and appointing a senior leader responsible for running security awareness campaigns and overseeing security training. This executive should be empowered to work with colleagues across business functions to design programmes that address the needs of different employee specialities.
Emergency services and policing will also be impacted. With cities growing at a rate of 1.5 million new citizens every day, 5G services will allow police forces to better monitor the urban environment through automation, providing more efficient services and greater public safety while keeping cities in line with environmental targets. Further to this, autonomous vehicles, high-footfall areas, carbon emission levels, safety, and new road and pedestrian planning will all benefit from enhanced monitoring services thanks to 5G. Lastly, there are the environmental use cases to consider. Power supply and lighting will change as a result of 5G, making the lighting of cities and the distribution of energy through smart-grid systems more efficient. Telecoms providers are essential to this. For a start, they can assist with connecting those who are generating energy back into the grid, which is vital for the two-way purchase and sale of energy to succeed. Substations will need higher capacity and faster connections — provided by fibre — in order to facilitate this flow.
Quote for the day:
"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson