The ability to understand everything that goes on within an environment, and then to design and manage a network that fully meets the needs of the enterprise, is reaching the point where it is too complex for even a well-resourced team of engineers to achieve with certainty. The problem has become critical enough that, without investing in intelligence in the datacentre, many businesses will face an increase in unplanned outages and expensive troubleshooting. Happily, there is a solution to this technological headache: Intent-Driven Networks. While the overall concept is new, Huawei’s Intent-Driven Network for CloudFabric Cloud Data Center Networking Solution is already available and handling some of the largest datacentre workloads. What makes Intent-Driven Networks innovative is the machine learning algorithms underpinning them. Machine learning has finally reached the point where it can help to understand how a network is being used. Artificial intelligence (AI) can then be part of the process of devising the right configuration for the network, maximising availability and redundancy.
"You adopt virtualization, [and] a lot of your people don't need to care as much about their metal. You adopt infrastructure-as-a-service in the cloud, [and] you're not needing to worry about the hypervisors any more. You adopt a PaaS, and there are other things that essentially go away. All become 'smaller teams' problems."

"You adopt serverless, and for developers to be successful in developing and architecting applications that work on these platforms," Kersten continued, "they also have to learn more of the operational burden. And it may be different to your traditional sysadmin who is racking and stacking hardware, and having to understand disk speed and things like that, but the idea that developers get to operate in a pure bubble and not actually think about the operational burden at all is completely deluded. It just isn't how I'm seeing any of the successful serverless deployments work. The successful ones are developers who have some operational expertise, have some idea of what it's like to actually manage things in production, because they're still having to do things."
The infusion of cloud and software as a service (SaaS) technologies into enterprises has created complex hybrid information technology environments, each complicated by its own blend of tools and customizations. With legacy and next-generation cloud systems sitting side by side inside the enterprise, cloud-native technologies create a unified framework for these tools to work together and power the modern business. Do not confuse cloud-native with cloud computing; adopting cloud-native does not require the exclusive use of public cloud. Cloud-native is a way of thinking about and designing the components of software systems to optimize for distributed, cloud-based deployments. These deployments address increasingly urgent issues of scale and availability faced by enterprises of all sizes, not just the internet giants who pioneered the patterns and tooling associated with cloud-native. Cloud-native design consists of three component parts. Getting a piece of code that a developer writes (along with everything it depends on) deployed in production can be tough. Tools have emerged to help.
In making data machine-readable through XBRL, the ESEF directive will make the financial information of more than 5,000 companies in the European Union easily transferable across technologies that natively process XML, such as NoSQL databases. In the UK, according to a white paper by the Financial Reporting Council, more than two million companies already report using Inline XBRL (iXBRL) to HMRC, while another two million file their accounts using iXBRL with Companies House. However, ESEF will require many more companies, including all listed companies, to file digital accounts with XBRL in the near future. This is a sign of things to come in the UK and across the globe. The Bank of Japan was among the early adopters, but more recently the Bank of England announced a Proof of Concept (PoC) project to explore how XBRL could help it to significantly reduce the cost of change, drive resource efficiencies and improve speed and flexibility of access to large quantities of regulatory data from financial institutions.
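To make the "machine-readable" point concrete, here is a minimal sketch of pulling one fact out of an XBRL instance with a standard XML parser. The instance document, company name, and `ifrs` namespace URI below are invented for illustration; real ESEF filings use Inline XBRL embedded in XHTML and far richer taxonomies.

```python
# Minimal sketch: reading one fact from a hypothetical XBRL instance
# with Python's standard XML parser. Only the xbrli namespace is real;
# the entity, values and "ifrs" namespace URI are made up.
import xml.etree.ElementTree as ET

instance = """<?xml version="1.0"?>
<xbrli:xbrl xmlns:xbrli="http://www.xbrl.org/2003/instance"
            xmlns:ifrs="http://example.com/ifrs">
  <xbrli:context id="FY2023">
    <xbrli:entity>
      <xbrli:identifier scheme="http://example.com">EXAMPLECO</xbrli:identifier>
    </xbrli:entity>
    <xbrli:period><xbrli:instant>2023-12-31</xbrli:instant></xbrli:period>
  </xbrli:context>
  <ifrs:Revenue contextRef="FY2023" decimals="0">5000000</ifrs:Revenue>
</xbrli:xbrl>"""

ns = {"xbrli": "http://www.xbrl.org/2003/instance",
      "ifrs": "http://example.com/ifrs"}
root = ET.fromstring(instance)

# Every fact is a plain XML element tagged with its reporting context,
# which is what makes the figures trivially machine-processable.
fact = root.find("ifrs:Revenue", ns)
print(fact.text, fact.get("contextRef"))  # -> 5000000 FY2023
```

Because each figure is tagged with a concept name and a context rather than buried in a PDF table, downstream systems can load filings directly without manual re-keying.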
There are some important considerations to be made before starting a programme. These include operational and technical issues – such as securing the necessary equipment – as well as determining the resources and funding needed for newly formed teams. Firms must also ensure that existing teams are not left shorthanded and are still able to carry out their responsibilities. As with any team, the effectiveness of the CSIRT is greatly increased when it has a defined objective. When everyone within the team is clear on their role, it’s easier for them to pull in the same direction. Teams should be structured in a way that gives every member responsibility and accountability, but also defines who has the final say. During the planning phases it’s also essential to remove any areas of duplication. Re-doing activities and processes wastes resources and simply delays reaching the desired outcome. Companies can identify where overlaps and gaps exist by analysing their current cyber response programmes.
From a technological perspective, an enterprise is only as agile as the network it operates on. As a cloud footprint expands, increasingly complex network policies that bind hybrid and multi-clouds together can significantly reduce a company's ability to pivot toward new technologies. Cloud orchestration and multiple cloud management platforms can be used to recapture business agility at the cloud networking level. Cloud orchestration can be thought of as the upper-level management layer that controls the various network automation building blocks that replaced manual tasks. Orchestration tools are used to develop intelligent business workflows that include various network requirements including application performance, network resiliency and security postures. Those policies can then be deployed throughout the entire cloud infrastructure. While cloud orchestration creates the foundation for end-to-end network control within a specific cloud platform, users are now seeking to gain the same orchestration benefits between two or more private and public cloud providers. This is where multiple cloud management platforms come into play.
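The relationship described above – an orchestration layer composing lower-level automation building blocks and pushing one intent-level policy across several clouds – can be sketched in code. This is purely illustrative: the class names, policy fields, and platform methods below are invented, not the API of any real orchestration product.

```python
# Illustrative sketch (not a real product's API): an orchestration layer
# drives the automation building blocks that replaced manual tasks, and
# deploys one business-level policy across every cloud platform.
from dataclasses import dataclass, field

@dataclass
class Policy:
    name: str
    latency_ms: int                    # application-performance requirement
    redundancy: int                    # network-resiliency requirement
    allow_ports: list = field(default_factory=list)  # security posture

class CloudPlatform:
    def __init__(self, name: str):
        self.name = name
        self.applied = []

    # Automation building blocks: each wraps what was once a manual task.
    def configure_load_balancer(self, policy: Policy): ...
    def provision_links(self, policy: Policy): ...
    def apply_firewall_rules(self, policy: Policy): ...

    def apply(self, policy: Policy):
        self.configure_load_balancer(policy)
        self.provision_links(policy)
        self.apply_firewall_rules(policy)
        self.applied.append(policy.name)

def orchestrate(policy: Policy, platforms: list) -> list:
    """The workflow: deploy one policy end to end, across all platforms."""
    for platform in platforms:
        platform.apply(policy)
    return [platform.name for platform in platforms]

clouds = [CloudPlatform("private-dc"), CloudPlatform("aws"), CloudPlatform("azure")]
web_tier = Policy("web-tier", latency_ms=50, redundancy=2, allow_ports=[80, 443])
print(orchestrate(web_tier, clouds))  # -> ['private-dc', 'aws', 'azure']
```

The point of the sketch is the shape, not the detail: the orchestration function knows nothing about any single cloud's internals, which is what lets the same intent span private and public platforms.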
Since GDPR also restricts cross-border data transfers, networking teams must understand the country of origin of any particular data and how that data will traverse the organization’s networks, remaining mindful of which paths it will take and where it will be stored. To keep track of this information, businesses will require full visibility across their entire network, including the datacenters and – now more than ever – the cloud. This holistic visibility across the entire service delivery infrastructure – from the wireless edge to the core, to the datacenter and into the cloud – can be achieved through continuous end-to-end monitoring and analysis of the traffic data, or “wire data”, flowing over the network. With GDPR compliance, and Article 32 in particular, not to mention much of modern business activity, reliant on the availability of effective, resilient and secure infrastructure, it’s important that the right approach is taken to service assurance. Real-time analysis of this wire data enables IT teams to generate smart data that provides the end-to-end, service-level visibility and actionable insights they need to deliver this assurance.
One of the tools that I use to keep the organization focused is a structure called VSEM, for vision, strategy, execution and measurement. Vision is five years out; that's your vision for your entire IT organization. Strategy is two to four years out: what are your strategic initiatives, like moving to public cloud; there are five or six. All the projects and programs are under execution. M is measurement. Also, once a year my staff goes off site to talk about the scope, intent and mission that we're going to accomplish in the next 12 to 18 months. So, every year we come up with that intent and mission; that's the what. And from that we pick the technology that we need to work on; that's the how. ... It's joint ownership of objectives, so when I work on a project with India and the U.S., we have people in India and in the U.S. working on the same teams and the same initiatives with the same ultimate goal. That's what drives it. Some [other companies] will set up discrete centers of excellence, and that can work, but they can become islands.
“Unfortunately, there’s no magic eight ball when it comes to cyber security; it is a moving target. Just because something protected a business last year does not mean it will keep the company safe this year,” he says. “Therefore, CIOs need to be particularly vigilant, carry out regular risk assessments of the business, and use this information to draw up a security plan that ensures there aren’t any vulnerabilities that can be exploited in the future.” The basis for this plan, he says, should be an understanding of the behavioural changes in people. “The best technological defences can be unwound by a social engineering attack, so it is important that employees are trained to be both the first and last lines of defence. Security plans should be reviewed regularly to stay one step ahead of threats, as well as of changes to the technology used in the company.” Developing a disaster recovery plan takes significant time and effort. But Mike Osborne, founding partner of the Business Continuity Institute and executive chairman of Databarracks, says creating and implementing one for cyber security is particularly challenging.
Machines have made great contributions to the quality and accessibility of education, from massive open online courses (MOOCs) to teaching simulations to Khan Academy lessons. In commercial organizations, though, where teaching requires understanding the context of a person’s development within the organization, managers and coaches shine. For example, when Ben Horowitz was the director of product management at Netscape, he faced a problem: Many managers on his team felt overworked, yet their efforts did not translate into successful evangelism for the products they were in charge of. He wrote a short document titled Good Product Manager/Bad Product Manager and used it to train his team on his basic expectations. What happened next shocked him: “The performance of my team instantly improved. Product managers that I previously thought were hopeless became effective. Pretty soon, I was managing the highest-performing team in the company.”
Quote for the day:
"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox