Daily Tech Digest - November 17, 2020

SD-WAN needs a dose of AIOps to deliver automation

In some ways, SD-WAN exacerbates the troubleshooting problem. It adds a level of resiliency to the network via multi-path networking that can hide outages. This leads to a situation where the network operations dashboard can show everything is "green," but apps are performing poorly. Network performance issues have become glaringly obvious with the rise of video, and they are causing network engineers to constantly scramble to remediate issues. Here is where AI can make a difference. AI systems can ingest the massive amounts of data provided by network infrastructure (LAN, WLAN and WAN) to "see" things that even the savviest network engineer can't. At one time, when networks were fairly simple and traffic volumes were lower, it was possible for a seasoned network professional to "know" a network and quickly find the root of problems through a combination of domain knowledge and rapid inspection of traffic. That is no longer the case today, as the number of devices and applications and the volume of information have skyrocketed. One of the big changes is that periodic polling data has been replaced by real-time streaming telemetry, which increases the volume of data by an order of magnitude or more.
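
To make the streaming-telemetry point concrete, here is a minimal sketch of the kind of check an AIOps pipeline might run over a live latency feed: a rolling z-score that flags samples a "green" dashboard average would smooth over. The metric, window size, and threshold are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch: flagging anomalies in streaming WAN telemetry with a rolling
# z-score. Window size and threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

WINDOW = 60          # keep the last 60 samples (e.g., one per second)
THRESHOLD = 3.0      # flag samples more than 3 standard deviations from the mean

def detect_anomalies(samples):
    """Yield (index, value) pairs that deviate sharply from recent history."""
    history = deque(maxlen=WINDOW)
    for i, value in enumerate(samples):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > THRESHOLD:
                yield i, value
        history.append(value)

# Example: steady ~20 ms latency with a sudden spike that a per-hour average
# on a dashboard would likely hide.
latency_ms = [20 + (i % 3) for i in range(120)] + [180] + [20] * 30
print(list(detect_anomalies(latency_ms)))
```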


Ripe for digital disruption: Which industries are most at risk and why

The changing demographics favor workers who are much more open to gig work and who place greater trust in digital platforms to create marketplaces. This has opened the door to changes in typically cohesive industries, such as higher education. The increased demand for digital skills has led many students to decouple academic interest from professional credentialing. This will lead to an exodus from costlier schools in favor of boutique schools that cater to narrower interests. Students will earn digital credentials from specific, technology-heavy institutions like Lambda School early in their careers, and pursue further growth and learning throughout their careers from organizations such as Coursera or LinkedIn Learning. Generation Z has grown up with democratized value creation, like YouTube channels or Twitch streamers that organically found their base and built their audiences using digital techniques. These new, digital entities can identify the most valuable parts of a business process, align themselves to those, and outsource the other aspects with great velocity. Tesla, for example, has done away with its PR department and is relying on its outspoken CEO to message the market directly.


The seven elements of successful DDoS defence

Because multiple computers from a globally dispersed botnet “zombie army” of hijacked internet-connected devices attempt to flood a server with fake traffic to knock it offline, DDoS attacks are already more destructive than Denial of Service (DoS) attacks perpetrated from a single machine. However, in recent years we’ve monitored a disturbing trend: DDoS used as a smokescreen. The service disruption draws the IT team’s attention away from a separate and more sophisticated incursion, such as an account takeover or phishing campaign. The damage from the DDoS alone can be bad enough: a targeted website can go down within minutes of a strike but take hours to recover. In fact, 91% of organisations have experienced downtime from a DDoS attack, with each hour of downtime costing an average of $300,000. Beyond the revenue loss, DDoS can erode customer trust, force businesses to spend large amounts on compensation, and cause long-term reputational damage, particularly if it leads to other breaches. ... A comprehensive defence is essential, but with attacks ranging from massive volumetric bombardments to sophisticated and persistent application-layer threats, what are the most important elements to consider in a potential solution?
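
As a rough illustration of the downtime figures cited above, here is a back-of-the-envelope calculation using the quoted $300,000-per-hour average; the outage durations themselves are hypothetical.

```python
# Back-of-the-envelope downtime cost, using the $300,000-per-hour average cited
# in the article. Outage durations below are illustrative assumptions.
AVERAGE_COST_PER_HOUR = 300_000  # USD

def downtime_cost(hours_down: float) -> float:
    """Estimate the direct cost of an outage; excludes reputational damage."""
    return hours_down * AVERAGE_COST_PER_HOUR

for hours in (0.5, 2, 8):
    print(f"{hours:>4} h outage ~ ${downtime_cost(hours):,.0f}")
```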


Breakdown of a Break-in: A Manufacturer's Ransomware Response

At the 2020 (ISC)² Security Congress, SCADAfence CEO Elad Ben-Meir took the virtual stage to share details of a targeted industrial ransomware attack against a large European manufacturer earlier this year. His discussion of how the attacker broke in, the collection of forensic evidence, and the incident response process offered valuable lessons to an audience of security practitioners. The firm learned of this attack late at night when several critical services stopped functioning or froze altogether. Its local IT team found ransom notes on multiple network devices and initially wanted to pay the attackers; however, after the adversaries raised their price, the company contacted SCADAfence's incident response team. ... Before it arrived on-site, the incident response team instructed the manufacturer to contain the threat to a specific area of the network and prevent the spread of infection, minimize or eliminate downtime of unaffected systems, and keep the evidence in an uncontaminated state. "The initial idea was to try to understand where this was coming from, what machines were infected and what machines those machines were connected to, and if there was the ability to propagate additionally from there," said Ben-Meir in his talk.
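
A minimal sketch of the containment question Ben-Meir describes, i.e. which machines the infected ones could reach next: a breadth-first walk over an asset connectivity map. The hostnames and connectivity data are hypothetical; a real incident response team would derive this from network monitoring rather than a hand-written dictionary.

```python
# Sketch: compute the potential spread (containment scope) from known-infected
# hosts using breadth-first search over a connectivity map. All data is hypothetical.
from collections import deque

connections = {
    "hmi-01": {"plc-01", "eng-ws-02"},
    "eng-ws-02": {"file-srv", "hmi-01"},
    "file-srv": {"erp-app", "eng-ws-02"},
    "plc-01": set(),
    "erp-app": set(),
}

def potential_spread(infected, graph):
    """Return every host reachable from the infected set."""
    seen, queue = set(infected), deque(infected)
    while queue:
        host = queue.popleft()
        for neighbour in graph.get(host, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

print(potential_spread({"eng-ws-02"}, connections))
```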


Sustainability: The growing issue of supply chain disruption

There is likely to be more disruption ahead as extreme weather events appear to be on the rise. According to McKinsey, climate disruptions to supply chains are going to become increasingly frequent and more severe. Kern said: “It’s a mathematical effect that the number of natural catastrophes has been increasing massively in recent years. If you look at Hurricanes Katrina, Harvey, Irma and Maria as well as the Japanese earthquake and the Thai floods you can see that we are getting loss events far above the previous average of around $50bn. We’re seeing nat cats causing losses up to $150bn of insured value, so as you can imagine this is a very big concern for us.” Baumann pointed out that as well as more extreme weather, other future trends could play a role. He said: “There are several drivers of disruption. The complexity of supply chains is increasing, and more complexity means more potential points of failure. Even simple goods can have as many as ten suppliers. That in turn adds to the risk that transportation and production may be disrupted.” At the same time, practices such as just-in-time delivery or lean manufacturing can also introduce risks, particularly when organisations are focused purely on reducing costs.
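
A quick worked example of Baumann's point that more suppliers means more potential points of failure: if each of ten suppliers independently faces some probability of disruption in a given period, the chance that at least one is hit grows quickly. The probabilities below are illustrative assumptions.

```python
# Probability that at least one of n independent suppliers is disrupted:
# 1 - (1 - p)^n. The per-supplier probabilities are illustrative assumptions.
def prob_any_disruption(p: float, n_suppliers: int = 10) -> float:
    return 1 - (1 - p) ** n_suppliers

for p in (0.02, 0.05, 0.10):
    print(f"p = {p:.0%} per supplier -> {prob_any_disruption(p):.0%} chance of at least one disruption")
```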


Figuring out programming for the cloud

The trick, says Rosoff, is to give the programmer enough of a language to express the authorization rule, but not so much freedom that they can break the entire application if they have a bug. How does one determine which language to use? Rosoff offers three decision criteria: Does the language allow me to express the complete breadth of programs I need to write? (In the case of authorization, does it let me express all of my authZ rules?); Is the language concise? (Is it fewer lines of code and easier to read and understand than the YAML equivalent?); Is the language safe? (Does it stop the programmer from introducing defects, even intentionally?). We still have a ways to go to make declarative languages the easy and obvious answer to infrastructure-as-code programming. One reason developers turn to imperative languages is that they have huge ecosystems built up around them with documentation, tooling, and more. Thus it’s easier to start with imperative languages, even if they’re not ideal for expressing authorization configurations in IaC. We also still have work to do to make the declarative languages themselves approachable for newbies. This is one reason Polar, for example, tries to borrow imperative syntax.
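
To illustrate the trade-off Rosoff describes without claiming to reproduce Polar itself, here is a hedged sketch of the declarative idea: authorization rules kept as data and evaluated by a deliberately small engine, so a buggy rule can over- or under-grant access but cannot break the surrounding application the way arbitrary imperative code can. The roles, actions, and resource types are hypothetical.

```python
# Sketch of declarative authorization: rules as data, evaluated by a tiny,
# constrained engine. This is not Polar or the oso library, only an illustration.
RULES = [
    # (required role, permitted action, resource type)
    ("admin",  "delete", "report"),
    ("admin",  "read",   "report"),
    ("viewer", "read",   "report"),
]

def is_allowed(actor_roles, action, resource_type):
    """Rules can only grant access; a mistaken rule cannot corrupt application
    state, which is the 'safety' criterion in the list above."""
    return any(
        role in actor_roles and action == rule_action and resource_type == rule_type
        for role, rule_action, rule_type in RULES
    )

print(is_allowed({"viewer"}, "read", "report"))    # True
print(is_allowed({"viewer"}, "delete", "report"))  # False
```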


A Cloud-Native Architecture for a Digital Enterprise

Cloud-native applications are all about dynamism, and microservice architecture (MSA) is critical to accomplishing this goal. MSA helps to divide and conquer by deploying smaller services focused on well-defined scopes. These smaller services need to integrate with different software-as-a-service (SaaS) endpoints, legacy applications, and other microservices to deliver business functionality. While microservices expose their capabilities as simple APIs, consumers should ideally access these as integrated, composite APIs that align with business requirements. A combination of an API-led integration platform and cloud-native technologies helps to provide the secured, managed, observed, and monetized APIs that are critical for a digital enterprise. The infrastructure and orchestration layers represent the same functionality discussed in the cloud-native reference architecture. Cloud Foundry, Mesos, Nomad, Kubernetes, Istio, Linkerd, and OpenPaaS are examples of current industry-leading container orchestration and service mesh platforms. Knative, AWS Lambda, Azure Functions, Google Functions, and Oracle Functions are a few examples of functions-as-a-service (FaaS) platforms.
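
As an illustration of the composite-API idea, here is a minimal sketch in which one consumer-facing call aggregates two finer-grained microservices. The service names, fields, and stubbed responses are hypothetical; in practice these would be HTTP calls fronted by an API gateway or integration layer.

```python
# Sketch of a composite API over two microservices. The stubbed functions stand in
# for network calls to hypothetical order and customer services.
def fetch_order(order_id: str) -> dict:
    # stand-in for GET /orders/{id} on an order microservice
    return {"id": order_id, "customer_id": "c-42", "status": "SHIPPED"}

def fetch_customer(customer_id: str) -> dict:
    # stand-in for GET /customers/{id} on a customer microservice
    return {"id": customer_id, "name": "Acme GmbH", "tier": "gold"}

def order_summary(order_id: str) -> dict:
    """Composite, business-aligned API: one call for the consumer instead of two."""
    order = fetch_order(order_id)
    customer = fetch_customer(order["customer_id"])
    return {
        "order_id": order["id"],
        "status": order["status"],
        "customer_name": customer["name"],
        "customer_tier": customer["tier"],
    }

print(order_summary("o-1001"))
```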


New streaming and digital media rules by Indian government rattle industry

So, what exactly does this rule portend? It's not entirely clear. To some who earn their bread and butter monitoring these industries, the prognosis is dire. Nikhil Pahwa, a digital rights activist and founder of MediaNama, a prominent website that covers these industries, told the Guardian: "The fear is that with the Ministry of Information and Broadcasting -- essentially India's Ministry of Truth -- now in a position to regulate online news and entertainment, we will see a greater exercise of government control and censorship." If this becomes reality, it would wreck the plans of companies such as Netflix and Amazon that have seen their fortunes rise dramatically in the last few years with the spectacular boom in smartphones and cheap data, both goldmines that keep on giving. The COVID era has only added more fuel to this trend. Eager to capitalise on this nascent market, Netflix has already pumped $400 million into the country and amassed 2.5 million precious subscribers. Consulting outfit PwC predicts that India's media and entertainment industry will grow at a brisk 10.1% clip annually to reach $2.9 billion by 2024.


Executive Perspective: Privacy Ops Meets DataOps

PrivacyOps is emerging because privacy considerations can no longer be an afterthought in an organization’s software development lifecycle -- they need to be tightly integrated. There is pressure on organizations to prove they are taking responsibility for personal data and acting in compliance with regulations, and it’s only going to increase. The real opportunity that the emergence of PrivacyOps presents is bringing security and privacy processes together, and standardizing best practices that need to be implemented across organizations. It’s far too easy for engineering, analytics, and compliance teams to talk over each other. Bringing these domains together through software will help to set expectations across the industry about privatizing data assets. Techniques such as k-anonymization, for example, are practiced by some of the best teams in healthcare, but they are hardly commonplace, despite being relatively easy to implement. To deliver compliant analytics, you need data engineers who can reliably ship data from place to place while implementing the appropriate transformations. However, what actually needs to be done is often not very clear to the engineering team. Data scientists want as much data as possible; compliance teams are pushing to minimize the data footprint. Regulations are in flux and imprecise.
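
For readers unfamiliar with the k-anonymization technique mentioned above, here is a hedged sketch of the basic check: every combination of quasi-identifiers must be shared by at least k records before the dataset is released. The records, quasi-identifiers, and value of k are illustrative assumptions.

```python
# Sketch of a k-anonymity check: count records per quasi-identifier combination
# and require every group to have at least k members. Data is hypothetical.
from collections import Counter

K = 3
QUASI_IDENTIFIERS = ("age_band", "zip_prefix")

records = [
    {"age_band": "30-39", "zip_prefix": "941", "diagnosis": "A"},
    {"age_band": "30-39", "zip_prefix": "941", "diagnosis": "B"},
    {"age_band": "30-39", "zip_prefix": "941", "diagnosis": "A"},
    {"age_band": "40-49", "zip_prefix": "100", "diagnosis": "C"},
]

def is_k_anonymous(rows, quasi_ids, k):
    """True if every quasi-identifier combination appears in at least k rows."""
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(records, QUASI_IDENTIFIERS, K))  # False: the second group has only 1 row
```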


2021 predictions for the Everywhere Enterprise

While people will eventually return to the office, they won’t do so full-time, and they won’t return in droves. This shift will close the circle on a long trend that has been building since the mid-2000s: the dissolution of the network perimeter. The network and the devices that defined its perimeter will become even less special from a cybersecurity standpoint. ... Happy, productive workers are even more important during a pandemic, especially as employees are, on average, working three hours longer since the pandemic started, disrupting their work-life balance. It’s up to employers to focus on the user experience and make workers’ lives as easy as possible. When the COVID-19 lockdown began, companies coped by expanding their remote VPN usage. That got them through the immediate crisis, but it was far from ideal. On-premises VPN appliances suffered a capacity crunch as they struggled to scale, creating performance issues, and users found themselves dealing with cumbersome VPN clients and log-ins. That approach worked for a few months, but as employees settle in to continue working from home in 2021, IT departments must concentrate on building a better remote user experience.



Quote for the day:

"At first dreams seem impossible, then improbable, then inevitable." -- Christopher Reeve
