Daily Tech Digest - August 05, 2018

Enterprise Infrastructure Management Requires the Right Strategy for Success


The first step is to simplify the environment. Beyond the four categories outlined above, IT organizations also need organizational and cultural change. Streamlining the environment takes planning, time and effort. Look for solutions and approaches that further simplify the environment. At the same time, consider how these changes affect your processes and organizational structure. Not all of the changes will be based in technology. As the demands of your customers change, so will your organization and processes. Look for opportunities to address technical debt and to remove old or unneeded processes. These two steps alone go a long way toward simplification. Part of simplification is the introduction of automation. In the past, organizations had to do everything themselves, partly because mature, sophisticated solutions were scarce and it was always possible to add more people to resolve issues. Today, that approach is simply no longer feasible: humans cannot keep up with the rate of change, and solutions are far more mature and sophisticated than those of the past.



AI, Machine Learning, and the Basics of Predictive Analytics for Process Management

There are certain machine learning applications where you can achieve high accuracy. If you're doing image processing and you use deep learning – a type of machine learning – to identify, "Is this a picture of a cat or is it a picture of a dog?", it turns out that, just like humans, computers can do that very well, given the right training data and the right machine learning methods. There are also things out there that, regardless of how advanced the machine is, or how intelligent the human is, neither the machine nor the human can predict accurately – such as exactly which customer is going to cancel. But what you can do is draw out the trends and assign probabilities. That's the job of the predictive model: to assign probabilities of who is more or less likely to show whatever outcome or behavior you're trying to predict. So you determine what would be helpful to predict, and then you find out, "I can't predict accurately, but wow, I can predict a lot better than guessing." Probably, in many cases, better than any human could, because of all the data at the computer's disposal.
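The probability-assignment idea can be sketched with a toy scoring model. Everything below is invented for illustration: the signals, the weights and the function are assumptions, not a fitted model.

```python
import math

def churn_probability(months_inactive, support_tickets, monthly_spend):
    # Toy score combining customer signals; weights are illustrative only.
    score = 0.6 * months_inactive + 0.4 * support_tickets - 0.01 * monthly_spend
    # The logistic function squashes the score into a probability in (0, 1).
    return 1 / (1 + math.exp(-score))

# The model does not say *who* will cancel, only who is more or less likely:
engaged = churn_probability(months_inactive=0, support_tickets=1, monthly_spend=200)
lapsed = churn_probability(months_inactive=6, support_tickets=4, monthly_spend=20)
```

Neither output is a hard prediction; ranking customers by these probabilities is what lets the model beat guessing.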


Cybersecurity stocks savaged for a second week as Symantec results disappoint


Symantec shares finished the week down 7.1% at $19.25, after a 7.8% decline Friday. Of the 29 analysts who cover Symantec, two have buy ratings on the stock, 25 have hold ratings, and two have sell ratings. Following earnings, analysts’ average share-price target fell to $21.05 from $23.36, according to FactSet data. Cowen analyst Gregg Moskowitz, who has an underperform rating on the stock, called it “another highly disappointing quarter” for Symantec. Jefferies analyst John DiFucci, who has a hold rating, said the company faces notable challenges in its enterprise business, namely its SEP 14 endpoint protection product and the Blue Coat Secure Web Gateway business. In a note, DiFucci said “in endpoint, the company faces a multitude of private upstarts as competitors offering modern solutions that are competitive with SEP 14. Similarly, in the Secure Web Gateway market, the company continues to face direct competition from companies such as Zscaler and iboss, and indirect competition from the next-generation firewall vendors offering URL filtering functionalities that are considered ‘good enough’ to meet the needs of some enterprises.”


GDPR: What's really changed so far?

While some users will have chosen to give their consent, many will have withdrawn it and others may not have been able to explicitly give it as emails were lost in old inboxes or junk mail folders -- for organisations, that led to the same result as opting out. "The opt-in environment can only have reduced business volume in the activity of direct marketing -- it can't have made it go up, it can only make it go down," said Stewart Room, lead partner for GDPR and data protection at PwC. "What it has done is it's increased awareness. There was more outreach done on data protection in the months of May and June 2018 in Europe than has ever been done in the entirety of the world in the history of data protection," said Room. While there's a focus on organisations like Facebook and Google, which are well known for using data as a product for generating revenue, they're far from the only ones which have been hit by GDPR.


‘Moneyball’ing data – A closer look at how churn and propensity models work


So how does a propensity-to-buy model work? Similar to the churn model, it looks at past behavior, attributes, demographics, sales data, etc. of the best customers in your training data - the ones you want more of. For example, suppose there is a set of a thousand customers who are your real cash cows and spend $1,000+ on your merchandise every month. This becomes the protagonist that you refer to and compare the rest of your training data set against. Let's say that one of the patterns the model detected was that the majority of the customers who bought $1k+ of merchandise were loyal to one specific brand in your store. This purchase pattern becomes a base for you to start marketing to others who have bought that specific brand but are in the $700-per-month bucket. (What do you market to them? Look at the basket of the $1k+ customers.) This is just one example. Propensity models can slice and dice your data to look at attributes, behaviors, and patterns so counterintuitive that a human might never see a connection between them.
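A minimal sketch of the bucket-and-compare logic described above; the customer records, brand name and spend thresholds are all made up for illustration:

```python
# Hypothetical customer records: (customer_id, monthly_spend, favourite_brand)
customers = [
    ("c1", 1200, "Acme"), ("c2", 1100, "Acme"), ("c3", 1500, "Acme"),
    ("c4", 700, "Acme"), ("c5", 720, "Other"), ("c6", 690, "Acme"),
]

# Step 1: profile the "protagonist" segment - the $1,000+ spenders.
top = [c for c in customers if c[1] >= 1000]
brand_share = sum(1 for c in top if c[2] == "Acme") / len(top)

# Step 2: if the pattern is strong, market to the ~$700 bucket that shares it.
targets = []
if brand_share > 0.5:
    targets = [cid for cid, spend, brand in customers
               if 650 <= spend < 1000 and brand == "Acme"]
```

A real propensity model would score many attributes at once rather than one hand-picked pattern, but the shape is the same: profile the segment you want more of, then find look-alikes one rung below.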


The impact of cloud migration strategies on security and governance

Most of these decisions are about governance and risk management. With lift and shift, the application functionality is pretty clear, but bringing that out to the cloud introduces data risks and technical risks. Data controls may be insufficient, and the application’s architecture may not be a good match for cloud, leading to poor performance and high cost. One group of SaaS applications stems from ‘shadow IT’. The people that adopt them typically pay little attention to existing risk management policies. These can also add useless complexity to the application landscape. The governance challenges for these are obvious: consolidate and make them more compliant with company policies. Another group of SaaS applications is the reincarnation of the ‘enterprise software package’. Think ERP, CRM or HR applications. These are typically run as a corporate project, with all its change management issues, except that you don’t have to run it yourself.


Oracle vs. Hadoop


Despite sophisticated caching techniques, the biggest bottleneck for most Business Intelligence applications is still the ability to fetch data from disk into memory for processing. This limits both the system's processing and its ability to scale - to quickly grow to deal with increasing data volumes. As there is a single server, it also needs expensive redundant hardware to guarantee availability. This will include dual redundant power supplies, network connections and disk mirroring, which, on very large platforms, can make this an expensive system to build and maintain. Compare this with the Hadoop distributed architecture below. In this solution, the user executes SQL queries against a cluster of commodity servers, and the entire process runs in parallel. As the effort is distributed across several machines, the disk bottleneck is less of an issue, and as data volumes grow, the solution can be extended with additional servers to hundreds or even thousands of nodes. Hadoop has automatic recovery built in, such that if one server becomes unavailable, the work is automatically redistributed among the surviving nodes, which avoids the huge cost overhead of an expensive standby system.
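The distribute-then-recover idea can be illustrated with a toy simulation; plain Python dictionaries stand in for a real Hadoop cluster, and the node names and data are invented:

```python
# Toy illustration of Hadoop-style parallelism and recovery (not real Hadoop):
# data is partitioned across "nodes", each node aggregates its own shard, and
# if a node fails its shard is reassigned to the survivors before the merge.

def run_query(partitions, failed=()):
    survivors = [n for n in partitions if n not in failed]
    work = {n: list(shard) for n, shard in partitions.items() if n not in failed}
    # Round-robin the failed nodes' shards onto surviving nodes.
    for i, name in enumerate(failed):
        work[survivors[i % len(survivors)]] += partitions[name]
    # Each node computes a partial sum; the partials are then merged ("reduced").
    return sum(sum(shard) for shard in work.values())

partitions = {"node1": [1, 2], "node2": [3, 4], "node3": [5, 6]}
total = run_query(partitions)                          # all nodes healthy
recovered = run_query(partitions, failed=("node2",))   # node2's work redistributed
```

The query result is identical with or without the failed node, which is the point: availability comes from redistribution, not from a standby server.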


10 Dark Web warning signs that your organization has been breached

In the wake of seemingly constant high profile breaches, organizations are taking precautions to protect against cyberattacks, including raising security budgets and educating employees. However, the cost of a breach can be enough to significantly harm a company's finances and reputation: The average total cost of a data breach is $3.86 million, according to a recent Ponemon Institute report. The ongoing risk of attack has led some organizations to seek new ways to proactively monitor the Dark Web for lost or stolen data, according to a Wednesday report from Terbium Labs. ... Dark Web and clear web sites like Pastebin are a dumping ground for personal, financial, and technical information with malicious intent, the report said. There is often a motivation behind these posts, such as political beliefs, hacktivism, vigilantism, or vandalism. For example, the executive of a wealth management firm was included in a large-scale dox as the result of their political contributions, the report noted.


Agile: Reflective Practice and Application


Focusing on your own local efficiency can lead to focusing on what is not needed, which at best does nothing for the larger system and at worst makes the larger system less efficient. The obsession with coding efficiency in particular kills a great many software products. I see teams actually proud of a growing pile of stories in need of testing, or a team dedicated to front-end UI proud of having endless features complete against mocks and of how the back-end teams can't keep up. Sadly, these teams seem oblivious to the fact that they are not adding value to the system. Let me give an example that a friend of mine shared recently: my friend was baking cakes for an event and needed to bake 10 cakes, but only had one oven. Her husband offered to help out, so she asked him to measure out the ingredients for the cakes in advance so that it would be quicker to get the next cake in the oven. When she came to get the ingredients for a cake, they were not ready: her husband had optimised for himself and not for the goal.


An IT operating model for the digital age

Consider a typical IT team – generally, all tech staff will sit in their own division, removed from the rest of the business because it is easier to track, manage and budget their work. What happens, then, if the head of customer experience has a request? It is unlikely that customer experience teams, which have different key performance indicators (KPIs), will have much interaction with IT. The result is two frustrated parties lacking a common language and unable to deliver innovation at the pace required by customers and the wider business.  The challenge is to reorganise team structures in a way that allows innovation to flourish. In the era of digital transformation 1.0, that meant a bolt-on or “bi-modal” approach to digital, essentially giving a dedicated team the resources and licence to operate at pace, while the rest of the business continued plodding along in a traditional environment. It is not a bad place to start to get digital initiatives prioritised, but the reality is that “digital” now impacts every transaction and every touchpoint.



Quote for the day:


"If you don't understand that you work for your mislabeled 'subordinates,' then you know nothing of leadership. You know only tyranny." -- Dee Hock


Daily Tech Digest - August 03, 2018

Edge networking was only one of the areas of growing interest revealed in the study. Another hot technology is intent-based networking (IBN), which basically employs automation, analytics, intelligent software and policies that let network administrators define what they want the network to do. Cisco and Juniper, along with startups such as Apstra, have made IBN technology a relatively new industry buzzword, and the study bears that out: more than half of the network professionals surveyed are familiar with intent-based networking (54%), and one-third of them work at companies with IT budgets of more than $1 billion. "It's not surprising then that only 3% report adoption of an intent-based network and 8% are beginning to execute an intent-based networking strategy, including investing in SDN [software-defined networking], virtualization, machine learning, model-based APIs and security tools. A larger pool (38%) have not yet considered this strategy but plan to begin research in the next 12 months," Network World wrote.



Ending the estrangement: Why the CIO and the CMO need to collaborate


Breaking down data silos and supporting a compelling customer experience are at the top of the list. Creating a single view of the customer and personalizing communications involves more than building out APIs to connect internal and external data sets. Customer identity management needs to be centralized, so that updates to one part of a profile are instantly and automatically reflected in all the databases in which that customer's data is held. This is a major marketing requirement, but the execution is up to IT. Marketing also needs IT's help to navigate new data privacy standards such as the EU's General Data Protection Regulation (GDPR). ... It is up to IT to determine the right balance between personalization and privacy without compromising marketing's effectiveness or breaking the law. Good models for this are the IT departments in highly regulated industries, such as financial services and healthcare, that have already succeeded at putting terabytes of customer data in the hands of their marketers while remaining compliant with myriad regulations. In sum, marketing can no longer go it alone with its digital agenda.



Unit Testing With Mockito

TDD (Test-Driven Development) is an effective way of developing a system by incrementally adding code and writing tests. Unit tests can be written using the Mockito framework. In this article, we explore a few areas of writing unit tests using Mockito. We use Java for the code snippets here. Mocking is a way of producing dummy objects, operations, and results as if they were real scenarios. This means there are no real database connections and no real server up and running; the mocks mimic them, so the code paths are still exercised and produce a result that can be compared with the expected result and asserted. Mockito is a framework that facilitates mocking in tests. Mockito objects are a kind of proxy object standing in for operations, servers, and database connections. We use dummies, fakes, and stubs wherever applicable for mocking, and JUnit together with Mockito for our unit tests.
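The article's snippets are in Java with Mockito; the same mocking idea can be sketched with Python's standard-library `unittest.mock` instead, where a stubbed "database" lets the logic under test run with no real connection. The `fetch_user` method and its return shape are invented for the example:

```python
from unittest import mock

def get_user_name(db, user_id):
    # In production this would hit a real database; under test, db is a mock.
    row = db.fetch_user(user_id)
    return row["name"].title()

fake_db = mock.Mock()
fake_db.fetch_user.return_value = {"name": "ada lovelace"}  # stubbed query result

result = get_user_name(fake_db, user_id=42)        # runs with no real DB
fake_db.fetch_user.assert_called_once_with(42)     # the interaction is verifiable too
```

This mirrors Mockito's `when(...).thenReturn(...)` stubbing and `verify(...)` interaction checks: the dummy stands in for the dependency, yet the line under test is still covered and its result asserted.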


Audi to test 5G use cases in car production


Audi and Ericsson believe that many potential characteristics of the emerging 5G standard – notably its ability to run faster, low-latency, high-capacity, highly secure mobile networks – lend themselves to complex, automated production environments such as a car factory. In Germany, the trend towards digitisation of industrial production is known as Industry 4.0, and it is a key government initiative spearheaded by the German Ministry of Education and Research (BMBF) and the Ministry for Economic Affairs and Energy (BMWI) under chancellor Merkel. The first phase of the collaboration will see Audi and Ericsson working together on a latency-critical application using wireless production robots equipped with a gluing application for bodywork construction. Eventually, the Audi lab – which, in recent years, has explored big data in supply chain logistics and augmented reality in engine assembly, among other things – will be equipped with a 5G-enabled, simulated production environment that mirrors Audi’s real-life production line in nearby Ingolstadt.


5 Google Assistant tasks that will make your work life easier

Google Assistant is, without question, the most powerful and user-friendly virtual assistant on the market. Powered by AI, Assistant can help you with so many things: from answering questions to scheduling to making reservations to getting the latest weather from Mars—and even helping you with your bedtime routine. Yet for most users, the depth and breadth of what Assistant can do goes largely untapped. Why? Because it can do so much. With that in mind, I thought I'd share with you five tasks that can make your busy life a bit easier. Of course everyone's idea of "easier" varies, so I'm going to attempt to make these as broad and universal as possible ... at least within the realm of IT. Keep in mind, this is about making your life a bit easier, not more productive. Whether you're shopping for a family, your department, a client, a job, or yourself, it can be a daunting task to keep track of what you need to purchase ... especially when you're on the go. Driving to a client? The last thing you need is to pull out that phone to remind yourself to pick up Cat5 cable. Instead, use Google Assistant.


PSD2: Blessing or Curse for Banks?


PSD2 is a new European regulation that forces all banks in the European Union to open up their systems to outside players. Banks have to offer three APIs free of charge to all third parties approved by the ECB: Accounts, Transactions and Payments. By forcing banks to open up a number of their core systems, the ECB hopes to stimulate innovation in the financial industry through an open API ecosystem. PSD2 brings a lot of challenges and investments for banks without any financial compensation in return: they have to invest in an API gateway, API security and modernizing some of their core systems to expose APIs, and they have to offer the same performance for their APIs as for their existing banking app and website. So, as a bank, you can definitely consider PSD2 a legal obligation, similar to GDPR. On the other hand, you can also see it as a first step in opening up your core systems and becoming a digital player. If you were already planning to pursue an open API business strategy, then the PSD2 investments are necessary anyway. PSD2 forces you to offer the three APIs for free, but it doesn't prevent you from monetizing other APIs.
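The three mandated API families can be pictured as a small routing table. The paths and naming below are hypothetical, not taken from PSD2 itself or any bank's real specification:

```python
# Illustrative endpoint layout for the three PSD2-mandated API families.
OPEN_BANKING_APIS = {
    "accounts": "/v1/accounts",                        # list a customer's accounts
    "transactions": "/v1/accounts/{id}/transactions",  # account history
    "payments": "/v1/payments",                        # initiate a payment
}

def route(name, **params):
    # Fill path parameters such as the account id into the template.
    return OPEN_BANKING_APIS[name].format(**params)

url = route("transactions", id="acct-123")
```

Third parties approved by the regulator would call these endpoints through the bank's API gateway; anything beyond these three families is where a bank is free to monetize.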


Your hacked devices are being used for cyber crime says FBI

"Devices in developed nations are particularly attractive targets because they allow access to many business websites that block traffic from suspicious or foreign IP addresses. Cyber actors use the compromised device's IP address to engage in intrusion activities, making it difficult to filter regular traffic from malicious traffic," said the alert. IoT devices make easy targets for attackers because many are still shipped with poor security, often enabling attackers to gain access using default usernames and passwords, or by using brute-force attacks to guess passwords - and that's if the devices have authentication processes in the first place. When security loopholes are uncovered in IoT devices, some vendors will push out firmware and software updates to prevent the vulnerabilities being exploited - but given how many smart devices are connected to the internet and then forgotten about, there is no guarantee that users will apply the patches required to protect them from attacks.
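A defender-side sketch of the default-credential weakness the alert describes: audit a device's configured login against a list of known factory defaults. The credential list here is invented for illustration, not drawn from any real vendor:

```python
# Factory-default credential pairs (illustrative, not a real vendor list).
DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"), ("root", "12345")}

def uses_default_credentials(username, password):
    # A device still configured with factory defaults is an easy target
    # for the credential-stuffing attacks the FBI alert warns about.
    return (username, password) in DEFAULT_CREDS

flagged = uses_default_credentials("admin", "admin")   # rotate immediately
safe = uses_default_credentials("ops", "S7r0ng-pass")
```

Running a check like this across an IoT fleet is a cheap way to find the devices that were deployed and then forgotten.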


How to identify a high-performing tech job candidate: 5 traits

If companies want high-performance employees, they must foster an environment where those individuals can flourish. "Regardless of what business you're in, if you want to improve a team it's critical that employees are engaged," said Cameron Smith, senior global director at Genesys. "Gallup's 2017 State of the American Workplace report found, 'Business or work units that score in the top quartile of their organization in employee engagement have nearly double the odds of success when compared with those in the bottom quartile.'" To fundamentally improve a team, supervisors and business executives need to step up. Employees can't be engaged in a company that isn't worth engaging in. "Instead, it's more of a two-way street, with companies playing a large role in fostering talent," said Smith. "The competence of an employee's supervisor, making sure appropriate workloads are assigned, and company culture all play a role in keeping staff performing at a high-level."


Manage APIs with connectivity-led strategy to cure data access woes


Once DevOps teams deliver microservices and APIs, they see the value of breaking down other IT problems into smaller, bite-size chunks. For example, they get a lot of help with change management, because one code change does not impact a massive, monolithic application. The code change just impacts, say, a few services that rely on a piece of data or a capability in a system. APIs make applications more composable. If I have an application that's broken down into 20 APIs, for example, I can use any one of those APIs to fill a feature or a need in any other application without impacting each other. You remove the dependencies between other applications that talk to these APIs. Overall, a strong API strategy allows software development to move faster, because you don't build from the ground up each time. Also, when developers publish APIs, they create an interesting culture dynamic of self-service.


Apache OpenWhisk vulnerability targets IBM Cloud Functions

According to PureSec's tests, an intruder could exploit the vulnerability to insert malicious code with the same permissions as the serverless function it replaced. Specifically, a remote attacker could overwrite the source code of a vulnerable function executed in a runtime container and influence subsequent executions of the same function in the same container. The attacker could then extract confidential customer data, such as passwords or credit card numbers; modify or delete data; mine cryptocurrencies; and more, Segal said. Other OpenWhisk-based serverless platforms, such as Adobe I/O, were not impacted by the vulnerability because a provider may opt not to use the runtime images provided by Apache OpenWhisk, said Rodric Rabbah, co-creator of OpenWhisk and recent co-founder of CASM, a stealth startup focused on serverless computing. OpenWhisk accepts user functions and then dynamically injects that code into Docker container images; a vendor can provide its own images, for example to provide a runtime that contains libraries that are important for their organization.



Quote for the day:


"Without deviation from the norm, progress is not possible." -- Frank Zappa


Daily Tech Digest - August 02, 2018

How You Can Bridge the IT Training Gap

"Organizations must ensure they’re creating opportunities for staff to get to know the business beyond just their department," noted Timothy Wenhold, chief innovation officer at Power Home Remodeling, a national home remodeling firm. "When we onboard new hires, we have them spend two weeks shadowing every department, regardless of their level and years of experience," he said. "This gives the staff the direction needed to align their technical training goals so that they match the business’ needs." Ideally, there should always be a mix of different types of training. "The organization may want to carry out some type of assessment prior to the training to understand what areas should be addressed over others," suggested Ben Jordan, a security specialist with cybersecurity firm GreyCastle Security. "After trainings are completed, employees should be given the opportunity to give feedback about the training." "Whether IT training happens in a classroom, on the computer, on the job, or on your own — internally, externally or a mixture of both — all this matters less than ensuring that training is a recurring program and not a one-time, easily forgotten session," commented Thomas LaMonte, a senior analyst with tech research firm Gartner Digital Markets.



Mexico's fintech industry is on fire

Passed in early March 2018, Mexico’s fintech law received support from every major party, passing with 75 percent of the votes. And though it did place some restrictions on the space, the law was overwhelmingly supportive of the industry as a whole. The law even provided a loose definition of digital assets: "...the representation of value registered electronically and used by the public as a means of payment for all types of legal acts and whose transfer can only be carried out through electronic means." This is important because it opened the door for fintech companies to utilize cryptocurrencies in remittance transactions, a space which accounted for over $28 billion coming into the country, representing 10 percent of Mexico’s total GDP growth in 2017. Before, these payments carried significant fees and could take days to process, but with the new law, citizens can now access these services through fintech institutions utilizing cryptocurrencies at a lower rate and with much faster processing times.


Microsoft rejiggers Windows 10 Enterprise subscriptions, pricing

Changes to Windows 10 Enterprise were spelled out in some detail, even though new pricing was not disclosed. "For Windows, we're taking steps to recalibrate the price and rename the per device/per user offers, optimizing on our strategy of Microsoft 365," Microsoft wrote in an FAQ. "Part of this is about clarity," said Wes Miller, an analyst with Kirkland, Wash.-based Directions on Microsoft, talking about licensing. But he also said the changes, both in pricing and nomenclature, are further efforts by Microsoft to move customers to the licensing model where rights are tied to users, not to devices. Server-based desktops, for example, are only possible under Microsoft's per-user licensing, Miller pointed out. Windows 10 Enterprise E3 and Windows 10 Enterprise E5 debuted in 2016, when Microsoft began selling subscriptions to the operating system, specifically Windows 10 Enterprise, the operating system's top-tier version. Unlike Microsoft's legacy licensing - in which the operating system is licensed on a per-device basis - the E3 and E5 subscriptions are per-user. A licensed user could work at any of five allowed devices equipped with Windows 10 Enterprise.


DNS: Strengthening the Weakest Link

New specifications were defined in 2005 to address DNS’s lack of security. DNS Security Extensions (DNSSEC) provide origin authentication, data integrity and authenticated denial of existence. However, the specifications do not address availability or confidentiality. The main goal of DNSSEC was to preclude DNS spoofing and DNS cache poisoning. DNSSEC adoption remains a long-term challenge and implementation has been slow. According to ISOC, only about 0.5% of zones in .com are signed. That’s because, compared to DNS, DNSSEC is complex, introduces computation and communication overhead, and requires significant infrastructure changes for organizations. IT organizations should make DNS infrastructure protection top of mind due to the absence of built-in security mechanisms in the DNS protocol. Specifically, DNS security requires rethinking perimeter security. Many organizations address DNS security by provisioning a DNS firewall and/or competent DNS servers, leaving the perimeter unattended.
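What origin authentication buys can be sketched conceptually. Note the big simplification: real DNSSEC uses public-key signatures published as RRSIG/DNSKEY records, so the shared-secret HMAC below is only a stand-in for the verification step, and the key and record are invented:

```python
import hashlib
import hmac

# Stand-in zone key; real DNSSEC publishes asymmetric keys as DNSKEY records.
ZONE_KEY = b"example-zone-key"

def sign(rrset):
    # The zone operator signs the record set it serves.
    return hmac.new(ZONE_KEY, rrset, hashlib.sha256).hexdigest()

def verify(rrset, signature):
    # A validating resolver rejects any answer whose signature does not match.
    return hmac.compare_digest(sign(rrset), signature)

record = b"www.example.com. A 93.184.216.34"
sig = sign(record)

legit = verify(record, sig)                            # genuine answer passes
spoofed = verify(b"www.example.com. A 6.6.6.6", sig)   # poisoned answer fails
```

This is exactly the property that defeats cache poisoning: an attacker can inject a forged answer, but without the zone's key they cannot produce a signature that validates.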


How to identify and protect high-value data in the enterprise


The definition of high-value data is not one-size-fits-all, as we all define our data differently. When considering what is high-value data versus just regular data, it is important to take a step back and use a holistic, risk-based approach: classify your data based on what can impact you the least to the most. Consider adding a few factors to your data classification formula, such as the value of the data, the consequences of its loss or exposure, and the likelihood of occurrence, and ensure that you measure and define your data on a consistent basis. Using the above approach and examples, take a deep breath and two steps back. Close your eyes and list a few data assets around you. Classify them in your head, spin it a few times, and then write them down. Make sure you are not trying to capture all of the data at once, as doing so can be a dangerous move and will probably overheat your brain; limiting your scope is the key.
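A minimal sketch of the risk-based scoring described above; the 1-to-5 ratings, weights and tier thresholds are illustrative assumptions, and every organization would calibrate its own:

```python
def classify(value, consequence, likelihood):
    # Each factor is rated 1 (low) to 5 (high); the product is the risk score.
    score = value * consequence * likelihood
    if score >= 48:
        return "high-value"
    if score >= 16:
        return "sensitive"
    return "regular"

# Hypothetical assets, scored consistently with the same formula:
assets = {
    "customer_payment_db": classify(value=5, consequence=5, likelihood=3),
    "internal_wiki": classify(value=2, consequence=2, likelihood=4),
    "public_brochure": classify(value=1, consequence=1, likelihood=5),
}
```

The point is consistency: a fixed formula applied to a limited scope of assets beats trying to classify everything at once by gut feel.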


How GDPR Could Turn Privileged Insiders into Bribery Targets

GDPR mandates hefty penalties for companies that are breached. Penalties can reach as high as 4% of a violator's annual revenue. (Remember, Google and Facebook are already facing $9 billion in fines.) This means that in many cases, penalties will far outweigh the actual cost of a breach, which criminals know. Rather than auction stolen data to fellow crooks for pennies or try to extract a ransom to decrypt it, criminals will start to ransom stolen data back to the organizations they stole it from in exchange for not exposing it publicly. The extortion price will be substantially higher than what could be earned on the Dark Web but significantly lower than an actual GDPR breach fine. Paying extortion may create an ethical dilemma for companies, but it will make smart business sense, as the payment will be much lower than the financial penalties. Privileged insiders are central to this scenario. Cybercriminals will be motivated to bribe them, as holders of the kingdom's keys, into giving up their credentials. Once criminals hold these credentials, they will have an opportunity to earn payouts far beyond anything seen in the past.


Why innovation requires transformational leadership

We must continuously build and challenge our assumptions at the same time, and let our direction and momentum be dictated by that process, one informed by what we know about today and, as far as we can predict, about tomorrow. Unlike traditional 'strategy', this creates a much better readiness to change direction when required, rather than clinging to what worked a couple of years ago. That brings us to another important element of transformational leadership: the change line is not, and never will be, set in stone. When you think about it, that makes sense, right? Your ambition for tomorrow is based on your knowledge and ability today. As your abilities grow and develop, as new technologies come on stream and as customer demands change, it is only natural that your future ambitions will shift based on today's scenario. Here again, those leaders who seek to develop problem-solving flexibility within their organisations are the ones more likely to come out on top. And if you're not going to be flexible, then watch out for the 74% of leaders who, research suggests, are looking to be disruptors in their own sectors.


5 Artificial Intelligence Business Lessons From The Masters


Both IBM's Dinesh Nirmal and O'Reilly's Ben Lorica said preparing data for mathematical models is the primary bottleneck for AI. Nirmal's keynote focused on operationalizing AI; he described how real-world machine learning reveals assumptions embedded in business processes and in the models themselves that cause expensive and time-consuming misunderstandings. Data hygiene has been a critical failure point that has thwarted analytics efforts since the dawn of time. However, it's an even bigger issue as companies look to incorporate lots of data from various internal and third-party databases. IBM talked about the need for preparing data but also for having a structure for AI model management. In a meeting, Ben Lorica noted there is a role within the AI/data science discipline called data engineer that assists in preparing data for the data scientists to use in the algorithm training process. Even in 2018, we're still trying to eliminate the garbage-in, garbage-out problem.


Feds Announce Arrests of 3 'FIN7' Cybercrime Gang Members

"FIN7 is one of the most sophisticated and aggressive malware schemes in recent times, consisting of dozens of talented hackers located overseas," the Justice Department says in a fact sheet. The scale of FIN7's operations has been significant. In the U.S. alone, FIN7 allegedly stole "more than 15 million customer card records from over 6,500 individual point-of-sale terminals at more than 3,600 separate business locations," the Justice Department says. Many businesses have sought to better secure their payment card systems and networks in light of large intrusions in recent years affecting T.J. Maxx, Target, Home Depot and many others. But their efforts have not been fully effective. Indeed, the U.S. continues to suffer a payment card breach epidemic centered not just on restaurants, but also retailers and hotels. The problem is compounded by the ease of procuring card-scraping malware, designed to infect POS systems, as well as backdoor exploitation tools - such as the Carbanak backdoor - from underground cybercrime forums.


Preventing the next digital black swan: The auditor, the CISO and the C-Suite

On the surface, digital black swans may seem unforeseeable, but if you dig a little deeper, you’ll generally discover that many of these incidents could have been prevented. For instance, in the Equifax breach, hackers exploited a vulnerability that was publicly disclosed two months prior to the attack. If Equifax had installed the patch in a timely manner, this breach would likely have been prevented. The key to preventing digital black swans is carefully putting critical controls in place. There are a number of controls that companies can use to reduce the odds of experiencing a major cyberattack. For example, Equifax suffered from faulty vulnerability management. The credit reporting company had ample time to install a routine security update that would have prevented the cyber incident. Poor security practices at Equifax were systemic. Shortly after the breach, it was revealed that one of the company’s online employee portals could be accessed using the default credentials of “admin” as both the username and password. This simple negligence put millions of Americans’ data at great risk.



Quote for the day:


"Take time to deliberate; but when the time for action arrives, stop thinking and go in." -- Andrew Jackson


Daily Tech Digest - August 01, 2018

What is WebAssembly? The next-generation web platform explained
WebAssembly, developed by the W3C, is in the words of its creators a “compilation target.” Developers don’t write WebAssembly directly; they write in the language of their choice, which is then compiled into WebAssembly bytecode. The bytecode is then run on the client—typically in a web browser—where it’s translated into native machine code and executed at high speed. WebAssembly code is meant to be faster to load, parse, and execute than JavaScript. When WebAssembly is used by a web browser, there is still the overhead of downloading the WASM module and setting it up, but all other things being equal WebAssembly runs faster. WebAssembly also provides a sandboxed execution model, based on the same security models that exist for JavaScript now. Right now, running WebAssembly in web browsers is the most common use case, but WebAssembly is intended to be more than a web-based solution. Eventually, as the WebAssembly spec shapes up and more features land in it, it may become useful in mobile apps, desktop apps, servers, and other execution environments.



Improving Testability of Java Microservices with Container Orchestration & Service Mesh


This article shows how container orchestration provides an abstraction over service instances and facilitates replacing them with mock instances. On top of that, service meshes enable us to re-route traffic and inject faulty responses or delays to verify our services' resiliency. We will use a coffee shop example application that is deployed to a container orchestration and service mesh cluster; we have chosen Kubernetes and Istio as the example environment technology. Let’s assume that we want to test the application’s behavior without considering other, external services. The application runs and is configured in the same way as in production, so that later on we can be sure it will behave in exactly the same way. Our test cases will connect to the application through its well-defined communication interfaces. External services, however, should not be part of the test scenario. In general, test cases should focus on a single object under test and mask out all the rest. Therefore, we substitute the external services with mock servers.
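As a rough, language-agnostic illustration of the mocking idea (the endpoint and payload are invented; in the article's setup the mock would be a container the orchestrator routes to), a test can stand up a throwaway HTTP stub in place of the external service:

```python
# Minimal sketch: a local HTTP mock stands in for an external service,
# so the object under test talks to a well-defined stub.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class MockBarista(BaseHTTPRequestHandler):
    def do_GET(self):
        # Always answer with a canned, deterministic response.
        body = json.dumps({"order": "espresso", "status": "ready"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging during tests
        pass

server = HTTPServer(("localhost", 0), MockBarista)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://localhost:{server.server_port}/orders/42"
reply = json.loads(urlopen(url).read())
print(reply["status"])  # the test exercises the mock, not the real service
server.shutdown()
```

In a Kubernetes environment the same substitution happens at the service level: the mock container is registered under the external service's name, and the application under test needs no changes at all.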


Three steps to improve data fidelity in enterprises


Data fidelity requires the contextual evaluation of data in terms of security. This means examining data objects within the context of the environment in which they were created. In order to gather this data, you must not only re-examine what you deem important, but do so within the context of the tasks you are attempting to support. The task-support piece is critical because it bounds the problem space in which you can work. If the problem space is not bounded, all solutions will remain brittle point solutions that continue to fail when new problems are introduced. The ways systems can fail seem endless, but the ways systems can perform correctly are limited. This characteristic is key in any analysis that requires accurate predictions. Coincidentally, this same characteristic is often overlooked when attempting to accurately predict outcomes in the cyber domain. Three disciplines can assist in creating the boundaries and gathering the contextual data required to ensure data fidelity: dependency modeling, resiliency and reliability.


Key steps to success with self-service analytics

Gartner predicts that by 2020 the number of data and analytics experts in business units will grow at three times the rate of those in IT units. With that in mind, isn’t creating a culture that values data an absolute imperative? Creating a community of practice (COP) is not as simple as ‘training’ often sounds. Just as Agile methods can quickly turn ‘tragile’ or ‘fragile’ if the team isn’t bought into the approach, self-service will fail if there isn’t a data-driven culture that champions best practices. A COP uses training first to promote consumption by the business, and second to build SMEs who will champion best practices for future builds. All areas of the enterprise are involved in creating this community: technical SMEs, novice developers and business consumers all interact during technical and tool-agnostic sessions. To further growth and development across varying levels of BI maturity, smaller break-out sessions are used to connect business units with similar use cases or audiences, so they can work together on their BI solutions. By creating a community of practice, you are fostering a culture that understands BI best practices and is encouraged to hone and develop new skills.


Which two companies will lead the enterprise Internet of Things?

The biggest opportunities, the survey said, were in platforms supporting manufacturing and service applications. These enterprise IoT platforms, according to data and analytics firm GlobalData, “have become important enablers across a wide swathe of enterprise and industrial operations” by helping businesses become more productive, streamline their operations, and gain incremental revenues by equipping their devices and products with IoT sensors that collect a wide variety of environmental, usage, and performance data. The platforms are designed to help businesses collect, filter, and analyze data in a variety of applications that can help organizations make data-driven business, technology, and operational decisions. But which eIoT platforms are best positioned to lead the “dynamic and highly competitive” eIoT market? To find out, U.K.-based GlobalData conducted a “comprehensive analysis … with profiles, rankings, and comparisons of 11 of the top global platforms,” including Amazon, Cisco, GE, Google, HPE, Huawei, IBM, Microsoft, Oracle, PTC, and SAP.


AI can deliver 'faster better cheaper' cybersecurity

"We need to be able to make good cybersecurity services accessible to small and medium businesses, and consumers, and so we see a great opportunity in that regard," Ractliffe said. "Bluntly, we can see 'better faster cheaper' means of delivering cybersecurity through artificial intelligence and automation." Australia's defence scientists are also turning to AI techniques in the military's increasingly complex networked environment. "When we look at a system like a warship, it is now completely networked ... so that in itself creates a vulnerability," said Australia's Chief Defence Scientist Dr Alex Zelinsky at the Defence Science and Technology Group (DSTG). The internet is a "best effort" network. Malicious actors can slow down network traffic, or even divert it to where it can be monitored. This can happen in real time, and the challenge is how to detect that, and respond as quickly as possible. "I think that's where the AI elements come in," Zelinsky said. But one of the challenges of using AI in a protective system, or in the potential offensive systems that Zelinsky hinted that DSTG is working on, is explainability.


Digital trust: Security pros, business execs and consumers see it differently

“We are at a crossroads in the information age as more companies are being pulled into the spotlight for failing to protect the data they hold, so with this research, we sought to understand how consumers feel about putting data in organizations’ hands and how those organizations view their duty of care to protect that data,” said Jarad Carleton, industry principal, Cybersecurity at Frost & Sullivan. “What the survey found is that there is certainly a price to pay – whether you’re a consumer or you run a business that handles consumer data – when it comes to maintaining data privacy. Respect for consumer privacy must become an ethical pillar for any business that collects user data.” Responses to the survey showed that the Digital Trust Index for 2018 is 61 points out of 100, a score that indicates flagging faith from consumers surveyed in the ability or desire of organizations to fully protect user data. The index was calculated based on a number of different metrics that measure key factors around the concept of digital trust, including how willing consumers are to share personal data with organizations and how well they think organizations protect that data.


Disruption: The True Cost of an Industrial Cyber Security Incident

The IoT threat facing industrial control systems is expected to get worse. In late 2016, Gartner estimated that there would be 8.4 billion connected things worldwide in 2017. The global research company said there could be approximately 20.5 billion web-enabled devices by 2020. An increase of this magnitude would give attackers plenty of new opportunities to leverage vulnerable IoT devices against industrial control systems. Concern over flawed IoT devices is justified. Attackers can misuse those assets to target industrial environments, disrupt critical infrastructure and jeopardize public safety. Those threats notwithstanding, many professionals don’t feel that the digital threats confronting industrial control systems are significant. Others are overconfident in their abilities to spot a threat. For instance, Tripwire found in its 2016 Breach Detection Study that 60 percent of energy professionals were unsure how long it would take automated tools to discover configuration changes in their organizations’ endpoints or for vulnerability scanning systems to generate an alert.


How to evolve architecture with a reactive programming model


At the top level, the reactive model demands that enterprise architects think in terms of steps rather than flows. Each step is a task that is performed by a worker, an application component or a pairing of the two. Steps are invoked by a message and generate one or more responses. For example, a customer number has to be validated, meaning it must be associated with an active account. This step might be a part of a customer order, an inquiry, a shipment or a payment. Historically, enterprise architects might consider this sequence to be a part of each of the application flows cited above. In the reactive programming model, it's essential to break out and identify the steps. Only after that should architects compose them into higher-level processes. It's difficult to work with line organizations to define steps because they tend to think more in terms of workers and roles, which dictated the flow models of the past. If you're dealing with strict, top-down EA, you'd derive steps by looking at the functional components of the traditional tasks, such as answering customer inquiries.
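A minimal sketch of the step-as-message idea, with invented step names, account IDs and prices: each step is a small handler invoked with a message and emitting an enriched message, and only the top-level composition knows the flow.

```python
# Illustrative only: steps are independent message handlers; flows are
# composed from them at the top level, not baked into the steps.
ACTIVE_ACCOUNTS = {"C-100", "C-200"}  # hypothetical account store

def validate_customer(msg):
    # The same validation step can serve orders, inquiries, shipments, payments.
    return {**msg, "valid": msg["customer"] in ACTIVE_ACCOUNTS}

def price_order(msg):
    return {**msg, "total": msg["qty"] * 9.50}  # invented unit price

def process(msg, steps):
    # Compose steps into a higher-level flow; short-circuit on failure.
    for step in steps:
        msg = step(msg)
        if msg.get("valid") is False:
            break
    return msg

order = {"customer": "C-100", "qty": 3}
result = process(order, [validate_customer, price_order])
print(result["valid"], result["total"])  # True 28.5
```

Because each step only sees a message, the same `validate_customer` can be reused by any flow that needs it, which is exactly the decomposition the reactive model asks for.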


How Contract Tests Improve the Quality of Your Distributed Systems


In order to fail fast and start getting immediate feedback from our application, we do test-driven development and start with unit tests. That’s the best way to start sketching the architecture we’d like to achieve. We can test functionalities in isolation and get immediate feedback from those fragments. With unit tests, it’s much easier and faster to figure out the reason for a particular bug or malfunction. Are unit tests enough? Not really, since nothing works in isolation. We need to integrate the unit-tested components and verify that they work properly together. A good example is asserting whether a Spring context can be started properly and all required beans get registered. Let’s come back to the main problem: integration tests of the communication between a client and a server. Are we bound to use hand-written HTTP/messaging stubs and coordinate any changes with their producers? Or are there better ways to solve this problem? Let’s take a look at contract tests and how they can help us.
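As a hedged sketch of the consumer-driven contract idea (the schema and field names are invented; real tooling such as Pact or Spring Cloud Contract generates stubs and producer verifications from the contract for you): the consumer pins down the response shape it relies on, and the same assertions are replayed against the producer.

```python
# Illustrative contract check in plain Python: one contract, two sides.
CONTRACT = {
    "status": str,
    "order_id": int,
}

def satisfies_contract(payload, contract=CONTRACT):
    # Every contracted field must be present with the agreed type;
    # extra fields the consumer doesn't know about are tolerated.
    return all(
        field in payload and isinstance(payload[field], ftype)
        for field, ftype in contract.items()
    )

# Consumer side: test against a stub derived from the contract.
stub_response = {"status": "ACCEPTED", "order_id": 42}
assert satisfies_contract(stub_response)

# Producer side: the real handler's output is checked against the same
# contract, so a breaking change fails the producer's build first.
producer_response = {"status": "ACCEPTED", "order_id": 42, "extra": "ignored"}
print(satisfies_contract(producer_response))  # True: extra fields are allowed
```

The point is that both parties verify against one shared artifact, which removes the need to hand-coordinate every stub change between teams.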



Quote for the day:


"If you don’t like the road you’re walking, start paving another one." -- Dolly Parton


Daily Tech Digest - July 31, 2018

How disaster recovery can serve as a strategic tool

“You can count on us” is a popular business mantra. But what does that mean exactly? Consider this thought experiment: You and a competitor are hit with the same incident, but one of you gets back up more quickly. Fast recovery will give you a competitive advantage, if you can pay the price. “The smaller your RTO and RPO values are, the more your applications will cost to run,” says Google Cloud in a how-to discussion of DR. Any solution should also be well tested. “Your customers expect your systems to be online 24x7,” says Scott Woodgate, director, Microsoft Azure, in this press release. ... A solid DR plan can also facilitate transformational-based efficiencies. Let's say your leadership has business reasons for migrating to a new data center or transitioning to a hybrid cloud. Part of planning a migration is prepping for user experience and systems being down. If you are willing to use your DR assets during the transition, once the cloud or new physical sites are ready, you can fail back from DR, thus minimizing disruption. As an IT pro, you may not want to define these events as disasters, but business leaders prefer using existing resources to investing in swing gear.



The cybersecurity incident response team: the new vital business team

We live and do business in a world fraught with cyber risks. Every day, companies and consumers are targeted with attacks of varying sophistication, and it has become increasingly apparent that everyone is considered fair game. Organisations of all sizes and industries are falling victim, and the cyber risk is quickly becoming one of the most prevalent threats. When disruptions do occur from cyberattacks or other data incidents they not only have a direct financial impact, but an ongoing effect on reputation. For example, Carphone Warehouse fell victim to a cyberattack in 2015, which resulted in the compromising of data belonging to more than three million customers and 1,000 employees. While it suffered financial losses from the remedial costs, which included a £400,000 fine from the Information Commissioner’s Office (ICO), it also led to consumers questioning whether their data was truly secure with the retailer and if it was simply safer to shop elsewhere. That loss in consumer confidence is incredibly difficult to claw back, particularly at a time when grievances can be aired on social media and be shared hundreds or thousands of times.


Managing IoT resources with access control

The first place to start in establishing an effective IoT security strategy is by ensuring that you are able to see and track every device on the network. Issues from patching to monitoring to quarantining all start with establishing visibility from the moment a device touches the network. Access control technologies need to be able to automatically recognize IoT devices, determine if they have been compromised and then provide controlled access based on factors such as the type of device, whether or not it is user-based and, if so, the role of the user. And they need to be able to do this at digital speeds. Another access control factor to consider is location. Access control devices need to be able to determine whether an IoT device is connecting remotely and, if not, where in the network it is logging in from. Different access may be required depending on whether a device is connecting remotely, or even from the lobby, a conference room, a secured lab or a warehouse facility. Location-based access policies are especially relevant for organizations with branch offices or an SD-WAN system in place.
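As a hedged illustration of location- and state-aware admission (device types, zones and segment names below are invented, not from the article), the policy logic an access control system applies might reduce to something like:

```python
# Illustrative policy table: (device_type, zone) -> allowed network segment.
POLICY = {
    ("hvac_sensor", "warehouse"): "ot_segment",
    ("badge_reader", "lobby"):    "facilities_segment",
    ("laptop", "conference"):     "guest_segment",
}

def admit(device_type, zone, compromised=False):
    # Compromised devices are isolated before any location rule applies.
    if compromised:
        return "quarantine"
    # Unknown device/location combinations are denied by default.
    return POLICY.get((device_type, zone), "deny")

print(admit("hvac_sensor", "warehouse"))    # ot_segment
print(admit("hvac_sensor", "lobby"))        # deny: right device, wrong location
print(admit("laptop", "conference", True))  # quarantine
```

Production systems evaluate far richer signals (user role, posture, time of day) and must do so at wire speed, but the default-deny, quarantine-first shape of the decision is the same.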


Artificial intelligence: Why a digital base is critical

The adoption of AI, we found, is part of a continuum, the latest stage of investment beyond core and advanced digital technologies. To understand the relationship between a company’s digital capabilities and its ability to deploy the new tools, we looked at the specific technologies at the heart of AI. Our model tested the extent to which underlying clusters of core digital technologies (cloud computing, mobile, and the web) and of more advanced technologies (big data and advanced analytics) affected the likelihood that a company would adopt AI. As Exhibit 1 shows, companies with a strong base in these core areas were statistically more likely to have adopted each of the AI tools—about 30 percent more likely when the two clusters of technologies are combined. These companies presumably were better able to integrate AI with existing digital technologies, and that gave them a head start. This result is in keeping with what we have learned from our survey work. Seventy-five percent of the companies that adopted AI depended on knowledge gained from applying and mastering existing digital capabilities to do so.


The 5 Clustering Algorithms Data Scientists Need to Know

Clustering is a Machine Learning technique that involves the grouping of data points. Given a set of data points, we can use a clustering algorithm to classify each data point into a specific group. In theory, data points that are in the same group should have similar properties and/or features, while data points in different groups should have highly dissimilar properties and/or features. Clustering is a method of unsupervised learning and is a common technique for statistical data analysis used in many fields. In Data Science, we can use clustering analysis to gain some valuable insights from our data by seeing what groups the data points fall into when we apply a clustering algorithm. Today, we’re going to look at 5 popular clustering algorithms that data scientists need to know and their pros and cons! K-Means is probably the best-known clustering algorithm. It’s taught in a lot of introductory data science and machine learning classes. It’s easy to understand and implement in code! Check out the graphic below for an illustration.
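The K-Means loop can indeed be sketched in a few lines of plain Python (1-D points and hand-picked initial centroids here, purely for illustration): assign each point to its nearest centroid, recompute each centroid as the mean of its cluster, and repeat until nothing moves.

```python
# Minimal 1-D K-Means sketch using only the standard library.
def kmeans(points, centroids, iters=100):
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged: assignments are stable
            break
        centroids = new
    return centroids, clusters

cents, clus = kmeans([1, 2, 3, 10, 11, 12], [0.0, 20.0])
print(cents)  # [2.0, 11.0]
```

Real implementations (e.g. scikit-learn's `KMeans`) handle multi-dimensional data, random restarts and smarter initialization such as k-means++, but the assign/update loop is exactly this.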



Ransomware Attack Leads to Discovery of Lots More Malware

The investigation concluded the unauthorized persons would have had the ability to access all of the Blue Springs computer systems, the clinic notes. "However, at this time, we have not received any indication that the information has been used by an unauthorized individual." The U.S. Department of Health and Human Service's HIPAA Breach Reporting Tool website, or "wall of shame," indicates that Blue Springs on July 10 reported the breach as a hacking/IT incident involving its electronic medical records and network server that exposed data on nearly 45,000 individuals. Blue Springs' front desk receptionist, who did not want to be identified by name, told Information Security Media Group Friday that the investigation into the ransomware attack had not yet determined the source of the ransomware attack, the source of the other malware discovered, whether the other malware might have been present on the practice's systems before the ransomware attack, or whether the infections were all part of the same attack. She said the practice chose to "rebuild" its systems and did not pay a ransom.


CIOs reveal their security philosophies

“Overly strict security creates a different risk — throttling information exchange and creativity can threaten a company’s competitive viability,” Johnson adds. “Poorly managed reactions to breaches — and all firms have been breached in some way — can lead to other business deterioration.” “Security is as much a human challenge as it is a technical challenge,” he concludes. “Dependable cybersecurity requires a three-part strategy of 1) superb technical implementation of the basics, 2) consistent education aimed at increasing awareness of employees, vendors, and executives, and 3) building a security team that is as motivated, skilled, and innovative as the bad guys.” In this edition of Transformation Nation, CIOs delineate their own IT security philosophies — dispatches from the front lines of cybersecurity strategy. The implications of a breach for corporate reputation, economic well-being, and personal security are immense. Through these accounts, CIOs reveal the many tension points in application and communication that they grapple with every day


GDPR means it is time to revisit your email marketing strategies

No matter how private you think your emails are, every email you send and receive is stored on a remote hard drive you have no control over. If your email provider doesn’t encrypt your emails end to end (most don’t), all company emails are at risk. Encrypting employee email communications plays a huge role in maintaining GDPR compliance. The average employee won’t think twice about emailing co-workers about sensitive issues that may include data from the business database. For example, someone might send a customer’s credit card information to the sales department for processing a return. To protect your internal emails and maintain GDPR compliance, buying general encryption services isn’t enough. You need to know exactly how and when the data is and isn’t being encrypted. Not all encryption services are complete. For instance, if you’re using Microsoft 365, you’ve probably heard of a data protection product called Azure RMS. This product uses TLS security to encrypt email messages the moment they leave a user’s device. Unfortunately, when the messages reach Microsoft’s servers, they are stored unprotected.


Google, Cisco amp-up enterprise cloud integration

The Cisco/Google combination – which is currently being tested by an early access enterprise customer, according to Google – will let IT managers and application developers use Cisco tools to manage their on-premises environments and link them up with Google’s public IaaS cloud, which offers orchestration, security and ties to a vast developer community. In fact, the developer community is one area the companies have targeted recently by announcing the Cisco & Google Cloud Challenge, which is offering prizes worth over $160,000 to develop what Cisco calls “game-changing” apps using Cisco’s Container Platform with Google Cloud services. Cisco says the goal is to bring together its DevNet community and Google’s Technology Partners to build new hybrid-cloud applications for enterprise customers. Cisco VP & CTO of DevNet Susie Wee wrote in a blog that in preparation for the Challenge, DevNet is offering workshops, office hours, and sandboxes using Cisco Container Platform with Google Cloud services to help customers and developers learn how to connect cloud data from a private cloud to the Google Cloud Platform, or even data from edge devices, to run analytics and employ machine learning.


Why 'Sophisticated' Leadership Matters -- Especially Now


When challenged by complexity, many leaders try to implement best practices such as lean management, restructuring or re-engineering. Such investments may indeed be necessary, but they are rarely sufficient. This is because the root cause of most stalls is that the leader has run up against the limits of his or her leadership sophistication. In other words, the leader is failing to reinvent him- or herself as the new kind of leader the organization now needs. This usually means that the leader doesn’t fully appreciate that intelligence, hard work and technical knowledge must now take a back seat to enhanced personal, interpersonal, political and strategic leadership capabilities. In other words, you will stall not because the complex challenges you face require changes in your organization, but because the sophisticated challenges require change in yourself. So how can you become a more sophisticated leader? Try pulling back, elevating your viewpoint and figuring out how you can take yourself to the next level.



Quote for the day:



"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley


Daily Tech Digest - July 30, 2018


Chamber of Digital Commerce Sets Out ICO and Token Guidelines
Former Securities and Exchange Commission (SEC) commissioner and CEO of Patomak Global Partners Paul Atkins comments, “These principles are an important tool for responsible growth and smart regulation that strikes the right balance between protecting investors while allowing for innovation in this new technological frontier. We think it is important to explain the unique attributes of blockchain-based digital assets, which are not all strictly investment based, and provide guidance to consumers, regulators and the industry.” The whitepaper is broken up into three distinct sections. The first offers a comprehensive overview of current and future regulations to give investors a stronger understanding of securities laws in the U.S., Canada, the U.K. and Australia. The second part showcases industry-developed principles for both trading platforms and token sponsors to better promote safe and legal business practices and lower the risks to organizers and traders. 


The Toughbook T1 has a 5-inch screen and runs Android 8.1 Oreo. It allows retail workers, warehouse employees, or transportation and logistics employees to quickly scan barcodes for better productivity. The device also has a built-in barcode reader and high-speed connectivity to integrate with resource management systems and databases. The FZ-T1 is available in two models—one with Wi-Fi connectivity only, and another offering voice and data connection on AT&T and Verizon networks, as well as data connectivity through P.180, Panasonic's purpose-built network. The Toughbook L1 is a professional-grade tablet that can be mounted in a vehicle or used as a handheld device. It has a 7-inch screen and runs Android 8.1 Oreo. It includes an integrated barcode reader that is field-configurable for landscape or portrait modes. The L1 will be released in a Wi-Fi only model that supports data service on Verizon, AT&T and Panasonic's P.180.


IBM banks on the blockchain to boost financial services innovation

On Monday, the tech giant said a proof-of-concept (PoC) design has been created for the platform, dubbed LedgerConnect. The system is a distributed ledger technology (DLT) platform intended for enterprise financial services companies including banks, buy and sell-side firms, FinTechs and software vendors. The goal for LedgerConnect is to bring these companies together to deploy, share, and use blockchain-based services hosted on the network in order to make adoption more cost effective for companies, as well as easier to access and to deploy. Services will include Know Your Customer (KYC) processes, sanctions screening, collateral management, derivatives post-trade processing and reconciliation and market data. "By hosting these services on a single, enterprise-grade network, organizations can focus on business objectives rather than application development, enabling them to realize operational efficiencies and cost savings across asset classes," IBM says.


Don’t Let your Data Lake become a Data Swamp


To cope with the growing volume and complexity of data and alleviate IT pressure, some are migrating to the cloud. But this transition—in turn—creates other issues. For example, once data is made more broadly available via the cloud, more employees want access to that information. Growing numbers and varieties of business roles are looking to extract value from increasingly diverse data sets, faster than ever—putting pressure on IT organizations to deliver real-time data access that serves the diverse needs of business users looking to apply real-time analytics to their everyday jobs. However, it’s not just about better analytics—business users also frequently want tools that allow them to prepare, share, and manage data. To minimize tension and friction between IT and business departments, moving raw data to one place where everybody can access it sounded like a good move. The concept of the data lake, first coined by James Dixon in 2010, envisioned a large body of raw data in a more natural state, where different users come to examine it, delve into it, or extract samples from it.


3 Ways Automation & Integration Is Disrupting the HIT Status Quo

Integrated patient engagement solutions empower patients along the continuum of their healthcare experience, pre-visit to post-visit, with features such as self-scheduling, online access to consent forms and personal information, and communications with their providers via a user-friendly patient portal. And by engaging patients with this end-to-end lifecycle approach, practices can increase patient satisfaction rates, patient retention and referrals. ... “We wanted something that was easy to use for the patients and staff, straightforward, less expensive than our current solution, available to all our providers, and that would offer greater transparency to patients, particularly on which insurances we take,” notes Jared Boundy, MHA, director of operations for Washington-based Dermatology Arts. “We also felt that it needed to integrate with the other systems we already had in place. It had to be adaptable, too, as we didn’t want to pay an arm and a leg every time we added a provider or a location.”


AI Software Development: 7 things you need to Know

AI in Software Development
At the initial stage, machine learning needs substantial computing resources, while the data processing stage is not so challenging. Previously, this varying requirement in computing resources was difficult for those who wanted to implement machine learning but were unwilling to make big one-time investments in adequately powerful servers. As cloud technology emerged, satisfying this requirement became easy. AI software development services can rely on either a corporate or a commercial cloud, e.g. Microsoft Azure or AWS. ... As artificial intelligence techniques mature, more people are interested in using these practices to control complex real-world systems that have hard deadlines. ... AI is a huge field, and with such a wide area to cover, it is difficult to recommend just one programming language. There are a variety of programming languages that can be used, but not all offer good value for your effort. Considering their simplicity, prototyping capabilities, usefulness, usability and speed, the languages considered the best options for AI include Python, Java, Lisp, Prolog and C++.


Connecting whilst building – benefits of the IoT in construction

Connecting whilst building – benefits of the IoT in construction image
While IIoT opens the door to a host of new opportunities such as cost reduction, worker safety, quality improvement and business growth, the prospect of gearing up for the next industrial revolution can cause apprehension. Implementing IIoT solutions can change the way IT interacts with production systems and field devices, but if this is matched with the right approach to connectivity, and a recognition of the potential of the servitisation model, it needn't keep construction companies awake at night. Connectivity is the lifeblood of the IoT, and this is just as true in an industrial setting. Field connectivity is indispensable for conveying commands to field systems and devices, in addition to acquiring data for further analysis. It tends to be a cross-cutting, cross-layer function in IIoT systems, as both edge and cloud modules are able to access field data directly using one of a large number of protocols. These include OPC-UA (Unified Architecture), MQTT (Message Queue Telemetry Transport), DDS (Data Distribution Service), oneM2M and various other protocols, as illustrated in the Industrial Internet Connectivity Framework.
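Protocols such as MQTT and DDS share a publish/subscribe model: field devices publish readings to named topics, and any number of edge or cloud modules subscribe to the same data independently. The sketch below illustrates that pattern in plain Python; the broker class and topic names are illustrative inventions, not part of any real protocol stack.

```python
# Minimal sketch of topic-based publish/subscribe, the pattern behind MQTT
# and DDS. The Broker class and "site/crane1/load" topic are hypothetical.

from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every module subscribed to this topic.
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
edge_log, cloud_log = [], []

# An edge analytics module and a cloud dashboard both consume the same field data.
broker.subscribe("site/crane1/load", lambda t, p: edge_log.append(p))
broker.subscribe("site/crane1/load", lambda t, p: cloud_log.append(p))

broker.publish("site/crane1/load", {"tonnes": 4.2})
print(edge_log, cloud_log)   # both modules received the reading
```

The decoupling is the point: the field device publishing the load reading knows nothing about which edge or cloud consumers exist, which is what makes the function cross-cutting and cross-layer.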


Pushing the Boundaries of Computer Vision

Although augmented reality has occasionally been described as a bridge to true virtual reality, AR is actually more difficult to implement in some ways. Nevertheless, the technology has evolved rapidly in recent years, thanks in part to advances in computer vision. At the core of AR is a challenge relevant to other fields of computer vision: object recognition. Small variations in objects can prove challenging for image recognition software, and even a change in lighting can cause mismatches. Experts at Facebook and other companies have made tremendous progress through deep learning and other artificial intelligence fields, and these advances have the potential to make AR, and other vision fields dependent on object recognition, more powerful in the coming years. Another transformative use case is predicted to be agriculture. Agricultural science is charged with feeding the world, and computers have been making major strides in the field in recent years. Because farms are so large and often remote, image recognition enables individual farmers to be far more effective. Computer vision capable of detecting fruit can help farmers track progress and determine the right time for harvest.
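The lighting problem mentioned above is easy to demonstrate: uniformly brightening an image shifts every raw pixel value, so a naive pixel-distance comparison reports a large mismatch, while comparing after subtracting each image's mean brightness recovers the match. The 4-pixel "images" below are made-up values for illustration.

```python
# Toy illustration of why lighting changes break naive image matching,
# and how a simple brightness normalization fixes it. Pixel values invented.

def distance(a, b):
    """Euclidean distance between two flat pixel arrays."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def normalize(img):
    """Subtract mean brightness so lighting shifts cancel out."""
    mean = sum(img) / len(img)
    return [p - mean for p in img]

original = [50, 80, 120, 90]
brighter = [p + 60 for p in original]   # same scene under stronger light

print(distance(original, brighter))                        # 120.0 — raw mismatch
print(distance(normalize(original), normalize(brighter)))  # 0.0 — perfect match
```

Real recognition systems go far beyond mean subtraction, but the principle is the same: deep networks learn features that stay stable under the nuisance variations, lighting included, that defeat raw pixel comparison.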


Monitoring Your Data Center Like a Google SRE

Monitoring Your Data Center
The SLO (service-level objective) is used to define what SREs call the “error budget,” a numeric line in the sand. The error budget is used to encourage collective ownership of service availability and to blamelessly resolve disputes about balancing risk and stability. For example, if programmers are releasing risky new features too frequently and compromising availability, this will deplete the error budget. SREs can point to the at-risk error budget and argue for halting releases and refocusing coders on efforts to improve system resilience. This approach lets the organization as a whole balance speed and risk against stability effectively. Paying attention to this economy encourages investment in strategies that accelerate the business while minimizing risk: writing error- and chaos-tolerant apps, automating away pointless toil, advancing by means of small changes and evaluating “canary” deployments before proceeding with full releases. Monitoring systems are key to making this whole, elegant tranche of DevOps/SRE discipline work. It’s important to note that this has nothing to do with what kind of technologies you’re monitoring, the processes you’re wrangling or the specific techniques you might apply to stay above your SLOs.
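The error-budget arithmetic behind this "line in the sand" is simple: an availability SLO over a time window leaves a fixed allowance of downtime, and each incident or risky release spends part of it. The SLO value and incident durations below are example numbers, not figures from the article.

```python
# Sketch of error-budget arithmetic: a 99.9% availability SLO over a 30-day
# window allows 43.2 minutes of downtime. Numbers are illustrative examples.

def error_budget_minutes(slo, window_days=30):
    """Minutes of allowed unavailability for a given SLO over the window."""
    total_minutes = window_days * 24 * 60
    return (1 - slo) * total_minutes

budget = error_budget_minutes(0.999)          # 43.2 minutes per 30 days
spent = 15 + 20                               # two hypothetical incidents
remaining = budget - spent
print(round(budget, 1), round(remaining, 1))  # 43.2 8.2
```

When `remaining` approaches zero, that is the SRE's numeric argument for freezing risky releases and spending engineering time on resilience instead.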


Utilize microservices to support a 5G network architecture


A microservices architecture, rather than a monolithic one, is the ideal cloud-based architecture for 5G. Only microservices can properly support a 5G network architecture, because no set of monolithic applications can meet the requirements of responsiveness, flexibility, updatability and scalability that 5G demands. Virtualized network services also must adapt to new technologies and demands on the system as they come along. With a microservices-based architecture, this is a relatively easy task, accomplished via changes to individual microservices rather than the whole system. The technologies included in 5G will likely change rapidly after the initial rollout, so this kind of adaptability is a necessity. Additionally, signal-related expectations of 5G, such as high availability, require the kind of flexibility that microservices can deliver. According to NGMN, remote-location equipment should be self-healing, which means it requires flexible, built-in, AI-based diagnostic and repair software capable of at least re-establishing lost communication when isolated.
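The minimum self-healing behavior named above, re-establishing lost communication, usually amounts to retrying the link with a backoff between attempts. The sketch below simulates that with a `FlakyLink` class that fails a few times before recovering; both the class and the retry policy are illustrative inventions, not a real 5G or NGMN API.

```python
# Hedged sketch of self-healing reconnection: retry a failed link with
# exponential backoff until communication is re-established. FlakyLink
# simulates a connection that recovers after a few failures.

class FlakyLink:
    def __init__(self, failures_before_recovery):
        self.failures_left = failures_before_recovery

    def connect(self):
        if self.failures_left > 0:
            self.failures_left -= 1
            raise ConnectionError("link down")
        return "connected"

def reestablish(link, max_attempts=6, base_delay=0.5):
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return link.connect(), attempt
        except ConnectionError:
            # A real system would sleep(delay) here; doubling the delay
            # avoids hammering equipment that is still recovering.
            delay *= 2
    raise ConnectionError("gave up after %d attempts" % max_attempts)

status, attempts = reestablish(FlakyLink(failures_before_recovery=3))
print(status, attempts)   # connected 4
```

Packaging this logic as its own small service is exactly the microservices advantage the excerpt describes: the recovery policy can be updated or replaced without touching the rest of the system.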



Quote for the day:


"The People That Follow You Are A Reflection Of Your Leadership." -- Gordon Tredgold