Daily Tech Digest - September 03, 2019

Cloud 2.0: A New Era for Public Cloud

Security remains a primary concern for companies moving to the cloud, even though public cloud providers offer security capabilities like data classification tools and even whole cloud environments tailored to meet industry-specific specifications - both of which Deloitte names as vectors of cloud progress. “A lot of times, one of the first things companies do in the cloud is migrate existing apps, workloads and the data they operate on to the cloud. The security model in the cloud is rather different, and sometimes data and assets need to be secured in a more granular way, so data classification is part and parcel of a prudent migration to the cloud,” Schatsky says. ... “There are apps written in the old mode of app dev, and converting them to the world of cloud takes time, effort and a willingness to do so. Plus, it takes skill. It’s a nontrivial task. Those are the things that are slowing the process of moving everything to the cloud.” Schatsky agrees. “For a lot of companies, they’re dealing with incubating the skills they need to take full advantage of the cloud. When companies start by moving wholesale to the cloud, the biggest need they have is to just propagate the impact on their workflow and operating models that the cloud enables. You can’t rush that. It’s a human capital thing that takes time,” he says.


The Path to Modern Data Governance

It is worth noting that the longest list of activities is the people list. This is typical: having all of the right people, engaged in the right ways, is critical to data governance success. The processes and methods lists are tied for second longest. People, processes, and methods are at the center of effective data governance. The example shown in figure 3 illustrates the idea that we have selected a subset of the activities – not all of them – for initial planning. (The color coding here is different, mapping activities to projects.) To make modernization manageable and practical, it is important to make conscious decisions about what NOT to do. The selected activities are organized based on affinity – they seem to fit together and make sense as a project. They are also organized based on dependencies – what makes sense to do in what sequence. Note here that the activities in a single project don’t necessarily all come from the same layer of the framework. The bottom sequence in green, for example, includes two activities from the culture layer, one from the methods layer, and one from the people layer.
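As a hedged illustration of that "what makes sense in what sequence" idea, the short Python sketch below orders a handful of activities with a topological sort. The activity names and dependencies are invented stand-ins, not drawn from the framework itself.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# activity -> the activities it depends on (all names are hypothetical)
dependencies = {
    "define steward roles": set(),
    "train data stewards": {"define steward roles"},
    "adopt governance method": set(),
    "pilot governance project": {"train data stewards", "adopt governance method"},
}

# static_order() yields one valid sequence that respects every dependency
order = list(TopologicalSorter(dependencies).static_order())
print(" -> ".join(order))
```

Anything with no unmet dependency can run first (or in parallel), which is exactly the judgment the planning exercise makes by hand.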


Industry calls for joint participation to cement Australia's digital future


The report outlined how universities and publicly funded research agencies needed to reshape their research culture to safeguard and strengthen the country's digital workforce and capability pipeline, by placing substantially higher emphasis on industry experience, placements, and collaborations in hiring, promotion, and research funding. It also made recommendations on lifting teachers' skills in ICT-related topics and on increasing diversity, particularly the participation of women, while removing structural barriers that cause the loss of knowledge, talent, and educational investment from the ICT and engineering sectors. "Attracting high-quality international students to, and retaining them in, Australia after they graduate is a good way to expand the diversity of the ICT skill base and to promote greater international engagement, not least of which with the home countries of those people. We should make it easier to keep such people after the end of their formal studies," the report said. The report also recommended that the government undertake a future-readiness review of the Australian digital research sectors, and monitor, evaluate, and optimise the applied elements of the federal government's National Innovation and Science Agenda and the Australia 2030 Plan.


Is the tech skills gap a barrier to AI adoption?


Without the right workforce, organisations simply cannot tackle the technical challenges of a data-driven industry, and closing that gap can help reverse the inconsistencies and setbacks that plague data-led AI projects. With the right analytics platform, data capabilities can be put in the hands of the business experts who have both the context of the questions to solve and the data sources needed to deliver insights at speed. Trained data scientists will still be required, but a shortage of them does not mean a project can't be tested, iterated, and progressed. Existing employees should still be able to perform some data tasks despite not being experts: they are in the line of business, close to the questions, the data, and the leaders who need insight. Linking up data insight with that vital business knowledge is paramount to making the most of data analytics and fuelling business progress. What's more, getting data into the hands of the people is crucial to democratising AI and making advanced analytics accessible to everyone, rather than locked away by a 'priestly caste' of data scientists.


USBAnywhere Bugs Open Supermicro Servers to Remote Attackers


USBAnywhere stems from several issues in the way that BMCs on Supermicro X9, X10 and X11 platforms implement virtual media, the ability to remotely connect a disk image as a virtual USB CD-ROM or floppy drive. “When accessed remotely, the virtual media service allows plaintext authentication, sends most traffic unencrypted, uses a weak encryption algorithm for the rest and is susceptible to an authentication bypass,” according to the paper. “These issues allow an attacker to easily gain access to a server, either by capturing a legitimate user’s authentication packet, using default credentials, and in some cases, without any credentials at all.” Once connected, the virtual media service allows the attacker to interact with the host system as a raw USB device. This means attackers can attack the server in the same way as if they had physical access to a USB port, such as loading a new operating system image or using a keyboard and mouse to modify the server, implant malware, or even disable the device entirely. “Taken together, these weaknesses open several scenarios for an attacker to gain unauthorized access to virtual media,” according to Eclypsium.
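As a rough sketch of how a defender might gauge exposure, the snippet below checks whether hosts answer on the virtual media service's TCP port. The port number (623) reflects what Eclypsium reported for the Supermicro virtual media service, but treat it as an assumption, and the host addresses here are hypothetical; only scan systems you are authorized to test.

```python
import socket

VIRTUAL_MEDIA_PORT = 623  # assumed port of the Supermicro virtual media service


def is_port_open(host: str, port: int = VIRTUAL_MEDIA_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False


if __name__ == "__main__":
    for host in ["10.0.0.15", "10.0.0.16"]:  # hypothetical BMC addresses
        state = "EXPOSED" if is_port_open(host) else "closed"
        print(f"{host}: virtual media port {state}")
```

An open port alone does not prove the host is vulnerable, but any BMC service reachable from an untrusted network is worth isolating regardless.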


Risk mitigation is key to blockchain becoming mainstream

A solution in search of a problem, blockchain is often associated with cryptocurrency, which is, arguably, the single worst application of the “immutable” ledger that defines the technology. Supply chains are a much better use, due to the high levels of integrity and availability provided by a blockchain. A blockchain is essentially a piece of software, run on multiple computers (or nodes) that work together as participants of a distributed network to produce a record of transactions submitted to that network in a ledger. The ledger is made of blocks, produced when nodes run complex cryptographic functions, and each block is chained to its predecessor to form the blockchain. Nodes validate each block that is created to verify its integrity and ensure it has not been tampered with. If a majority of nodes validate the block, consensus is reached, confirming the recorded transactions to be true. The block is then added to the blockchain and the ledger is updated.
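The hash chaining and validation described above can be shown in a few lines of Python. This is a toy model only: there is no consensus protocol, signing, or proof-of-work, and the transaction data is invented.

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash a block's contents (excluding its own hash field)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


def add_block(chain: list, transactions: list) -> None:
    """Append a block carrying the previous block's hash, chaining them together."""
    prev = chain[-1]["hash"] if chain else "0" * 64  # genesis block has no parent
    block = {"index": len(chain), "prev_hash": prev, "transactions": transactions}
    block["hash"] = block_hash(block)
    chain.append(block)


def validate(chain: list) -> bool:
    """Re-check every block's hash and its link to the previous block."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


chain: list = []
add_block(chain, [{"from": "a", "to": "b", "amount": 5}])
add_block(chain, [{"from": "b", "to": "c", "amount": 2}])
print(validate(chain))  # True
chain[0]["transactions"][0]["amount"] = 500
print(validate(chain))  # False: tampering breaks the chain
```

Because each block embeds its predecessor's hash, altering any historical transaction invalidates every later block, which is the integrity property that makes the ledger useful for supply chains.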


MDM can tame and monetize IoT data explosion


To achieve success at large scale, Bonnet says a company's MDM system must allow for an agile delivery process. "It is almost impossible to be sure about the data structure, semantics, and governance process a company needs at the start, and predicting the future is hard, even impossible," he laments. The inability to know the future is the key reason for the agility mindset, and this is a vital awareness. "If the MDM system is not agile enough, then all the existing systems running in a company could be slowed in their ability to change. There is also a potential for poor integration with the MDM system, which will not improve data quality and may have the opposite effect," he continues. He suggests checking two points: first, the MDM system must be agile, without a rigid engineering process that could delay the delivery of the existing systems. This is what is called a "model-driven MDM", in which the data semantics drive a big part of the expected delivery in an automatic process.
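A minimal sketch of that model-driven idea, assuming a simple declarative model: validation behaviour is derived entirely from the model, so the model can evolve without re-engineering pipeline code. The field names and rules below are hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Field:
    name: str
    type_: type
    required: bool = True
    check: Callable[[Any], bool] = lambda v: True  # extra semantic rule


# The model is data, not code: changing it changes validation automatically.
CUSTOMER_MODEL = [
    Field("customer_id", str),
    Field("email", str, check=lambda v: "@" in v),
    Field("credit_limit", float, required=False, check=lambda v: v >= 0),
]


def validate(record: dict, model: list) -> list:
    """Return a list of violations derived entirely from the model."""
    errors = []
    for f in model:
        if f.name not in record:
            if f.required:
                errors.append(f"missing required field: {f.name}")
            continue
        value = record[f.name]
        if not isinstance(value, f.type_):
            errors.append(f"{f.name}: expected {f.type_.__name__}")
        elif not f.check(value):
            errors.append(f"{f.name}: failed semantic rule")
    return errors


print(validate({"customer_id": "C1", "email": "a@b.com"}, CUSTOMER_MODEL))  # []
print(validate({"customer_id": "C2", "email": "no-at-sign"}, CUSTOMER_MODEL))
```

When the semantics change, only the model list is edited; nothing in the validation machinery is touched, which is the agility Bonnet is arguing for.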


Data-Driven Design Is Killing Our Instincts


Design instinct is a lot more than innate creative ability and cultural guesswork. It’s your wealth of experience. It’s familiarity with industry standards and best practices. You develop that instinct from trial and error — learning from mistakes. Instinct is recognizing pitfalls before they turn into problems, recognizing winning solutions without having to explore and test endless options. It’s seeing balance, observing inconsistencies, and honing your design eye. It’s having good aesthetic taste, while knowing how to adapt your style on a whim. Design instinct is the sum of all the tools you need to make great design decisions in the absence of meaningful data. ... Not everything that can be counted counts. Not everything that counts can be counted. Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important. While you’re chasing a 2% increase in conversion rate you may be suffering a 10% decrease in brand trustworthiness. You’ve optimized for something that’s objectively measured, at the cost of goals that aren’t so easily codified.


How to Use Chaos Engineering to Break Things Productively

There are a number of variables that can be simulated or otherwise introduced into the process. These should reflect actual issues that might occur when an app is in use, prioritized by likelihood of occurrence. Problems that can be introduced include hardware-related issues like malfunctions or a server crash, as well as process errors related to sudden traffic spikes or sudden growth. For example, what might happen during the whole online shopping experience if a seasonal sale results in a larger than expected customer response? You can also simulate the effects of your server being the target of a DDoS attack designed to crash your network. Any event that would disrupt the steady state is a candidate for experimentation. Compare your results to the original hypothesis. Did the system perform as anticipated, exceed expectations, or produce worse results? This evaluation shouldn't be undertaken in a vacuum; it should include input from team members and the services that were used to conduct the experiment.
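A minimal sketch of such an experiment: define a steady-state hypothesis (here, an assumed p95 latency budget), inject latency faults into a stand-in request handler, and compare the measured result to the hypothesis. The handler, fault rate, and budget are all hypothetical; real chaos tooling injects faults at the infrastructure level rather than inside application code.

```python
import random
import statistics
import time

P95_BUDGET_MS = 200.0  # steady-state hypothesis: p95 latency stays under this


def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real request work


def chaotic_handle_request(fault_rate: float = 0.2) -> None:
    """Same request path, but with injected latency spikes."""
    if random.random() < fault_rate:
        time.sleep(random.uniform(0.2, 0.5))  # simulated degraded dependency
    handle_request()


def measure_p95_ms(handler, n: int = 100) -> float:
    """Return p95 latency in milliseconds over n requests."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        handler()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.quantiles(samples, n=20)[18]  # 95th percentile


baseline = measure_p95_ms(handle_request)
under_chaos = measure_p95_ms(chaotic_handle_request)
print(f"baseline p95: {baseline:.0f} ms, under chaos: {under_chaos:.0f} ms")
print("hypothesis held" if under_chaos < P95_BUDGET_MS else "hypothesis violated")
```

The point of the exercise is the comparison, not the fault itself: a violated hypothesis tells you where the steady state is fragile before a real incident does.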



Quote for the day:


"True leaders bring out your personal best. They ignite your human potential" -- John Paul Warren

