Daily Tech Digest - May 12, 2020

Is it time to believe the blockchain hype?

While much criticised at the outset and subsequently watered down to appease regulators, Libra has also triggered discussion around central bank digital currencies (CBDCs), with almost every major central bank announcing its intention to explore their possibilities. Among the numerous benefits of CBDCs, the most oft-repeated is addressing the decline in the use of cash, something which has accelerated this year with more shopping taking place online and bricks-and-mortar retailers ceasing to accept paper money. According to a recent report by campaign group Positive Money, the disappearance of cash would lead to an effective privatisation of money, with commercial banks holding an oligopoly over digital money and payment systems. Such a situation would also prove damaging for the unbanked population, which still totals an estimated 1.7 billion people worldwide. While these people cannot access current financial systems because they are unable to prove their identities, they could use digital currencies provided they have a mobile phone and an internet connection.



Banks failing to protect customers from coronavirus fraud


A paltry 13 out of the 64 banks accredited by the UK government for its Coronavirus Business Interruption Loan Scheme (CBILS) have bothered to implement the strictest level of domain-based message authentication, reporting and conformance – or Dmarc – protection to stop cyber criminals from spoofing their identity for use in phishing attacks. This means that 80% of accredited banks cannot say they are proactively protecting their customers from fraudulent emails, and 61% have no published Dmarc record whatsoever, according to Proofpoint, a cloud security and compliance specialist. Domain spoofing to pose as a government body or other respected institution, such as a provider of financial services, is a highly popular method used by cyber criminals to compromise their targets. Using this technique, they can make an illegitimate email appear to come from a completely legitimate email address, which neatly defeats one of the most obvious ways people have of spotting a phishing email – an address that does not match the institution in any way.
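
As a rough illustration of what a "published Dmarc record" means in practice, the sketch below checks whether a domain publishes one via a DNS TXT lookup. It assumes the dnspython package; the domain is a placeholder, and this is not Proofpoint's methodology.

```python
# Check whether a domain publishes a DMARC record (a TXT record at
# _dmarc.<domain>). Requires the dnspython package.
import dns.resolver

def has_dmarc(domain: str) -> bool:
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # no record published at all
    return any(b"v=DMARC1" in b"".join(r.strings) for r in answers)

print(has_dmarc("example.com"))  # placeholder domain
```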


Flattening The Curve On Cybersecurity Risk After COVID-19

This is an opportunity but also a big risk for them. Many of them know their digital business system is vital to helping them navigate this change. But periods of disruption, whether driven by good or bad circumstances, present opportunities for hackers. So that cybersecurity risk gap I talked about earlier between threats and defensibility isn’t going to close naturally; that curve isn’t flattening. New cybersecurity risks are going to continue to emerge, and defensive capabilities have to keep working to stay ahead. A common question that a lot of board members ask is, “Are we spending the right amount on cybersecurity?” That’s the wrong question. The right question is, “What do we need to protect, what’s the value of what we are trying to protect, and how secure is it for what we’re spending?” That’s their challenge heading into what could be massive waves of systemic change. The business value that their digital business systems drive is only increasing, and the threats to that value are only going to go up.


Architecture Decision for Choosing Right Digital Integration Patterns – API vs. Messaging vs. Event

A direct Application Programming Interface (API) allows two heterogeneous applications to talk to each other. For example, each time we use an app on our mobile devices, the app is likely making several API calls to various digital services. Direct APIs can be designed to be Blocking (Synchronous) or Non-Blocking (Asynchronous). Of these, Non-Blocking APIs are preferred to ensure resources are not tied up while the consumer waits for a response from the provider. Non-blocking APIs also help create an independently scalable integration model between API consumers and API providers ... A Message is fundamentally an asynchronous mode of communication between two applications — it is an indirect invocation, such that the two applications do not connect directly to each other. Thus, the Messaging technique decouples the consumer and provider, and removes the need for the provider to be available at the exact same point in time as the consumer. It also addresses the scalability limitations of the provider.
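
As a minimal sketch of the contrast (names and timings are illustrative, not from the article): the non-blocking call awaits a response without tying up a thread, while the message is simply placed on a queue, so the provider does not need to be available at the same moment.

```python
# Non-blocking (asynchronous) direct API call vs. indirect messaging,
# using only the Python standard library.
import asyncio
import queue

# Direct, non-blocking API: the consumer awaits without blocking a thread.
async def call_provider():
    await asyncio.sleep(0.1)  # stands in for a network round trip
    return {"status": "ok"}

response = asyncio.run(call_provider())

# Messaging: consumer and provider share only a queue (the "broker").
broker = queue.Queue()
broker.put({"event": "order_created", "id": 42})  # consumer enqueues and moves on
message = broker.get()  # provider picks it up whenever it is available
```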


Machine learning algorithms explained

Machine learning algorithms train on data to find the best set of weights for each independent variable that affects the predicted value or class. The algorithms themselves have variables, called hyperparameters. They’re called hyperparameters, as opposed to parameters, because they control the operation of the algorithm rather than the weights being determined. The most important hyperparameter is often the learning rate, which determines the step size used when finding the next set of weights to try when optimizing. If the learning rate is too high, gradient descent may overshoot the minimum and oscillate or diverge. If the learning rate is too low, progress may be so slow that training stalls and never completely converges. Many other common hyperparameters depend on the algorithms used. Most algorithms have stopping parameters, such as the maximum number of epochs, the maximum time to run, or the minimum improvement from epoch to epoch. Specific algorithms have hyperparameters that control the shape of their search. For example, a Random Forest Classifier has hyperparameters for minimum samples per leaf, max depth, minimum samples at a split, minimum weight fraction for a leaf, and about eight more.
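
For a concrete mapping of those names onto an implementation, here is how the Random Forest hyperparameters listed above appear in scikit-learn's RandomForestClassifier (the values are illustrative, not recommendations):

```python
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    min_samples_leaf=2,            # minimum samples per leaf
    max_depth=10,                  # maximum depth of each tree
    min_samples_split=4,           # minimum samples required to split a node
    min_weight_fraction_leaf=0.0,  # minimum weighted fraction of samples at a leaf
    n_estimators=100,              # one of the "about eight more"
)
```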


COVID-19 Impact on the Future of Fintech

Besides their age, scalability and financial condition, the outlook of many fintech organizations will also be driven by the product category they are in. This is especially true in the near term, when the impact of the pandemic on consumer behavior is expected to be the greatest. According to BCG, the negative impact of COVID-19 will be more severe for fintechs in international payments, unsecured and secured consumer lending, and small business lending, categories where risks may be highest. Fintech firms focused on B2B banking are believed to be less vulnerable as a group. ... As could be expected, technology providers were some of the early winners when COVID-19 hit, as traditional banking organizations scurried to deploy digital solutions to meet consumer demand. Many of the sales were initiatives that had already been agreed to but were not implemented until market conditions required immediate action. It will be interesting to see whether investment in technology and digital solutions continues as traditional financial institutions are forced to reduce costs.


Simplicity and Security: What Commercial Providers Offer for the Service Mesh


Whatever the maturity level, one of the advantages of a commercial offering is support. There’s no easy way to get advice or troubleshooting from purely open-source service meshes. For some organizations that doesn’t matter, but for others, the knowledge that there’s someone to call in case of a problem is critical — and might even be baked into corporate governance policies. One of the benefits of using a sidecar proxy service mesh with Kubernetes, Jenkins said, is that it allows a smaller central platform team to manage a large infrastructure, and it reduces the burden on application developers to manage anything related to infrastructure. Using a commercial service mesh provider lets organizations reduce the need to manage infrastructure internally even further, he said. Austin agreed that one of the things that makes a service mesh “enterprise-grade” is increased operational simplicity, making it as simple as possible for small platform teams to manage huge application suites. For enterprises, that translates to the ability to spend more engineering resources on feature development and creating business value, and less on infrastructure management.


Sacrificing Security for Speed: 5 Mistakes Businesses Make in Application Development

Data tends to be the most important and valuable aspect of modern web applications. Poor application design and architecture lead to data and security breaches. Application development teams generally assume that by providing the right authentication and authorization measures in the application, data will be protected. This is a misconception. The right measures for data security involve focusing on data integrity, fine-grained data access, and encrypting data at rest as well as in motion. In addition, data security needs to be looked at holistically, from the time a request is made to the time a response is sent back, across all layers of the application runtime. Today’s modern web applications are highly sophisticated and built with a strong focus on a simple user experience combined with high scalability. This combination can be challenging for application development teams from a security perspective. Most development teams focus only on individual silos when securing the application.
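
As one concrete example of "encrypting data at rest", here is a minimal sketch using the Fernet recipe from the Python cryptography package (authenticated symmetric encryption); key management and the other measures named above are out of scope.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, store this in a secrets manager
f = Fernet(key)

token = f.encrypt(b"customer record")  # ciphertext safe to persist at rest
plaintext = f.decrypt(token)           # raises InvalidToken if tampered with
```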


Google vs. Oracle: The next chapter


So, what next? Gesmer speculates: "We will have to see what the parties have to say on this issue when they file their briefs in August. However, a decision based on a narrow procedural ground such as the standard of review is likely to be attractive to the Supreme Court. It allows it to avoid the mystifying complexities of copyright law as applied to computer software technology. It allows the Court to avoid revisiting the law of copyright fair use, a doctrine the Court has not addressed in-depth in the 26 years since it decided Campbell v. Acuff-Rose Music, Inc. It enables it to decide the case on a narrow standard-of-review issue and hold that a jury verdict on fair use should not be reviewed de novo on appeal, at least where the jury has issued a general verdict." In other words, Oracle will lose and Google will win… for now. We still won't have an answer to the legal question that programmers want answered: to what extent, if any, does copyright cover APIs? For an answer to that, my friends, we may have to await the results of yet another Oracle vs. Google lawsuit. It may be wiser for Oracle to finally leave this issue alone. As Charles Duan, the director of Technology and Innovation Policy at the R Street Institute, a Washington DC non-profit think tank and Google ally, recently argued: Oracle itself is guilty of copying Amazon's S3 APIs.


2020 State of Testing Report

It is very difficult for us to define exactly what we "see" in the report. The best description for it might be a "feeling" of change, maybe even of evolution. We are seeing many indications reinforcing the increasing collaboration of test and dev, showing how the lines between our teams are getting blurrier with time. We are also seeing how the responsibility of testers is expanding, and the additional tasks that are being required of us in different areas of the team's work and challenges. ... I feel it makes testers think critically about the automation strategy that best suits their context and how to make it reliable and meaningful. The flip side that I see sometimes is that if their automation strategy is not smart enough (or, say, if their CI/CD infrastructure is lame), testers end up just writing more automation and spending enormous amounts of time maintaining it for the sake of keeping the pipeline green. These efforts hardly contribute to the user-facing quality of the product and add no meaningful value.



Quote for the day:


"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley


Daily Tech Digest - May 11, 2020

How to choose a cloud IoT platform

“The internet” is not an endpoint, of course, but an interconnected collection of networks that transmit data. For IoT, the remote endpoints are often located in a cloud server rather than in a single server inside a private data center. Deploying in a cloud isn’t absolutely necessary if all you’re doing is measuring soil moisture at a bunch of locations, but it can be very useful. Suppose that the sensors measure not only soil moisture, but also soil temperature, air temperature, and air humidity. Suppose that the server takes data from thousands of sensors and also reads a forecast feed from the weather service. Running the server in a cloud allows you to pipe all that data into cloud storage and use it to drive a machine learning prediction for the optimum water flow to use. That model could be as sophisticated and scalable as you want. In addition, running in the cloud offers economies. If the sensor reports come in once every hour, the server doesn’t need to be active for the rest of the hour. In a “serverless” cloud configuration, the incoming data will cause a function to spin up to store the data, and then release its resources. Another function will activate after a delay to aggregate and process the new data, and change the irrigation water flow set point as needed.
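
To make the serverless pattern concrete, here is a hedged sketch of such a function written as an AWS Lambda-style handler; the table name, event shape, and choice of DynamoDB are illustrative assumptions, not the article's design.

```python
# A function that spins up per sensor report, stores the reading, and
# then releases its resources. Assumes boto3 and a hypothetical
# DynamoDB table named "SensorReadings".
import boto3

table = boto3.resource("dynamodb").Table("SensorReadings")

def handler(event, context):
    # `event` carries one sensor report, e.g. {"sensor_id": "s-17", ...}.
    table.put_item(Item={
        "sensor_id": event["sensor_id"],
        "timestamp": event["timestamp"],
        # DynamoDB numbers must be Decimal, so store readings as strings
        # to keep the sketch simple.
        "soil_moisture": str(event["soil_moisture"]),
        "air_temperature": str(event["air_temperature"]),
    })
    return {"statusCode": 200}
```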



How to create a ransomware incident response plan


Companies should test an incident response plan -- ideally, before an incident, as well as on a regular basis -- to ensure it accomplishes its intended results. Using a tabletop exercise focused on testing the response to a ransomware incident, participants can use existing tools to test their effectiveness and determine if additional tools are necessary. Companies may want to have annual, quarterly or even monthly exercises to test the plan and prepare the business. These tests should involve all the relevant parties, including IT staff, management, the communications team, and the public relations (PR) and legal teams. Enterprises should also document which of their security tools have ransomware prevention, blocking or recovery functionality. Additional tests should be conducted to verify simulated systems infected with ransomware can be restored using a backup in a known-good state. While some systems save only the most recent version of a file or a limited number of versions, testing to restore the data, system or access to all critical systems is a good idea.


Report: Chinese-linked hacking group has been infiltrating APAC governments for years

Check Point has found three versions of the attack: infected RTF files, archive files containing a malicious DLL, and a direct executable loader. All three worm their way into a computer's startup folder, download additional malware from a command and control server, and go to work harvesting information. The report concludes that Naikon APT has been anything but inactive in the five years since it was discovered. "By utilizing new server infrastructure, ever-changing loader variants, in-memory fileless loading, as well as a new backdoor — the Naikon APT group was able to prevent analysts from tracing their activity back to them," Check Point said in its report. While the attack may not appear to be targeting governments outside the APAC region, examples like these should serve as warnings to other governments and private organizations worried about cybersecurity threats. One of the reasons Naikon APT has been able to spread so far is that it leverages stolen email addresses to make senders seem legitimate. Every organization, no matter the size, should have good email filters in place, and should train employees to recognize the signs of phishing and other email-based attacks.


Patterns for Managing Source Code Branches


With distributed version control systems like git, this means we also get additional branches whenever we further clone a repository. If Scarlett clones her local repository to put on her laptop for her train home, she's created a third master branch. The same effect occurs with forking in github - each forked repository has its own extra set of branches. This terminological confusion gets worse when we run into different version control systems as they all have their own definitions of what constitutes a branch. A branch in Mercurial is quite different to a branch in git, which is closer to Mercurial's bookmark. Mercurial can also branch with unnamed heads and Mercurial folks often branch by cloning repositories. All of this terminological confusion leads some to avoid the term. A more generic term that's useful here is codeline. I define a codeline as a particular sequence of versions of the code base. It can end in a tag, be a branch, or be lost in git's reflog. You'll notice an intense similarity between my definitions of branch and codeline. Codeline is in many ways the more useful term, and I do use it, but it's not as widely used in practice.


Image and object recognition
The recognition pattern is notable in that it was primarily the attempts to solve image recognition challenges that brought about heightened interest in deep learning approaches to AI, and helped to kick off this latest wave of AI investment and interest. The recognition pattern, however, is broader than just image recognition. In fact, we can use machine learning to recognize and understand images, sound, handwriting, items, faces, and gestures. The objective of this pattern is to have machines recognize and understand unstructured data. This pattern of AI is such a huge component of AI solutions because of its wide variety of applications. The difference between structured and unstructured data is that structured data is already labelled and easy to interpret, whereas unstructured data is where most entities struggle. Up to 90% of an organization's data is unstructured data. It becomes necessary for businesses to be able to understand and interpret this data, and that's where AI steps in. Whereas we can use existing query technology and informatics systems to gather analytic value from structured data, it is almost impossible to use those approaches with unstructured data.
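
As a minimal sketch of the image branch of this pattern, the snippet below classifies a photo with a pretrained convolutional network from torchvision; the file path is a placeholder, and the model choice is an illustrative assumption.

```python
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(pretrained=True)  # pretrained ImageNet classifier
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    logits = model(img)
print(logits.argmax(dim=1).item())  # index of the predicted ImageNet class
```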


Al Baraka Bank Sudan transforms into an intelligent bank with iMAL*BI


The solution comprises comprehensive data marts equipped with standard facts and dimensions, as well as progressive measures that empower the bank’s workforce to build ad-hoc dashboards, in-memory, to portray graphical representations of their data queries. It is rich in out-of-the-box dashboards covering financial accounting, retail banking, corporate banking, investments, trade finance and limits, in addition to C-level executive analytics boasting a base set of KPIs, dashboards and advanced analytics essential to each executive, with highly visual, interactive and collaborative dashboards backed by centralised metadata security. This strategic platform empowers bankers to make smarter, faster, and more effective decisions, improving operational efficiency. It also enables business agility while driving innovation, competitive differentiation, and profitable growth. The implementation covered the establishment of a comprehensive end-to-end data warehousing solution, an automated ETL process and a progressive data model.


The new cybersecurity resilience


While security teams and experts might have differing metrics for gauging resiliency, they tend to agree on the overarching need and many of the best practices to achieve it. “Resiliency is viewed by some to be the latest buzzword replacing continuity or recovery, but to me it really means placing the appropriate people, processes, and procedures in place to ensure you’re limiting the need for enacting a continuity or recovery plan,” says Shared Assessments Vice President and CISO Tom Garrubba. Resilient organizations share numerous traits. According to Accenture, they place a premium on collaboration – 79 percent say collaboration will be key to battling cyberattacks and 57 percent collaborate with partners to test resilience. “By adopting a realistic, broad-based, collaborative approach to cybersecurity and resilience, government departments, regulators, senior business managers and information security professionals will be better able to understand the true nature of cyber threats and respond quickly, and appropriately,” says Steve Durbin, managing director at the Information Security Forum (ISF).


Microsoft and Intel project converts malware into images before analyzing it

The Intel and Microsoft team said that resizing the raw image did not "negatively impact the classification result," and this was a necessary step so that the computational resources wouldn't have to work with images consisting of billions of pixels, which would most likely slow down processing. The resized images were then fed into a pre-trained deep neural network (DNN) that scanned the image (a 2D representation of the malware strain) and classified it as clean or infected. Microsoft says it provided a sample of 2.2 million infected PE (Portable Executable) file hashes to serve as a base for the research. Researchers used 60% of the known malware samples to train the original DNN algorithm, 20% of the files to validate the DNN, and the other 20% for the actual testing process. The research team said STAMINA achieved an accuracy of 99.07% in identifying and classifying malware samples, with a false positive rate of 2.58%. "The results certainly encourage the use of deep transfer learning for the purpose of malware classification," said Jugal Parikh and Marc Marino.
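
The core transformation is simple to sketch: treat a file's raw bytes as grayscale pixels, reshape them into a 2D image, and resize to a fixed input size. The width, target size, and file path below are illustrative assumptions, not Microsoft's published parameters.

```python
import numpy as np
from PIL import Image

def file_to_image(path, width=256, size=(224, 224)):
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = -(-len(data) // width)            # ceiling division
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[:len(data)] = data                  # pad the last row with zeros
    img = Image.fromarray(padded.reshape(height, width), mode="L")
    return img.resize(size)  # shrink so the DNN never sees billions of pixels

img = file_to_image("sample.exe")  # hypothetical input file
# `img` would then be fed to a pretrained DNN fine-tuned on labeled samples.
```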


Prepare for the future of distributed cloud computing

Enterprises need to support edge-based computing systems, including IoT and other specialized processing that has to occur near the data source. This means that while we spent the past several years centralizing processing and storage in public clouds, now we’re finding reasons to place some cloud-connected applications and data sources near where they can be most effective, all while still maintaining tight coupling with a public cloud provider. Companies need to incorporate traditional systems in public clouds without physical migration. If you consider the role of connected systems, such as AWS’s Outposts or Microsoft’s Azure Stack, these are really efforts to get enterprises to move to public cloud platforms without actually running physically in a public cloud. Other approaches include containers and Kubernetes that run locally and within the cloud, leveraging new types of technologies such as Kubernetes federation. The trick is that most enterprises are ill-equipped to deal with the distribution of cloud services, let alone move a critical mass of applications and data to the cloud.


Source Generators Will Enable Compile-time Metaprogramming in C# 9

Loosely inspired by F# type providers, C# source generators respond to the same aim of enabling metaprogramming but in a completely different way. Indeed, while F# type providers emit types, properties, and methods in-memory, source generators emit C# code back into the compilation process. Source generators cannot modify existing code, only add new code to the compilation. Another limitation of source generators is they cannot be applied to code emitted by other source generators. This ensures each code generator will see the same compilation input regardless of the order of their application. Interestingly, source generators are not limited to inspecting source code and its associated metadata, but they may access additional files. Specifically, source generators are not designed to be used as code rewriting tools, such as optimizers or code injectors, nor are they meant to be used to create new language features, although this would be technically feasible to some limited extent.



Quote for the day:


"You’ve got to get up every morning with determination if you’re going to go to bed with satisfaction." -- George Lorimer


Daily Tech Digest - May 10, 2020

Opinion: Responsible AI starts with higher education


“This new algorithm will need a lot of pictures of people. What if we use a morgue so we don’t have to worry about consent?” Although this is a fictitious example, modern-day tech workers often face similar questions. Why? Because the rise of artificial intelligence based on machine learning has created a new class of sociotechnical challenges. Now is the time for industry and universities to acknowledge these new challenges and step up to meet them. Since the beginning of the technology industry, educational institutions, legislatures, companies, and developers have worked to improve the quality of products and services. The resulting curricula, laws, corporate policies, standards, and development approaches have provided frameworks for engineers and product managers. Emerging technologies require the development of new frameworks. In the early 2000s, industry had to get serious about computer security. Today, we have a new challenge: How do you turn the goal of responsible AI into code?



How to help data scientists adapt to business culture


Businesses themselves don't understand what the data science discipline is, the backgrounds data scientists come from, and what it's going to take to acculturate these highly trained specialists to how a business operates and what it needs. Many data scientists have lived their lives in environments funded by university grants that enabled them to pursue highly theoretical projects that are all about the quest for answers, but not necessarily about finding definitive solutions for why customers seem to be suddenly favoring another brand, or why your manufactured products are suddenly experiencing more failures. Companies also struggle with integrating data scientists with their existing business and IT workforces. Often, existing business units and IT have little in common with data scientists, and there are no existing workflows that can help them learn how to work together optimally. Another issue is that businesses aren't always sure what (and when) to expect analytics and results out of their big data projects. Successful use cases exist in most industries, but companies still don't have a good feel for knowing when a data science or analytics project is moving forward and when it is stagnating.



20 ways banks can get AI right

Try to create ‘segments of one’ by collecting the volume and variety of data that can empower you to pursue automated hyper-personalisation. When clients feel that your service is sensitive and responsive to their individual preferences, they will be happy to share more and more information with you. Think about a situation where you offer the client the chance to give money to a charity through ‘rounding prices up’. For example, the client purchases a coffee for £1.79 and you offer to have the remaining £0.21 put into a charity pot; once the client has collected £20, the pot can be given to a charity, and you, as a bank, will match it with an equal donation. Let’s say the client is a paediatrician. In this case, the three potential charities the client can choose from should be about health, children, and medical research. Another client is a music teacher, in which case the three choices can be related to classical music, early talent, and education. These elements of hyper-personalisation have to be fully automated and ideally be propelled by some level of AI.
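
The round-up mechanic is simple arithmetic; here is a toy sketch using the article's numbers (amounts in pence to avoid floating-point issues, names illustrative):

```python
def round_up_pence(price_pence: int) -> int:
    """Pence needed to round a price up to the next whole pound."""
    remainder = price_pence % 100
    return 0 if remainder == 0 else 100 - remainder

pot = round_up_pence(179)   # £1.79 coffee -> 21p into the charity pot
# ... further purchases accumulate into `pot` ...

if pot >= 2000:             # the £20 threshold from the example
    donation = pot * 2      # the bank matches the client's pot
```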


Understanding the convergence of IoT and data analytics

Simply collecting IoT data is not enough — “Organisations need to turn this data into value in both a batch (using traditional analytics) and real-time context. It is also not desirable, nor possible in some cases, to do all of your processing at the enterprise level (in the cloud or data centre, for example).” As is the nature of IoT devices, decisions will often need to be made in a localised fashion, including on the device itself, and these decisions will be largely driven by models derived from analytical processes and historical data. “The ability to make the edge ‘smarter’, offload compute workloads to the edge for more efficient processing, support localised or independent/disconnected processing, reduce decision latency, and reduce data transfer requirements are all benefits that may be applied to almost any vertical,” continues Petracek. “Analytics, and the operationalisation of analytical models and pipelines, presents a huge opportunity to organisations, especially given the level of real-time information and context that IoT can provide.”


2020 is about digital optimization not digital transformation


Digital optimization isn’t easy, because different teams are likely to have invested in solutions that don’t communicate with one another and aren’t easy to integrate. Further, every team may be going digital at a different pace – those on the front line dealing with customers every day are under more pressure than those running the organization’s core operations. However, without digital optimization, organizations will be unable to eliminate the silos in their processes even when teams embrace collaboration in the spirit of digital innovation. ... The important thing to remember when optimizing digital investments is that the organization has one goal, one mission, and one vision. Hence, the roadmap must be made up of simple milestones that have an impact, ideally at the enterprise level. At the end of the day, organizations must understand the importance of optimizing their investments in digital, and prioritize it over spending that merely broadens their digital portfolio.


Managing Trade-offs: Prediction, Adaptability and Resilience


One critical new way of working that CEOs must “bottle” is organizational learning through local experimentation and global scaling. Lockdown has not only liberated the CEO, it has also freed local leaders from top-down governance. Often asking for forgiveness, rather than permission, they’ve innovated, disrupted and bullied their way to solutions that surmount obstacles and serve customers. In doing so, local teams have found support from the center. Some global leaders helped scale top solutions across the firm. They reimagined marketing and sales budgets overnight, showing the organization what costs are critical and what are dispensable. They solved huge supply chain issues, teaching the organization how to strengthen its operations. In order to ensure that this burst of experimentation and learning doesn’t become a historical oddity, leading CEOs will systematically protect the fundamental new relationship between global and local. They will set a clear agenda for the core business (or, as we like to call it, “Engine 1”): Continue the same pace of experimentation and learning throughout the long dance.


EY: revolutionising supply chain management with blockchain

While in traditional supply chains production is recorded digitally, when it comes to shipping, Brody explains, maintaining information continuity across systems and enterprise boundaries is a challenge: there are “oceans of digital data but only islands of useful information.” Systems such as electronic data interchange (EDI) and XML messaging are being used by these companies to try to maintain information continuity, but even these systems pose their own challenges, such as being out of sync and moving data only one stop down the supply chain. “The result: inventory that seems to be in two places at once,” added Brody. “These systems were created for an era of big, vertically integrated companies with large, but mostly static supply chains.” Although relevant 30 years ago, in today's modern supply chain this is no longer the case. ... “Until the advent of bitcoin and blockchain technology, the only way you could get a large number of entities to agree upon a shared, truthful set of data, such as who has what bank balance, was to appoint an impartial intermediary to process and account for all transactions,” highlighted Brody.


Microsoft is suddenly recommending Google products

Not merely extensions, but great extensions. I'm tempted to suspect a lawyer may have written that. Or at least someone in the Google marketing department. Naturally, I asked Microsoft why it had suddenly lurched from prickly to cuddly. Could it be that Google and Microsoft had a kiss-and-make-up Zoom call -- I mean, a Microsoft Teams call? Or a Google Meet encounter? Microsoft declined to comment. Perhaps, you might think, Microsoft has chosen to play nice merely because that's its brand image these days. Or perhaps some Redmonder stopped to think that, indeed, Edge doesn't currently enjoy enough of its own extensions. My delvings into Redmond's innards suggest the latter may have driven the decision even more than the former. You really don't want to annoy your customers, do you? Especially when you can't currently offer them what they need. Of course, Edge is based on Google's Chromium platform. In my own experimentations, I've found it to be a more pleasant experience than Chrome. Just that little bit more responsive and generally brighter -- though I can't quite cope with Bing as my default search engine.


Expanding Data Governance into the Future


Recognition that good Data Governance has become a must has come none too soon. Donna Burbank, Managing Director at Global Data Strategy, notes that many companies are beginning or planning to begin a Data Governance program, including a broader range of industries than before. However, a Data Governance framework that succeeds in one business area does not necessarily translate across the entire enterprise, or even to another company. Freddie Mac tried several times to implement DG driven by IT, and nothing stuck until a next-generation proactive and collaborative Data Governance took hold. Unfortunately, many companies, like Freddie Mac, get stuck in old patterns, trying to evangelize rigid Data Governance practices, gumming up operations, and fostering mistrust. Firms in this situation, according to Derek Steer, CEO at Mode, end up governing the wrong amount of data (missing the highest priority data assets) or enforcing Data Governance poorly (spending too much or too little time maintaining Data Governance logic). The first steps include understanding lessons from initial DG processes, how DG has changed, and how the next generation works better to support the business.


Amazon Faces A New Opponent: Some Of Its Own Tech Employees

Tech employees are speaking out for their blue-collar counterparts partly because the warehouse workers asked them to. Costa, who had been at the company for 15 years before she was fired, says warehouse workers reached out in March to the Amazon Employees for Climate Justice (AECJ), an internal group she co-founded two years ago, for help and support during the pandemic. “Tech workers are ‘a valued resource,’” Costa says. “They [Amazon management] see us as less expendable than warehouse workers because they know they can’t just throw more bodies at our seats if we leave. We have more leverage, and that’s why tech workers have much more privilege and have that much more responsibility to speak out.” AECJ organized a one-hour video call in mid-April during which warehouse workers could speak to Amazon tech employees who were interested to hear from them directly. The invite was sent out via Amazon’s internal e-mail system on Friday, April 10. “It got 1,550 accepts on a Friday afternoon, when New York, Europe and India were already off the clock,” Costa said.



Quote for the day:


"Leadership without character is unthinkable - or should be." -- Warren Bennis


Daily Tech Digest - May 09, 2020

Hiring Open: CISOs, CDOs and on-demand CIOs

“It is probably the first time in history that more than 25% of all IT roles being hired for are in information security,” said Suvarna Ghosh, Founding Partner, Maxima Group. “Earlier, such roles would barely constitute 10% of all IT hirings.” Companies are focusing on hiring security specialists in IT roles as the focus is on augmenting the operational effectiveness of the team. “Companies are also introducing some new roles for CISO and assistant CISO positions. New roles are also being created because the focus is on ensuring that information security vulnerabilities can be better dealt with,” Ghosh adds. The typical criteria are experience in the security and network domain and the ability to ensure that the virtual workspace infrastructure is secure. ... While there is a surge in demand for CISOs and CDOs, CIOs may not benefit from the trend. In fact, the surge in demand for the two roles could come at the expense of CIOs. The demand for CTO and other IT roles has gone down from 93% pre-Covid to 74% now, according to a survey conducted by Maxima Group, whose clientele includes MNCs and VC-funded startups.



Riding the Cloud to the Bank of the Future

What is now critical is the value of co-development and co-innovation, as not all new market entrants need to be seen as competitors. The pace of change today means it is impossible to keep innovation in-house. Instead, incumbents are already starting to collaborate with fintechs and other third-party technology providers. This allows them to benefit from the expertise of companies which specialise in the innovative technologies that will help them compete, but which would be too expensive and take too long to develop internally. A cloud-enabled, platform-based approach is the launchpad from which banks and fintechs can collaborate and innovate more easily – creating new value and driving new market opportunities in an Open Banking world. Additionally, in a partnership ecosystem a cloud-based environment allows banks to safely test and explore partnerships – and to help regulators encourage innovation whilst ensuring consumer protection. The current situation has brought into sharp focus the need for established banks to learn from the digital disruptors.



Data lakes serve as vast collections of raw enterprise data. Using them involves migrating or copying raw data — structured, unstructured and semi-structured — as well as transformed data for specific purposes, such as analytics, visualization and reporting. Data lakes once held the promise of making it easier to ingest, combine and analyze diverse data in service of machine learning and artificial intelligence (AI) efforts. In reality, as data lakes host more and more data, it becomes difficult for users to know what's in them and how the data is connected. In essence, data lakes are raw data in a large distributed file system like the one on your PC. So, the whole enterprise spends as much time looking for stuff in the data lake as we do individually on our laptops. You may not realize it, but you're probably already familiar with knowledge graphs, since they're used extensively by Google, Facebook, Apple, LinkedIn, Uber and many others. Knowledge graphs connect data based on what it means, without duplicating or copying the data. This effectively allows companies to act on their data as if silos don't exist! It also carries the benefit of avoiding complex ETL (extract, transform, load) jobs and saving money on cloud hosting costs of duplicated data.
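
As a minimal sketch of the knowledge-graph idea, the snippet below links records from two notional silos as triples and queries them as one dataset, without copying the underlying data stores. It assumes the rdflib package; the URIs and fields are illustrative.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Link a customer record in one silo to an order record in another.
g.add((EX.customer42, EX.name, Literal("Acme Ltd")))
g.add((EX.customer42, EX.placed, EX.order7))
g.add((EX.order7, EX.amount, Literal(99.50)))

# Query across the "silos" as if they were a single dataset.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?name ?amount WHERE {
        ?c ex:name ?name ; ex:placed ?o .
        ?o ex:amount ?amount .
    }""")
for name, amount in results:
    print(name, amount)
```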


Resetting the 5G goalposts: How the US declares victory

Conceivably, if the component containing 5G radio technology were based on a completely open plan -- as much a "white box" for developers and engineers as the "clone" PCs of the 1980s -- then the barrier to entry for other players to get involved and become competitive, could be knocked down. The threat of vendor lock-in for any operator contracting with Huawei, Ericsson, or Nokia would evaporate, as Cisco, Samsung, and potentially others such as Qualcomm and Intel, could suddenly become options. Perhaps the name of the startup entering 5G's new open ring has yet to be concocted. Among the O-RAN Alliance's members are China Mobile, AT&T, Verizon, NTT DoCoMo, Sprint, Cisco, Dell Technologies, Facebook, Microsoft, Nokia, Ericsson, IBM, Intel, and ZTE. China's interests are well represented, along with America's and Europe's. Huawei, however, has been an active opponent of O-RAN, arguing since well before the pandemic that while cheaper up-front, a white box would be more expensive and difficult to maintain over time, and would never perform as well as its own components.


Five ways to prevent a ransomware infection through network security


First and foremost, patching needs to be under control. Many businesses struggle with this, especially with third-party patches for Java and Adobe products, and hackers love this. Until software updates are deployed in a timely fashion, the organization is a sitting duck. A network is just one click away from compromise. Effective malware protection is also a necessity. Steer away from the traditional and look more toward advanced malware tools, including non-signature/cloud-based antivirus, whitelisting and network traffic monitoring/blocking technologies. Data backups are critical. Organizations' systems -- especially the servers that are at risk of ransomware infections -- are only as good as their last backup. Discussions around backups are boring, but they need to be well thought out to minimize the impact of the ransomware that does get through and encrypts critical assets. Network segmentation is another important part of ransomware protection, but it's only sometimes deployed properly.


Predicting Failing Tests with Machine Learning

From a machine learning perspective, the feature vectors necessary for supervised learning are formed of metadata about commits (e.g. the number of files or the average age of files, to name just two). This is taken from the source control system. The class we want to predict is the result of a test, taken from the test execution data. We bundle these data by test case, meaning we get one model for each test case and ML algorithm we evaluate. Features arise from domain knowledge and intuition, e.g. "larger code changes can break more things". At this point, the features are just descriptions of a commit, though. Whether they can explain a certain test case's results depends on the other features present, the total training data, and the algorithms used for learning, and is reviewed by evaluating the machine learning system with separate test and training data. ... One interesting thing to note is the different questions we get when presenting the system at conferences. At a tester-focused conference, questions were (as expected) test case focused, e.g. "Can we judge a test case's quality by whether it is easily predictable or not?"
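
A hedged sketch of this setup: one supervised model per test case, trained on commit-metadata feature vectors. The feature names, toy data, and choice of classifier are illustrative assumptions, not the authors' published configuration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One row per commit: [files_changed, lines_changed, avg_file_age_days]
X = [[12, 340, 95], [1, 8, 400], [25, 900, 30], [3, 40, 210],
     [18, 600, 60], [2, 15, 350]]
y = [1, 0, 1, 0, 1, 0]  # 1 = this particular test case failed after the commit

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict(X_test))  # predicted pass/fail for unseen commits
```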


EA & Agile – Not Mutually Exclusive After All!


While it is a given that agile development has become the norm for software development, true business agility requires more than having scrum teams delivering working solutions. Moreover, if you only focus on the small-scale kind of agility provided by agile software development, you might miss the forest for the trees: why do you want to be agile as an enterprise, and what does that require? An enterprise is more than just a bunch of local developments by small teams. The pieces of the puzzle that these teams work on must fit together somehow. And hopefully there is a vision of the future, aligned with organization strategy, a set of goals that the organization aims for. That is where Enterprise Architecture enters the stage. Both approaches have their merits and shortcomings. EA without agile may lead to slow and bureaucratic organizations that do not respond fast enough to changes and trends, and only having a horde of scrum teams without some integrative, overarching approach may lead to a disconnected landscape consisting of agile silos. However, if we build on the strengths of both approaches, we can create enterprises that move as a united whole without a central, command-and-control management that stifles local development and innovation.


The Difference Between Data Architecture and Enterprise Architecture

Although there is some crossover, there are stark differences between data architecture and enterprise architecture (EA). That’s because data architecture is actually an offshoot of enterprise architecture. In simple terms, EA provides a holistic, enterprise-wide overview of an organization’s assets and processes, whereas data architecture gets into the nitty-gritty. The difference between data architecture and enterprise architecture can be represented with the Zachman Framework, an enterprise architecture framework that provides a formalized view of an enterprise across two dimensions. ... Good data leads to better understanding and ultimately better decision-making. Those organizations that can find ways to extract data and use it to their advantage will be successful. However, we really need to understand what data we have, what it means, and where it is located. Without this understanding, data can proliferate and become more of a risk to the business than a benefit.


To identity and beyond — One architect's viewpoint

We've been chasing the dream of single sign-on (SSO) for as long as I can remember. Some customers believe they can achieve this by choosing the "right" federation (STS) provider. Azure AD can help significantly to enable SSO capabilities, but no STS is magical. There are too many "legacy" authentication methods which are still used for critical applications. Extending Azure AD with partner solutions can address many of these scenarios. SSO is a strategy and a journey. You can't get there without moving towards standards for applications. Related to this topic is the journey to passwordless authentication, which also does not have a magical answer. Multi-factor authentication (MFA) is essential today. Add to it user behavior analytics and you have a solution which prevents the majority of common cyber-attacks. Even consumer services are moving to require MFA. Yet, I still meet with many customers who do not want to move to modern authentication approaches. The biggest argument I hear is that it will impact users and legacy applications. Sometimes a good kick might help customers move along - Exchange Online announced changes. Lots of Azure AD reports are now available to help customers with this.


COVID-19 Has a Data Governance Problem


Data governance problems also arise with how to illustrate and represent the standardized data to the public. There are several techniques that can be used to create a graph representing the scale of the number of new cases by country. Two common types are a linear scale and a logarithmic scale. When the numbers are skewed towards high values, a logarithmic scale is preferable for showing the rate of change in the number of new cases over time. The logarithmic scale is extremely useful for displaying large ranges of data, but it may not be understood by all readers. Without proper labeling on such a graph, a reader can be misled into believing that it uses a linear scale. For example, on a logarithmic scale, the large number of confirmed COVID-19 cases in the United States can be graphed alongside the smaller numbers of confirmed cases in Japan and South Korea. A linear scale is most often assumed for a line graph, since it is the first method learned by students in primary school.
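
To make the labeling point concrete, here is a minimal matplotlib sketch plotting the same skewed series on linear and logarithmic axes, with the scale named in each axis label so readers are not misled; the numbers are illustrative.

```python
import matplotlib.pyplot as plt

days = range(8)
cases = [1, 3, 10, 32, 100, 320, 1000, 3200]  # roughly exponential growth

fig, (lin, log) = plt.subplots(1, 2, figsize=(8, 3))
lin.plot(days, cases)
lin.set_ylabel("New cases (linear scale)")   # label states the scale
log.plot(days, cases)
log.set_yscale("log")                        # compresses the large values
log.set_ylabel("New cases (log scale)")
plt.show()
```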



Quote for the day:


"The real problem is not whether machines think but whether men do." -- B. F. Skinner


Daily Tech Digest - May 08, 2020

Autonomous cars: The cybersecurity issues facing the industry


Most companies know that they alone can’t create the ethical decisions behind the software, which has to balance the safety of the passengers against the safety of people outside the vehicle. The big challenge, then, lies in creating regulations that formalise the limits of reasonable decision-making so that companies can program the vehicles to act within these parameters. Another area to consider is the security of the firmware and software; not only does it face the typical threat of cyber attacks, but for self-driving vehicles, security means safety. Automakers must be able to ensure that their software and firmware are secure, which is made more complex by the connectivity of an IoT system, where one vulnerability could open up the system to further threats. At the same time, the software must be reliable to ensure that the cars can run continuously and not break down because of a glitchy update. Companies such as Tesla have a very security-conscious approach to development, with security testing and research part of the normal product research and development process. This is not always the case for the traditional automakers, who, in contrast, don’t have as mature an approach to security.


The new cyber risk reality of COVID-19 operating mode

One of the things we are seeing right now is the importance of viewing cybersecurity in a business context. Job one is to sustain the activities and enable the organization to achieve its mission. That is not new, but many companies are getting a new perspective on the importance of cybersecurity as an enabler for the business. Security and risk leaders need to have the power to frame both cyber risk and cybersecurity controls in a business context. This allows for sound justification for spending and other priorities. It also means focusing on new risk priorities stemming from our current operating mode, making sure we are optimizing our controls to address those risks, and achieving real-time risk visibility as the times require. Marking a departure for many organizations that traditionally have relied on periodic assessments that quickly go stale, security and risk leaders can now leverage software and methodology to dynamically evaluate the new cyber risk reality of this operating mode and build the needed capabilities to control it. Some may think that we will never be able to do enough.


When two chains combine


In an increasingly digitised world, emerging technologies such as blockchain afford organisations the opportunity to drive business value throughout their supply networks. According to Eric Piscini, Principal and Global Blockchain Leader at Deloitte Consulting LLP in the US, supply chains across industries and countries will be reimagined, improved and disrupted by blockchain technologies. We now have safer and more efficient ways to connect with business partners as well as to track and exchange any type of asset. The ability to deploy blockchain technologies to create the next generation of digital supply chain networks and platforms will be a key element in business success. Building supply chain capabilities with digital technologies can result in greater levels of performance. Blockchain is an enabling technology, which is most effective when coupled with other next-generation technologies such as the Internet of Things (IoT), robotic cognitive automation or smart devices. In this paper, Deloitte’s blockchain and supply chain professionals share insights on how blockchain-enabled technology can mitigate four cross-industry supply chain issues — traceability, compliance, flexibility and stakeholder management. The paper draws on use cases from the pharmaceutical industry (product tracking), automotive industry (purchasing platform) and food industry (know your supplier).



Chinese Military Cyber Spies Just Caught Crossing A ‘Very Dangerous’ New Line

The military espionage group’s tactics, described by Check Point as “very dangerous,” involved hijacking diplomatic communication channels to target specific computers in particular ministries. The malware-laced communications might be sent from an overseas embassy to ministries in its home country, or to government entities in its host country. “The group has introduced a new cyber weapon crafted to gather intelligence on a wide scale, but also to follow intelligence officers directives to look for a specific filename on a specific machine.” Meet Naikon, a cyber reconnaissance unit with links to the People’s Liberation Army, outed in a ThreatConnect and Defense Group Inc. report in 2015. Back then, the group’s operations were described as “regional computer network operations, signals intelligence, and political analysis of the Southeast Asian border nations, particularly those claiming disputed areas of the energy-rich South China Sea.” And while Naikon has been seemingly quiet since then, nothing has changed. Check Point told me that it has actually been “penetrating diplomats’ PCs and taking over ministerial servers—making the group very successful in gathering intelligence from high-profile personnel and able to control critical assets.”


Data scientists often start out as business analysts and boost their math and analytics skills with additional courses or on-the-job training. Some also start out right in data science, with academic backgrounds in statistics or artificial intelligence. In addition to math and business domain knowledge, data scientists typically need programming skills to be able to develop prototypes of their models. R and Python are the most common programming languages for the job, but Scala, Julia, JavaScript, Swift, Matlab and Go can also be useful. Data scientists should also be familiar with data visualization tools like Power BI, Tableau and Qlik. Andrew Stevenson, CTO at Lenses.io, a company that offers data platform monitoring technology, once worked on a project with data scientists from an energy trading desk. "They were able to build the models, test and run locally," Stevenson said. And then they hit the limit of their expertise, he said. "The models were not production-grade. They had no monitoring, they weren't version controlled, they were not easily developed in a repeatable way.


Successful Digital Transformation Requires Data Transformation

In this context, data transformation doesn’t just encompass the traditional “extract, transform, load” processes of collecting, cleaning, reformatting, and storing data. It also includes the subsequent analysis and leverage of collected (or real-time) data to inform a company’s decision-making, its operations, and its high-level digital transformation strategies. Everyone agrees that the massive amounts of digital data generated by business and consumer activity represent an incredibly valuable resource – at least theoretically. In practice, however, this ever-expanding data resource is underutilized today. In a survey of 190 U.S. executives, Accenture found that only 32% can realize tangible and measurable value from data. Even fewer – 27% – said data and analytics projects produce insights and recommendations that are highly actionable. Without data-driven insights, digital transformation initiatives are flying blind. By contrast, organizations that make good use of data can achieve a range of benefits.
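
For readers unfamiliar with the "extract, transform, load" steps named above, here is a minimal pandas sketch; the file names and columns are illustrative assumptions.

```python
import pandas as pd

raw = pd.read_csv("sales_raw.csv")                         # extract: collect raw data
clean = raw.dropna(subset=["order_id"])                    # transform: drop bad rows
clean["order_date"] = pd.to_datetime(clean["order_date"])  # reformat types
clean.to_parquet("sales_clean.parquet")                    # load: store for analytics
```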


The United States quietly concedes defeat on Huawei's 5G


The timing of this move given the circumstances is extremely odd. However, conceding that Huawei will have a role in setting global 5G standards is an indication that the White House is now aware of the realities at play. The United States has effectively lost the 5G war against Huawei. Having failed to get the company blacklisted throughout the world, Washington is now resigned to the fact that it will dominate the standards of the next-generation internet, and is therefore forced to work with it in doing so, rather than against it. The outcome marks a major strategic defeat for the United States on this issue. First of all, despite everything we are hearing from the U.S. right now, policy and rhetoric are different. As I have set out previously, many American politicians are showcasing anti-China stances in the pursuit of electoral races, and this does not always translate into practical policy outcomes. Trump sees opportunity in bashing China right now over the COVID-19 pandemic; however, what he says and suggests does not tell us everything he will do in practice, and thus it is important to read deep between the lines during this period.


Protecting corporate data in popular cloud-based collaborative apps

Unfortunately, companies are not able to monitor all of the documents or data being shared across these apps. For example, Slack has private channels and direct messaging capabilities where admins cannot view what information is being shared unless they are a part of the conversation. As we have witnessed with previous data breaches, there is a risk that sensitive data will not always be shielded from anyone outside your organization. Slack previously experienced a data breach back in 2015 as a result of unauthorized users gaining access to the infrastructure where usernames and passwords were stored. Salesforce has also had security issues in the past, exposing users’ stored data to third parties due to an API error. These are just a few instances that should serve as a stark warning to enterprises that they can’t rely solely on app providers to ensure the security of their data – they must implement their own proper security solutions and processes in tandem. While these cloud-based services have native security capabilities in place to protect the infrastructure against intrusions, the onus is on the enterprises using these tools to ensure files that are being stored and accessed in the cloud are secure.


Governance, Risk, Compliance and Security: Together or Apart?

"Even within IT, you have project risks, you have development risks, you have risks that are associated with audit and compliance, but they're not dealt with in a very comprehensive way," said Christine Coz, principal research advisor at Info-Tech Research Group. "The key thing is sponsorship at the right levels of people in those conversations and that there is a goal to sort of act as a subset of the board of directors to ensure from an oversight perspective that there's a management of controls in place, that risk acceptance is in line with corporate tolerances and that you have a consistent level of risk tolerance and acceptance across the enterprise." The digitization of everything necessitates the need for ERM, not only because digital businesses operate much faster than their analog counterparts, but because risk management is a brand issue. "When you have a lot of competition in an industry, which is where I think we are now, every product and service [is] replaceable, our car insurance, your mortgage, our telecom carrier, your food app, you name it," said Forrester's Valente.


Dell EMC, Pure Storage upgrade storage offerings

In consolidating the best of its midrange technology, Dell claims PowerStore is up to seven times faster and three times more responsive than previous Dell EMC midrange storage arrays and is designed for six-nines (99.9999%) availability. It can house up to 96 SSDs in a 2U chassis and uses both NVMe flash storage and Intel Optane SSDs. Dell promises a 4:1 compression and deduplication ratio. “Customers tell us a main obstacle keeping them from achieving their digital transformation initiatives is the constant tug-of-war between supporting the ever-increasing number of workloads – from traditional IT applications to data analytics – and the reality of cost constraints, limitations and complexity of their existing IT infrastructure,” says Dan Inbar, president and general manager of storage at Dell Technologies, in a statement. “Dell EMC PowerStore blends automation, next generation technology, and a novel software architecture to deliver infrastructure that helps organizations address these needs.” PowerStore uses machine learning and intelligent automation for faster delivery of applications and services, claiming to cut staff time by up to 99% by automating features such as load and volume balancing or migrations.



Quote for the day:


“Great leaders don't need to act tough. Their confidence and humility serve to underscore their toughness.” -- Simon Sinek


Daily Tech Digest - May 07, 2020

Is Passwordless Authentication the Future?

Implementing passwordless authentication platforms tends to be more complex than implementing their credential-based counterparts, but the end-user experience on a large-scale deployment is much simpler and more likely to be immediately adopted, Kothanath claims. "These devices, and the data they collect and store, are becoming part of your digital identity. Your smartphone holds a number of attributes (phone number, IMEI number, carrier information, digital certificates, GPS location, manufacturer information, CPU unique ID, etc.) which can be used to uniquely authenticate you, negating the need for a password." It is extremely difficult to compromise these devices, he says, and technology is available today to enhance the security and reliability of device-based authentication. "As the value of the target asset increases, there can be other trusted devices, such as a YubiKey and other hardware-based tokens, which can be governed under much tighter controls. All of this is trending towards a cutting-edge, identity-based authentication system and privilege management approach to eliminating passwords from the security equation," Kothanath notes.
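To illustrate the idea (not Kothanath's actual system), here is a minimal Python sketch in which a server matches device attributes reported at login against a fingerprint captured at enrollment. All attribute names and values are hypothetical, and real deployments would add signed attestations and hardware-backed keys.

```python
# Hypothetical device-attribute matching: the server hashes a canonical view
# of the device's attributes and compares it to the enrolled fingerprint.
import hashlib

def device_fingerprint(attrs: dict) -> str:
    # Sort keys so attribute order never changes the fingerprint.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Fingerprint captured at enrollment time (invented values).
enrolled = device_fingerprint({
    "imei": "356938035643809",
    "carrier": "ExampleTel",
    "cpu_id": "A14-0042",
})

def authenticate(reported_attrs: dict) -> bool:
    # Passwordless check: does the reporting device match the enrolled one?
    return device_fingerprint(reported_attrs) == enrolled

print(authenticate({"imei": "356938035643809",
                    "carrier": "ExampleTel",
                    "cpu_id": "A14-0042"}))  # True
```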



Credit card skimmer caught hiding behind website favicon

Upon investigation, though, Malwarebytes discovered that the domain name of myicons.net was registered just a few days prior and hosted on a server previously identified as malicious. Further, myicons.net appropriated all its content from another site named iconarchive.com simply by pointing to that site within an HTML iframe. Digging further, Malwarebytes found that several e-commerce sites were loading an Adobe Magento favicon from the myicons.net domain. Though the security firm suspected that this favicon was malicious, it was unable to find any extra code inside it. However, it did uncover malicious activity on the e-commerce sites that were loading the Magento favicon from myicons.net. Instead of serving up an image file, the myicons.net server was actually loading code consisting of a credit card payment form. This form is loaded dynamically and overrides the PayPal checkout option with its own menu for MasterCard, Visa, Discover, and American Express cards. In the end, any credit card information entered through this form is sent directly to the criminals.
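One practical takeaway: a favicon URL should return an image, not markup or script. Here is a minimal, hypothetical detection sketch using Python's requests library; the URL is a placeholder, not the malicious domain's actual path.

```python
# Hypothetical sanity check: an icon URL that serves anything other than an
# image Content-Type is a red flag worth investigating.
import requests

def favicon_looks_suspicious(url: str) -> bool:
    resp = requests.get(url, timeout=10)
    content_type = resp.headers.get("Content-Type", "")
    # Icons should be served as image/*; HTML or JavaScript is suspicious.
    return not content_type.startswith("image/")

print(favicon_looks_suspicious("https://example.com/favicon.png"))
```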


The chief digital officer and COVID-19


CDOs should be considering how to build in work flexibility to account for employees taking care of kids at home by, for example, shifting schedules; ensure access to resources such as tools and information-sharing intranets; educate less digitally fluent colleagues so they don’t feel overmatched by new demands with, for example, brief training sessions; and have frequent touchpoints such as digital town halls and pulse surveys, to gauge people’s mental and physical well-being. This goes beyond the typical work check-ins and is absolutely necessary to help employees deal with the unprecedented stress of this current environment. People are a company’s most precious resource, and how successful a CDO is in making sure that his or her employees are as healthy and supported as possible will be a testament to his or her true leadership skills. ... CDOs should emphasize design-thinking principles, which are predicated on building empathy with customers, to understand their motivations. We know of CDOs who are reaching out to customers for one-on-one conversations, leading customer interviews, and compiling surveys to better understand the challenges that customers face.


Open source database ScyllaDB 4.0 promises Apache Cassandra, Amazon DynamoDB drop-in replacement

ScyllaDB also brings some noteworthy features from a DevOps perspective. Change Data Capture (CDC) allows users to track changes in their data, recording both the original data values and the new values to records. Changes are streamed to a standard CQL table that can be indexed or filtered to find critical changes to data. Scylla Operator is a Kubernetes extension for Scylla cluster management. It currently supports deploying multi-zone clusters, scaling up or adding new racks, scaling down, and monitoring Scylla clusters with Prometheus and Grafana. Both CDC and Scylla Operator are currently in beta and are expected to be fully rolled out soon, as per ScyllaDB's development model. Indeed, having watched ScyllaDB grow from relatively early in its lifecycle, we can attest that it is catching up and adding new features at a rapid pace. Laor mentioned they have a slew of additional features in the works. When discussing what it is that enables ScyllaDB to make such rapid progress, Laor said that the company now employs about 100 people, and business has been growing well, too.
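Because CDC changes land in a standard CQL table, they can be read like any other table. A minimal sketch with the Python cassandra-driver follows; the keyspace and table names are hypothetical, and the "<table>_scylla_cdc_log" naming is an assumption about the log table's name.

```python
# Minimal sketch of reading a CDC log table with the cassandra-driver.
# Keyspace/table names are invented; the log table name is assumed to follow
# the "<base table>_scylla_cdc_log" convention.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("shop")  # hypothetical keyspace

# Each row in the log records one change to the base table, including both
# the original and the new column values.
for change in session.execute("SELECT * FROM orders_scylla_cdc_log LIMIT 20"):
    print(change)

cluster.shutdown()
```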



Remote access needs strategic planning right now

Over the next two to four years, enterprises have the opportunity to strategically plan for a converged architecture that addresses both networking and security: the secure access service edge or SASE (pronounced “sassy”). SASE combines WAN capabilities with security, and delivers them via services based on identity, time, context, compliance with enterprise policies and risk assessment, according to Gartner, which created the term. Technology suppliers are moving rapidly to extend their network and security solutions from the data center and branch office to the remote office, and this could fit the SASE model. Employees working out of their houses need access to any application, from any device, from any location and on any available network. They use critical applications such as VoIP, video and SaaS that require fast, low-latency connections. And because this access is deployed widely, the solution must be easy to install, simple to operate, flexible and cost effective. Work-at-home users must have direct internet access to cloud-based applications to overcome performance and latency issues with traditional remote access VPNs that route traffic from the user to the data center to the cloud, back to the data center and finally back to the user.


COVID-19, Cyber Security and the “New Normal”

First, from a technology perspective, the large-scale remote-working experiment we are having to endure is simply working: platforms have scaled, and networks have not collapsed. We may or may not like it, but we are starting to adjust to new ways of interacting. More generally, the digital economy has successfully scaled up at pace, and the COVID-19 crisis has dramatically accelerated the digital transformation of many sectors. It is impossible to say what the long-term impact will be (e.g. to what extent we will continue to work from home), but this is bound to bring a positive outlook for the tech industry at large. Second, over the last six weeks and in the face of countless scams and fraud attempts, we have had in front of us the largest real-life cyber security awareness campaign anyone could ever have imagined, and this is bound to have a significant cultural impact on people, particularly if the lockdown continues or comes back. Cyber security has had to be on the agenda, as a necessary dimension of lives and business activities now entirely dependent on digital services. Nobody can risk a cyber-attack right now, and good cyber security measures have become key to keeping the lights on. One cannot imagine cyber security moving down the priority list with senior executives post-COVID.


How Tesla uses open source to generate resilience in modern electric grids

"(The) majority of our microservices run in Kubernetes, and the pairing of Akka and Kubernetes is really fantastic," Breck said. "Kubernetes can handle coarse-grained failures in scaling, so that would be things like scaling pods up or down, running liveness probes, or restarting a failed pod with an exponential back off. Then we use Akka for handling fine-grained failures like circuit breaking or retrying an individual request and modeling the state of individual entities like the fact that a battery is charging or discharging." For modeling each site in software, this so-called digital twin, they represent each site with an actor. The actor manages state, like the latest reported telemetry from a battery and executes a state machine, changing its behavior if the site is offline and telemetry is delayed. It also provides a convenient model for distribution, concurrency, computation, and failover management. The programmer worries about modeling an individual site in an actor, and then the Akka runtime handles scaling this to thousands or millions of sites. It's a very powerful abstraction for IoT in particular, essentially removing the worry about threads, or locks, or concurrency bugs.


What does the new NHSX contact tracing app for coronavirus mean for data protection?

Although elementary, automated contact tracing has significant practical limitations. Bluetooth is an imprecise tool, and it risks false positives such as proximity through a wall. Necessarily it is ‘blind’ to disease transmission in spaces vacated by infected individuals moments before, where no Bluetooth handshake between handsets would take place. Crucially, automated contact tracing relies on uptake. In the UK, 60% of the population would need to download the app for it to make a positive difference, and with 20% of Britain’s population estimated not to own a smartphone and many older devices with limited app capability, many people would be excluded. A further difficulty arises from the multiplicity of contact tracing apps currently under development – how will they work together? Moreover, once international travel resumes, will national contact tracing apps be interoperable? Finally, there is a risk that automated contact tracing will be seen as a panacea by ‘fanboys’ for utopian technological solutions, whereas in reality, it can only be part of the answer, along with adequate infection testing and traditional confirmatory contact tracing, which are essential components of any useful roll-out.
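The uptake arithmetic above is worth making explicit: if roughly 20% of the population owns no smartphone, reaching 60% population coverage requires three quarters of all smartphone owners to install the app.

```python
# The coverage arithmetic from the paragraph above, made explicit.
population_share_needed = 0.60   # uptake needed for the app to help
smartphone_owners_share = 1.0 - 0.20  # 20% of Britons own no smartphone

required_uptake_among_owners = population_share_needed / smartphone_owners_share
print(f"{required_uptake_among_owners:.0%} of smartphone owners")  # 75%
```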


Industry 4.0 requires whole-of-business approach to be successful


Even though the IoT revolution actually started in industrial settings in the 1960s and '70s, operators today still limit measurements to what is easy to measure. They also tend to apply the analytics and data created by those measurements too narrowly. As operational technologies (OT) and IT continue to blend (with operational analytics, for example, taking place on cloud platforms and the output being subsequently shared with ERP and supply chain management systems), using all operational data and analytics to uncover whether technology-led process improvements are actually achieving the objectives for which they were deployed is more important than ever. "What we saw in our test bed activities is all these parties need to be brought onto the same page in order to declare success," said Jacques Durand, co-chair of IIC's Digital Transformation working group and lead author of the report. "Saying that you just want to reduce a product error rate … is one thing but you have to be precise: What product? When do you measure? There are many aspects of measuring the condition of what you are measuring." Because of the tight integration taking place between OT, IT, and business workflows, today's process improvement goals go well beyond the factory floor.


Data Gateways in the Cloud Native Era

Microservices influence the data layer in two dimensions. First, the architecture demands an independent database per microservice. From a practical implementation point of view, this can range from an independent database instance to independent schemas and logical groupings of tables. The main rule here is that only one microservice owns and touches a dataset, and all data is accessed through the APIs or events of the owning microservice. The second way a microservices architecture influences the data layer is through datastore proliferation. Just as it allows microservices to be written in different languages, the architecture gives every microservices-based system the freedom to use a polyglot persistence layer. With this freedom, one microservice can use a relational database, another a document database, and a third an in-memory key-value store. While microservices allow you all that freedom, it comes at a cost: it turns out that operating a large number of datastores is a burden that existing tooling and practices were not prepared for.
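As a sketch of the ownership rule, here is a minimal, hypothetical "orders" microservice that privately owns its datastore and exposes data only through its API; the names, endpoint, and SQLite storage choice are illustrative assumptions, not a production design.

```python
# Hypothetical microservice that owns its dataset: other services never touch
# the database file directly, they call this service's HTTP API instead.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# The service privately owns its datastore (here, a local SQLite file).
db = sqlite3.connect("orders_service.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)")
db.execute("INSERT OR IGNORE INTO orders VALUES ('o-1', 99.5)")
db.commit()

class OrdersAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # All external access flows through the API, never the database itself.
        rows = db.execute("SELECT id, total FROM orders").fetchall()
        body = json.dumps([{"id": r[0], "total": r[1]} for r in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), OrdersAPI).serve_forever()
```

A second service would run the same pattern against its own datastore, possibly a document or key-value store, which is exactly the polyglot persistence the article describes.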



Quote for the day:


"Make heroes out of the employees who personify what you want to see in the organization." -- Anita Roddick