Daily Tech Digest - May 15, 2020

The Past, Present, and Future of API Gateways


AJAX (Asynchronous JavaScript and XML) development techniques became ubiquitous during this time. By decoupling data interchange from presentation, AJAX created much richer user experiences for end users. This architecture also created much “chattier” clients, as these clients would constantly send and receive data from the web application. In addition, ecommerce during this era was starting to take off, and secure transmission of credit card information became a major concern for the first time. Netscape introduced Secure Sockets Layer (SSL) -- which later evolved to Transport Layer Security (TLS) -- to ensure secure connections between the client and server. These shifts in networking -- encrypted communications and many requests over longer lived connections -- drove an evolution of the edge from the standard hardware/software load balancer to more specialized application delivery controllers (ADCs). ADCs included a variety of functionality for so-called application acceleration, including SSL offload, caching, and compression. This increase in functionality meant an increase in configuration complexity.


Adapting Cloud Security and Data Management Under Quarantine

The current state of affairs is not something envisioned by many business continuity plans, says Wendy Pfeiffer, CIO of Nutanix. Most organizations are operating in a hybrid mode, she says, with infrastructure and services running in multiple clouds. This can include private clouds, SaaS apps, Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Though this specific situation may not have been planned for, the cloud allows for unexpected needs to scale and pivot, Pfeiffer says. “Maybe we envisioned a region being inaccessible but not necessarily every region all at once.” Normally it can be easy to declare standards within IT, she says, and instrument an environment to operate in line with those standards to maintain control and security. Losing control of that environment under quarantines can be problematic. “If everyone suddenly pivots to work from home, then we no longer control the devices people use to access the network,” Pfeiffer says. Such disruption, she says, makes it difficult to control performance, security, and the user experience.


While 78 per cent of organisations said they are using more than 50 discrete cybersecurity products to address security issues, 37 per cent said they used more than 100 cybersecurity products. Organisations that discovered misconfigured cloud services experienced 10 or more data loss incidents in the last year, according to the report. IT professionals have concerns about cloud service providers. Nearly 80 per cent are concerned that cloud service providers they do business with will become competitors in their core markets. "Seventy-five per cent of IT professionals view public cloud as more secure than their own data centres, yet 92 per cent of IT professionals do not trust their organization is well prepared to secure public cloud services," the findings showed. Nearly 80 per cent of IT professionals said that recent data breaches experienced by other businesses have increased their organization's focus on securing data moving forward.


Continuous Security Through Developer Empowerment

Before DevOps kicked in, app performance monitoring (APM) was owned by IT, who used synthetic measurements from many points around the world to assess and monitor how performant an application was. These solutions were powerful, but their developer experience was horrible. They were expensive, which limited the tests developers could run. They excelled in explaining the state through aggregating tests, but offered little value to a developer trying to troubleshoot a performance problem. As a result, developers rarely used them. Then, New Relic came on the scene, introducing a different approach to APM. Their tools were free or cheap to start with, making them accessible to all dev teams. They used instrumentation to offer rich results in developer terms (call stacks, lines of code), making them better for fixing problems. This new approach revolutionized the APM industry, embedded performance monitoring into dev practices and made the web faster. The same needs to happen for application security.


Data security guide: Everything you need to know

The move to the cloud presents an additional threat vector that must be well understood in respect to data security. The 2019 SANS State of Cloud Security survey found that 19% of survey respondents reported an increase in unauthorized access by outsiders into cloud environments or cloud assets, up 7% since 2017. Ransomware and phishing also are on the rise and considered major threats. Companies must secure data so that it cannot leak out via malware or social engineering. Breaches can be costly events that result in multimillion-dollar class action lawsuits and victim settlement funds. If companies need a reason to invest in data security, they need only consider the value placed on personal data by the courts. Sherri Davidoff, author of Data Breaches: Crisis and Opportunity, listed five factors that increase the risk of a data breach: access; amount of time data is retained; the number of existing copies of the data; how easy it is to transfer the data from one location to another -- and to process it; and the perceived value of the data by criminals.


This new, unusual Trojan promises victims COVID-19 tax relief


The malware is unusual as it is written for Node.js, a JavaScript runtime primarily used for web server development. "However, the use of an uncommon platform may have helped evade detection by antivirus software," the team notes. The Java downloader, obfuscated via Allatori in the lure document, grabs the Node.js malware file -- either "qnodejs-win32-ia32.js" or "qnodejs-win32-x64.js" -- alongside a file called "wizard.js." Either a 32-bit or 64-bit version of Node.js is downloaded depending on the Windows system architecture on the target machine. Wizard.js' job is to facilitate communication between QNodeService and its command-and-control (C2) server, as well as to maintain persistence through the creation of Run registry keys. After executing on an impacted system, QNodeService is able to download, upload, and execute files; harvest credentials from the Google Chrome and Mozilla Firefox browsers; and perform file management. In addition, the Trojan can steal system information including IP address and location, download additional malware payloads, and transfer stolen data to the C2.


Quantum computing analytics: Put this on your IT roadmap


"There are three major areas where we see immediate corporate engagement with quantum computing," said Christopher Savoie, CEO and co-founder of Zapata Quantum Computing Software Company, a quantum computing solutions provider backed by Honeywell. "These areas are machine learning, optimization problems, and molecular simulation." Savoie said quantum computing can bring better results in machine learning than conventional computing because of its speed. This rapid processing of data enables a machine learning application to consume large amounts of multi-dimensional data that can generate more sophisticated models of a particular problem or phenomenon under study. Quantum computing is also well suited for solving problems in optimization. "The mathematics of optimization in supply and distribution chains is highly complex," Savoie said. "You can optimize five nodes of a supply chain with conventional computing, but what about 15 nodes with over 85 million different routes? Add to this the optimization of work processes and people, and you have a very complex problem that can be overwhelming for a conventional computing approach."


COBIT Tool Kit Enhancements

The value of this tool is that it provides a convenient means of quickly assessing and assigning relevant roles to practices across the 40 COBIT objectives. COBIT promotes using a common language and common understanding among practitioners. Common terminology facilitates communication and mitigates opportunities for error. Using RACI charts and the new COBIT Tool Kit spreadsheet provides the guidance to help practitioners extract the COBIT practices relevant for each job role. Another benefit of compiling all practices into a single RACI chart is that metrics reporting can be better assessed. A user can filter all practices by accountability of a single role and then compare metrics reporting on those practices and determine whether sufficient coverage has been created. An assessment of that type is not as effective when RACIs are developed at the higher, objective level. The new spreadsheet can be found in the complementary COBIT 2019 Tool Kit. The tool kit is available on the COBIT page of the ISACA website.
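
To illustrate the kind of filtering described above, here is a minimal Python/pandas sketch; the column names, objectives and role values are purely hypothetical and do not reflect the toolkit's actual spreadsheet layout.

```python
import pandas as pd

# Hypothetical RACI chart flattened into one table: one row per
# (objective, practice, role) with its RACI assignment. Column names
# and values are illustrative, not the toolkit's actual layout.
raci = pd.DataFrame([
    {"objective": "APO01", "practice": "APO01.01", "role": "CIO", "raci": "A"},
    {"objective": "APO01", "practice": "APO01.02", "role": "CIO", "raci": "R"},
    {"objective": "BAI02", "practice": "BAI02.01", "role": "CIO", "raci": "C"},
    {"objective": "BAI02", "practice": "BAI02.02", "role": "CRO", "raci": "A"},
])

# Filter every practice for which a single role is Accountable, as a
# starting point for checking metrics-reporting coverage for that role.
cio_accountable = raci[(raci["role"] == "CIO") & (raci["raci"] == "A")]
print(cio_accountable[["objective", "practice"]])
```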


Build your own Q# simulator – Part 1: A simple reversible simulator


Simulators are a particularly versatile feature of the QDK. They allow you to perform various tasks on a Q# program without changing it. Such tasks include full state simulation, resource estimation, or trace simulation. The new IQuantumProcessor interface makes it very easy to write your own simulators and integrate them into your Q# projects. This blog post is the first in a series that covers this interface. We start by implementing a reversible simulator as a first example, which we extend in future blog posts. A reversible simulator can simulate quantum programs that consist only of classical operations: X, CNOT, CCNOT (Toffoli gate), or arbitrarily controlled X operations. Since a reversible simulator can represent the quantum state by assigning one Boolean value to each qubit, it can run even quantum programs that consist of thousands of qubits. This simulator is very useful for testing quantum operations that evaluate Boolean functions.
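
A rough sketch of that idea in Python follows; the QDK's actual extension point is the C# IQuantumProcessor interface, so the class and method names here are purely illustrative of the "one Boolean per qubit" representation.

```python
# Minimal sketch of a reversible simulator: each qubit is a single Boolean,
# which is enough for circuits built only from X, CNOT and CCNOT (Toffoli).
# This illustrates the idea, not the QDK's IQuantumProcessor API.

class ReversibleSimulator:
    def __init__(self, num_qubits):
        self.state = [False] * num_qubits  # one Boolean per qubit

    def x(self, target):
        self.state[target] = not self.state[target]

    def cnot(self, control, target):
        if self.state[control]:
            self.x(target)

    def ccnot(self, control1, control2, target):
        if self.state[control1] and self.state[control2]:
            self.x(target)

# Example: compute the AND of qubits 0 and 1 into qubit 2 via a Toffoli gate.
sim = ReversibleSimulator(3)
sim.x(0)
sim.x(1)
sim.ccnot(0, 1, 2)
print(sim.state)  # [True, True, True]
```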


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

This article describes Diligent Engine, a light-weight cross-platform graphics API abstraction layer that is designed to solve these problems. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common C/C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. ... As mentioned earlier, Diligent Engine follows next-gen APIs to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth stencil, rasterizer and blend state descriptions, etc.). This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs as it eliminates pipeline misconfiguration errors.
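
As a loose analogy for why bundling state helps, the Python sketch below models a PSO-style description as one immutable object validated once at creation time, instead of many separate mutable state-setting calls; it is a conceptual illustration only, not Diligent Engine's actual C++ API.

```python
from dataclasses import dataclass

# Conceptual analogy: bundle every piece of pipeline state into a single
# immutable description that is validated up front, rather than setting
# shader, blend, depth and rasterizer state through separate mutable calls.
@dataclass(frozen=True)
class PipelineStateDesc:
    vertex_shader: str
    pixel_shader: str
    input_layout: tuple
    depth_test: bool
    blend_mode: str
    rasterizer_cull: str

    def validate(self):
        if self.blend_mode not in ("opaque", "alpha"):
            raise ValueError(f"unknown blend mode: {self.blend_mode}")
        if self.rasterizer_cull not in ("none", "back", "front"):
            raise ValueError(f"unknown cull mode: {self.rasterizer_cull}")
        return self

pso = PipelineStateDesc(
    vertex_shader="mesh_vs", pixel_shader="mesh_ps",
    input_layout=("position", "normal", "uv"),
    depth_test=True, blend_mode="opaque", rasterizer_cull="back",
).validate()
print(pso)
```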



Quote for the day:


"Different times need different types of leadership." -- Park Geun-hye


Daily Tech Digest - May 14, 2020

10 things you thought you knew about blockchain that are probably wrong

Blockchain and DLT mean the same thing: Not so much. A blockchain is just one type of DLT. There are many such technologies, and not all of them are blockchains. Just like using the term Xerox to describe all photocopies, "blockchain" is being used to refer to all types of DLTs regardless of underlying technology or architecture but, at this point in the technology's evolution, it's a distinction without a difference, Bennett said. This is why the report itself references all DLTs as blockchains. ... Blockchains will eliminate the need for intermediaries in transactions: While they may change the role of these individuals and organizations, DLTs will not eliminate the role they play in facilitating, verifying, or closing transactions. "The only way to cut out third parties is for a consumer or business to interact with a blockchain directly," the report said. "But even in scenarios where ecosystem partners deal directly with each other at the expense of existing third parties, it doesn't mean third parties will no longer be part of the mix. And let's not forget that the world of cryptocurrencies is full of trusted third parties in the shape of wallet providers and cryptocurrency exchanges."



A Hybrid Approach to Database DevOps


Redgate’s state-based deployment approach uses a schema comparison engine to generate a ‘model’ of the source database, from the DDL (state) scripts, and then compares this to the metadata of a target database. It auto-generates a single deployment script that will make the target the same as the source, regardless of the version of the source and target. If the target database is empty, then the auto-generated script will contain the SQL to create all the required objects, in the correct dependency order, in effect migrating a database at version “zero” to the version described in the source. This approach works perfectly well for any development builds where preserving existing data is not required. If the current build becomes a candidate for release, and we continue with the same approach, then the tool would generate a deployment script that will modify the schema of any target database so that it matches the version represented by the release candidate. However, if the development involves making substantial schema alterations, such as to rename tables or columns, or split tables and remodel relationships, then it will be impossible for the automated script to understand how to make them while preserving existing data.


Health Data Breach Update: What Are the Causes?

Security and privacy teams need to be ready to deal with staff departures, security experts say. "We cannot presume to know the reason for the doctor moving to a different organization, but what is often not mentioned in any type of privacy or security training is 'whose information is it, anyway?'" says Susan Lucci, senior privacy and security consultant at tw-Security. "Some providers may assume that once they treat patients, they have rights to all their information. It appears that in this case, the physician downloaded only information that would be beneficial to alert the patient of the physician's new practice, not that it was downloaded for continuity of care. The personally identifiable information belongs to the facility, and they have a duty to protect it. Release of any confidential information must take place through appropriate channels and authorization." As healthcare entities and their vendors continue to deal with the COVID-19 crisis, new circumstances for breaches could emerge, some experts note.


Why Data Quality Is Critical For Digital Transformation

Often in the case of mergers, companies struggle the most with the consequences of poor data. When one company’s Customer Relationship Management (CRM) system is messed up, it affects the entire migration process – where time and effort are supposed to be spent understanding and implementing the new system, they are instead spent sorting data! What exactly constitutes poor data? Your data is considered flawed if it suffers from: human input error such as spelling mistakes, typos, upper- and lower-case issues, or a lack of consistency in naming conventions across the data set; inconsistent data formats across the data set, such as phone numbers with and without a country code or numbers with punctuation; address data that is invalid or incomplete, with missing street names or postcodes; and fake names, addresses or phone numbers. These are considered surface issues that are inevitable and universal – as long as you have humans formulating and inputting the data, errors will occur.
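
A small Python/pandas sketch of how some of these surface issues can be flagged automatically is shown below; the column names and rules are illustrative placeholders that would need tailoring to a real data set.

```python
import pandas as pd

# Illustrative checks for the "surface issues" above, using made-up
# column names; real rules would be tailored to the data set at hand.
customers = pd.DataFrame({
    "name": ["Ann Smith", "ann smith", "Test Test"],
    "phone": ["+44 20 7946 0958", "020-7946-0958", "12345"],
    "postcode": ["SW1A 1AA", None, "N1"],
})

issues = pd.DataFrame(index=customers.index)
# Inconsistent casing in names.
issues["name_not_title_case"] = customers["name"] != customers["name"].str.title()
# Phone numbers without a country code (no leading "+").
issues["phone_missing_country_code"] = ~customers["phone"].str.startswith("+")
# Missing postcodes.
issues["postcode_missing"] = customers["postcode"].isna()

print(issues.sum())  # number of flagged records per rule
```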


Cisco, others, shine a light on VPN split-tunneling

Basically, split tunneling is a feature that lets customers select specific, enterprise-bound traffic to be sent through a corporate VPN tunnel. The rest goes directly to the Internet Service Provider (ISP) without going through the tunnel. Otherwise all traffic, even traffic headed for sites on the internet, would go through the VPN, through enterprise security measures and then back out to the internet. The idea is that the VPN infrastructure has to handle less traffic, so it performs better. Figuring out what traffic can be taken out of the VPN stream can be a challenge that Cisco is trying to address with a relatively recent product. It combines telemetry data gathered by Cisco AnyConnect VPN clients with real-time report generation and dashboard technology from Splunk. Taken together the product is known as Cisco Endpoint Security Analytics (CESA) and is part of the AnyConnect Network Visibility Module (NVM). Cisco says that until July 1, 2020, CESA trial licenses are offered free for 90 days to help IT organizations with surges in remote working.


How to control access to IoT data

Companies also shouldn’t forget to consider security measures that they have in place for other areas of the business, and think twice before relying on settings already applied to devices without checking. “IT teams cannot forget to apply basic IT security policies when it comes to controlling access to IoT generated data,” Simpson-Pirie continued. “The triple A process of access, authentication and authorisation should be applied to every IoT device. It’s imperative that each solution maintains a stringent security framework around it so there is no weak link in the chain. “Security has long been a second thought with IoT, but the stakes are too high in the GDPR era to simply rely on default passwords and settings.” Security is, by no means, the only important aspect to consider when controlling access to IoT data; there are also the matters of visibility, and having a backup plan for when security becomes weakened. For Rob McNutt, CTO at Forescout, the latter can come to fruition by segmenting the network. “Organisations need to have full visibility and control over all devices on their networks, and they need to segment their network appropriately,” he said.


Nvidia & Databricks announce GPU acceleration for Spark 3.0


The GPU acceleration functionality is based on the open source RAPIDS suite of software libraries, themselves built on CUDA-X AI. The acceleration technology, named (logically enough) the RAPIDS Accelerator for Apache Spark, was collaboratively developed by Nvidia and Databricks (the company founded by Spark's creators). It will allow developers to take their Spark code and, without modification, run it on GPUs instead of CPUs. This makes for far faster machine learning model training times, especially if the hardware is based on the new Ampere-generation GPUs, which by themselves offer 5-fold+ faster training and inferencing/scoring times than their Nvidia Volta predecessors. Faster training times allow for greater volumes of training data, which is needed for greater accuracy. But Nvidia says the RAPIDS accelerator also dramatically improves the performance of Spark SQL and DataFrame operations, making the GPU acceleration benefit non-AI workloads as well. This means the same Spark cluster hardware can be used for both data engineering/ETL workloads and machine learning jobs.
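
A sketch of what "without modification" looks like in practice follows; the plugin class and configuration keys are assumptions based on how the RAPIDS Accelerator is typically enabled, and the storage paths are placeholders, so check the current documentation before relying on them.

```python
from pyspark.sql import SparkSession

# Sketch: enable GPU acceleration for an existing Spark job. The plugin class
# and config keys below are assumptions to verify against current RAPIDS
# Accelerator releases; the S3 paths are illustrative only.
spark = (
    SparkSession.builder
    .appName("gpu-accelerated-etl")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    .getOrCreate()
)

# The DataFrame code itself is unchanged; supported operations are planned
# onto the GPU by the plugin.
df = spark.read.parquet("s3://example-bucket/events/")
df.groupBy("user_id").count().write.parquet("s3://example-bucket/counts/")
```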


Nation state APT groups prefer old, unpatched vulnerabilities


“The recent diffusion of smart working increased enormously the adoption of SaaS solutions for office productivity, customer service, financial administration, and other processes. This urgency also increased as well the exposure of misconfigured or too permissive rights. All this has been leveraged by attackers to their advantage,” he said. “A solid vulnerability management, detection, and response workflow that included the ability to validate cloud security posture and compliance with CIS benchmarks – while shortening the Time To Remediate (TTR) would have been a great help for security teams,” said Rottigni. “The mentioned vulnerabilities made their ways in these sad hit parades as the most exploited ones: a clear indicator of the huge room for improvement that organisations still have.” “They can achieve this with properly orchestrated security programs, leveraging SaaS solutions that have the fastest adoption path, the shortest learning curve and the highest success rate in risk mitigation due to their pervasiveness across the newest and widest digital landscapes.”


Evolving IT into a Remote Workforce

When remote work initiatives first began rolling out 20 years ago, I recall a telecom sales manager telling me that six months after he'd deployed his sales force to the field where they all worked out of home offices, he discovered a new problem: He was losing cohesion in his salesforce. “Employees wanted to come in for monthly meetings,” he said. “It was important from a team morale standpoint for them to interact with each other, and for all of us to remind each other what the overall corporate goals and sales targets were.” The solution at that time was to create monthly on-prem staff meetings where everyone got together. A similar phenomenon could affect IT workforces that take up residence in home offices to perform remote work. There could be breakdowns in IT project cohesion without the benefit of on-prem “water cooler” conversations and meetings that foster lively information exchanges. In other cases, there could be some employees who don't perform as well in a home office as they would in the company office. IT managers are likely to find that their decisions on what IT can be done remotely will be based on not only what they could outsource, but also whom.


AI: A Remedy for Human Error?


Humans are naturally prone to making mistakes. Such errors are increasingly impactful in the workplace, but human error in the realm of cybersecurity can have particularly devastating and long-lasting effects. As the digital world becomes more complex, it becomes much tougher to navigate – and thus, more unfair to blame humans for the errors they make. Employees should be given as much help and support as possible. But employees are not often provided with the appropriate security solutions, so they resort to well-intentioned workarounds in order to keep pace and get the job done. As data continues to flow faster and more freely than ever before, it becomes more tempting to just upload that document from your personal laptop, or click on that link, or share that info to your personal email. Take, for instance, one of the most common security problems: phishing emails. An employee might follow instructions in a phishing email not only because it looks authentic, but because it conveys some urgency. Employee training can help reduce the likelihood of error, but solving the technological shortcoming is more effective: if a phishing email is blocked from delivery in the first place, we can help mitigate the human error factor.



Quote for the day:


"Leadership is intangible, and therefore no weapon ever designed can replace it." -- Omar N. Bradley


Daily Tech Digest - May 13, 2020

What is Chaos Monkey? Chaos engineering explained

The principles of chaos engineering have been formally collated by some of the original authors of Chaos Monkey, defining the practice as: “The discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.” In practice this takes the form of a four-step process: defining the “steady state” of a system to set a baseline for normal behavior; hypothesizing that this steady state will continue in both the control group and the experimental group; introducing variables that reflect real-world events, like servers that crash, hard drives that malfunction, or network connections that are severed; and trying to disprove the hypothesis by looking for a difference between the control group and the experimental group. If the steady state is difficult to disrupt, you have a robust system; if there is a weakness then you have something to go and fix. “In the five years since ‘The Principles’ was published, we have seen chaos engineering evolve to meet new challenges in new industries,” Jones and Rosenthal observe.
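
The loop can be illustrated with a toy Python experiment like the one below, which compares a measured steady state against the same measurement with a simulated fault injected; real chaos tooling runs the equivalent experiment against live infrastructure, and the numbers here are arbitrary.

```python
import random
import statistics

# Toy illustration of the four-step loop: measure a steady state, inject a
# fault into an experimental group, and compare the two.
def request_latency_ms(fault_injected=False):
    base = random.gauss(120, 15)            # "normal" service latency
    if fault_injected and random.random() < 0.2:
        base += 500                         # simulate a degraded dependency
    return base

control = [request_latency_ms() for _ in range(1000)]
experiment = [request_latency_ms(fault_injected=True) for _ in range(1000)]

steady_state = statistics.mean(control)
observed = statistics.mean(experiment)
print(f"steady state: {steady_state:.0f} ms, under fault: {observed:.0f} ms")

# If the difference stays within an agreed tolerance the hypothesis holds;
# otherwise there is a concrete weakness to go and fix.
tolerance_ms = 25
print("hypothesis holds" if observed - steady_state <= tolerance_ms else "weakness found")
```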



Companies across EMEA lack tools to unlock data opportunity

The lack of relevant skills and knowledge within companies is another major concern. Over a third (36%) of those surveyed said “not having the skills in place to manage the explosion of data” was one of their biggest concerns. Meanwhile the biggest fear among respondents is that “our employees won’t align with data policies”, cited by 28%. To improve their skills and knowledge base around data, the report recommends that organisations focus on upskilling existing employees with deep sector knowledge, hiring a chief data officer with specific responsibility to organise and extract value from data, and creating data governance groups that include decision-makers from across all key business functions, ensuring their needs are reflected in data strategy and management. “A lot of the most precious knowledge is already within a company,” says report contributor and tech philosopher Tom Chatfield. “Skilling up your employees so they can have conversations with computer scientists and use APIs is often much more valuable than asking a computer science PhD to quickly gain an understanding of a new sector.”


Microsoft: Bosque is a new programming language built for AI in the cloud


Bosque, according to Marron, is about designing an IR that overcomes challenges to automated program reasoning. Marron says Bosque's purpose is to "connect a constrained core-language IR to a developer friendly and high-productivity programming experience" in a way that allows developers to take advantage of cloud programming stacks – which include languages like TypeScript – without losing the ability to analyze code. He'll also address the question: "Can we go beyond simply matching the productivity of mainstream languages in this space and improve on the state of the art beyond improved tooling with novel language features?" Marron also reckons that Bosque could be particularly adept at using hardware accelerators such as field-programmable gate arrays (FPGAs), with which AWS, Google, and Microsoft have been loading up their clouds to support machine-learning workloads. While Bosque currently lacks I/O and runtime utilities, the language has attracted interest from banking giant Morgan Stanley, which is exploring Bosque for Morphir.


In A Time Of Crisis, A Global Lockdown Needs A Digital Unlocking

Even before the current crisis, we were seeing huge investments in new streaming services. But what’s happening now, in response to lockdowns around the world, will change the game in many areas of activity. The current transformation of attitudes, processes and systems will continue to echo through the post-Corona era. People are taking the time to keep in touch with their loved ones on a more regular basis, and to value the time they have together despite the distance. Many employers are looking into how remote working benefits business continuity and helps their employees master challenges. But beyond this, decision makers are also beginning to recognize the long-term benefits of a more profound digital transformation. Companies are taking a long, hard look at how they manage their offices, how staff interact, how teams collaborate, what business travel is actually essential, whether meetings can be reconceived to be more productive. They are becoming aware of how the move online can unlock the potential to save money and increase revenues.


What crashing COBOL systems reveal about applications maintenance


COBOL is also continually updated -- most recently in September 2019 -- and it currently handles 95% of the world’s ATM swipes with no problem. The real problem? These COBOL applications and the COBOL developer experience have been allowed to languish -- and what we’re seeing right now is the direct result of some states’ failure to properly update and maintain critical COBOL code. When this code was needed the most, this failure became evident -- at the worst possible time. The hard lesson learned is that tech systems aren’t something that can be set up once and never looked after again. This is particularly true in the case of so-called legacy systems. It’s not that these systems aren’t up for the job -- quite the contrary -- it’s just that they can’t be expected to keep up with ballooning transaction volumes on the front-end, with absolutely no care and feeding on the back-end. COBOL developers cannot keep these systems up-to-date if they are not provided with a modern, familiar developer experience that enables them to be comfortable coding on the mainframe. The private sector, unlike the government sector, acknowledges the increasing demand.


Three Years After WannaCry, Ransomware Accelerating While Patching Still Problematic

If there is a lesson from the WannaCry incident, it's this: Companies that use outdated systems and do not rigorously patch those systems are at risk, not just for data breaches — which firms have historically shrugged off — but for attacks by operations-disrupting ransomware. Unfortunately, many companies continue to ignore those lessons and are still using out-of-date software that is vulnerable to destructive attacks, said Jacob Noffke, senior principal cyber engineer at Raytheon Intelligence & Space, in a statement sent to Dark Reading.  "Many have upgraded older operating systems, aggressively patched their systems, better isolated unpatched systems behind firewalls, and have sound backup solutions to minimize the impact and chance that ransomware will wreak havoc on their networks in the future," he said. "But, unfortunately, not all organizations have taken note — and as ransomware attacks continue to evolve, those with weaker defenses will be a prime target for cybercriminals looking to capitalize on WannaCry-inspired attacks."


The cybersecurity sector needs more women: Here's why

Gone are the days when technical professions were seen as a man's business. More and more women are interested in the field of technology - and are increasingly moving into leadership positions. The “Women in Cybersecurity” study recently published by the SANS Institute deals with precisely this topic, namely the current situation of female decision-makers in cybersecurity. In the survey, women in management roles as IT security experts report on their work and the professional career that led them into this field. One of the results of the SANS report is that there are a number of ways to get into this profession. For 41 percent of those surveyed, entering a senior position in cybersecurity was primarily a question of whether they were in the right place at the right time. If you want to take your career into your own hands, you should focus on certificates: for 34 percent of the participants in the study, the certificates they acquired were relevant to the advancement of their career. Mentoring can also play an important role in professional development.


Can Pandemic-Induced Job Uncertainty Stimulate Automation?

The pandemic-induced job uncertainty has markedly different effects from a reduction in the level of labor productivity. Both types of shocks generate a recession, but they operate through different channels. The uncertainty shock reduces aggregate demand and therefore pushes up unemployment and lowers inflation. In contrast, the first-moment shock reduces potential output and therefore raises both unemployment and inflation. More importantly, the two types of shocks have different impacts on automation. While uncertainty about worker productivity induces firms to shift technology toward automation and thus increases robot adoption, a negative labor productivity shock generates a large recession that reduces the incentive to automate. ... Facing increased job uncertainty, firms would want to shift the production technology from using workers to using robots. At the same time, the uncertainty shock reduces aggregate demand, and the recessionary effects make it less attractive to adopt robots.


Coronavirus Diary: the impact on senior level recruitment in the technology industry

As you’d expect, HR is busier than ever. We’ve seen a number of chief people officer roles push ahead and almost every tech business that needs a HR leader needs help realigning pay structures, implementing furlough schemes and, in some cases, hiring and onboarding new staff remotely. Whilst some of this might sound negative, it’s not. A lot of tech businesses are making the smart move and securing leaders who can plan workforces for every eventuality, including future growth and restructuring, or phases of both. Like HR, we had a number of CTO or VP (software) engineering roles in play that pressed ahead in spite of coronavirus. The resilience and scalability of both B2B and consumer platforms are being tested heavily right now, and the tech firms who own them want to ensure they can handle the pressure, and in some circumstances, scale-up in a condensed timeframe. This responsibility falls at the feet of the CTO. These are incredibly busy people right now.


Digital transformation: It's time to invent the future we want

In any disruptive event, it's natural to batten down the hatches, control costs, and explore ways to cut costs. And unfortunately, we see this in every level from employees, to assets, to technology. The CFO is essentially in control right now, and rightly so. The thing, though, is that before the pandemic, I used to talk about it in a loving way, of course, this, I called it out-of-touch-ness, which was essentially, executives in many ways focused on shareholder value, on making decisions based on the matrix, or spreadsheets, or visualized data. And we sort of lost the humanity in a lot of this. When you're driven by numbers, you work toward those numbers. I actually believe that if you put the CFO in control, the CFO's going to make decisions, of course, based on costs and numbers, like they should. But this isn't necessarily just a time to say, "OK. We're going to hold down the fort, cut costs. And then when this all clears, we're going to come back out stronger than ever," because we're not going back to normal. And normal was actually part of the problem to begin with.



Quote for the day:


"Leadership is an opportunity to serve. It is not a trumpet call to self-importance." - J. Donald Walters


Daily Tech Digest - May 12, 2020

Is it time to believe the blockchain hype?

While much criticised at the outset and subsequently watered down to appease regulators, Libra has also triggered discussion around central bank digital currencies (CBDCs), with almost every major central bank announcing their intention to explore the possibilities of these. Among the numerous benefits to CBDCs, the most oft-repeated is addressing the decline in the use of cash, something which has accelerated this year with more shopping taking place online and bricks-and-mortar retailers ceasing to accept paper money. According to a recent report by campaign group Positive Money, the disappearance of cash would lead to an effective privatisation of money, with commercial banks holding an oligopoly over digital money and payment systems. Such a situation would also prove damaging for the unbanked population, which still totals an estimated 1.7 billion people worldwide. While these people do not have access to the current financial systems due to not being able to prove their identities, they could use digital currencies provided they have a mobile phone and an internet connection.



Banks failing to protect customers from coronavirus fraud


A paltry 13 out of the 64 banks accredited by the UK government for its Coronavirus Business Interruption Loan Scheme (CBILS) have bothered to implement the strictest level of domain-based message authentication, reporting and conformance – or Dmarc – protection to stop cyber criminals from spoofing their identity to use in phishing attacks. This means that 80% of accredited banks are unable to say they are proactively protecting their customers from fraudulent emails, and 61% have no published Dmarc record whatsoever, according to Proofpoint, a cloud security and compliance specialist. Domain spoofing to pose as a government body or other respected institution, such as a provider of financial services, is a highly popular method used by cyber criminals to compromise their targets. Using this technique, they can make an illegitimate email appear as if it is coming from a supposedly completely legitimate email address, which neatly gets around one of the most obvious ways people have of spotting a phishing email – the address does not match the institution in any way.
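
For illustration, a quick way to see whether a domain publishes a Dmarc policy at all is a DNS TXT lookup; the Python sketch below assumes the dnspython package is available and is only a rough check, not a full Dmarc validator.

```python
import dns.resolver  # dnspython; an assumed dependency for this sketch

def dmarc_record(domain):
    """Return the domain's published DMARC TXT record, or None."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

# A missing record, or a policy of p=none, means receiving mail servers are
# not being told to reject mail that spoofs the domain.
print(dmarc_record("example.com"))
```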


Flattening The Curve On Cybersecurity Risk After COVID-19

This is an opportunity but also a big risk for them. Many of them know their digital business system is vital to helping them navigate this change. But periods of disruption, whether driven by good or bad circumstances, present opportunities for hackers. So that cybersecurity risk gap I talked about earlier between threats and defensibility isn’t going to close naturally; that curve isn’t flattening. New cybersecurity risks are going to continue to emerge, and defensive capabilities have to continue to try to stay ahead. A common question that a lot of board members ask, is “Are we spending the right amount on cybersecurity?” That’s the wrong question. The right question is, “What do we need to protect, what’s the value of what we are trying to protect, and how secure is it for what we’re spending?” That’s their challenge heading into what could be massive waves of systemic change. The business value that their digital business systems drive is only increasing, and the threats to that value are only going to go up.


Architecture Decision for Choosing Right Digital Integration Patterns – API vs. Messaging vs. Event

A direct Application Programming Interface (API) allows two heterogeneous applications to talk to each other. For example, each time we use an app on our mobile devices, the app is likely making several API calls to various digital services. Direct APIs can be designed to be Blocking (Synchronous) or Non-Blocking (Asynchronous). Of these, Non-Blocking APIs are preferred to ensure resources are not blocked while the consumer is waiting for a response from the provider. Non-blocking APIs also help create an independently scalable integration model between API consumers and the API provider ... A Message is fundamentally an asynchronous mode of communication between two applications — it is an indirect invocation, such that the two applications do not directly connect to each other. Thus, the Messaging technique decouples the consumer and provider, and removes the need for the provider to be available at the exact same point in time as the consumer. It also addresses the scalability limitations of the provider.
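
The contrast can be sketched in a few lines of Python: a direct call blocks the consumer until the provider responds, while a message dropped onto a queue is picked up whenever the provider is ready. The in-process queue here is only a stand-in for a real message broker.

```python
import queue
import threading
import time

# Toy contrast between the two styles: the consumer and provider never call
# each other directly and need not be available at the same moment.

def call_api_blocking(payload):
    time.sleep(0.1)                 # the caller waits while the provider works
    return {"status": "ok", "echo": payload}

broker = queue.Queue()

def provider_worker():
    while True:
        message = broker.get()      # the provider picks up work when it is ready
        if message is None:         # shutdown signal
            break
        print("processed", message)

worker = threading.Thread(target=provider_worker)
worker.start()

print(call_api_blocking({"order": 1}))  # direct, blocking (synchronous) call
broker.put({"order": 2})                # fire-and-forget message
broker.put(None)
worker.join()
```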


Machine learning algorithms explained

Machine learning algorithms train on data to find the best set of weights for each independent variable that affects the predicted value or class. The algorithms themselves have variables, called hyperparameters. They’re called hyperparameters, as opposed to parameters, because they control the operation of the algorithm rather than the weights being determined. The most important hyperparameter is often the learning rate, which determines the step size used when finding the next set of weights to try when optimizing. If the learning rate is too high, the gradient descent may quickly converge on a plateau or suboptimal point. If the learning rate is too low, the gradient descent may stall and never completely converge. Many other common hyperparameters depend on the algorithms used. Most algorithms have stopping parameters, such as the maximum number of epochs, or the maximum time to run, or the minimum improvement from epoch to epoch. Specific algorithms have hyperparameters that control the shape of their search. For example, a Random Forest Classifier has hyperparameters for minimum samples per leaf, max depth, minimum samples at a split, minimum weight fraction for a leaf, and about 8 more.
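
For a concrete example, the scikit-learn sketch below sets several of the Random Forest hyperparameters mentioned above; the chosen values are arbitrary and would normally be tuned, for instance with a grid or randomized search.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hyperparameters are set before training and control how the algorithm
# searches for splits; they are not learned from the data like the weights.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = RandomForestClassifier(
    n_estimators=200,
    max_depth=8,                   # maximum depth of each tree
    min_samples_split=10,          # minimum samples required to split a node
    min_samples_leaf=4,            # minimum samples required at a leaf
    min_weight_fraction_leaf=0.0,  # minimum weighted fraction of samples at a leaf
    random_state=0,
)
model.fit(X, y)
print(model.score(X, y))
```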


COVID-19 Impact on the Future of Fintech

Besides their age, scalability and financial condition, the outlook of many fintech organizations will also be driven by the product category they are in. This is especially true in the near term, when the impact of the pandemic on consumer behavior is expected to be the greatest. According to BCG, negative impact of COVID-19 will be more severe for those fintechs in international payments, unsecured and secured consumer lending, small business lending and for those where risks may be highest. It is believed that those fintech firms focused on B2B banking are less vulnerable as a group. ... As could be expected, technology providers were some of the early winners when COVID-19 hit as traditional banking organizations scurried to deploy digital solutions to meet consumer demand. Many of the sales were initiatives already agreed to but not yet implemented until market conditions required immediate action. It will be interesting to see if investment in technology and digital solutions continues as traditional financial institutions are forced to reduce costs.


Simplicity and Security: What Commercial Providers Offer for the Service Mesh


Whatever the maturity level, one of the advantages of a commercial offering is support. There’s no easy way to get advice or troubleshooting from purely open-source service meshes. For some organizations that doesn’t matter, but for others, the knowledge that there’s someone to call in case of a problem is critical — and might even be baked into corporate governance policies. One of the benefits of using a sidecar proxy service mesh with Kubernetes, Jenkins said, is that it allows a smaller central platform team to manage a large infrastructure, and it reduces the burden on application developers to manage anything related to infrastructure management. Using a commercial service mesh provider lets organizations even further reduce the need to manage infrastructure internally, he says. Austin agreed that one of the things that makes a service mesh “enterprise-grade” is increased operational simplicity, making it as simple as possible for small platforms to manage huge application suites. For enterprises, that translates to the ability to spend more engineering resources on feature development and creating business value and less on infrastructure management.


Sacrificing Security for Speed: 5 Mistakes Businesses Make in Application Development

Data tends to be the most important and valuable aspect of modern web applications. Poor application design and architecture leads to data and security breaches. Application development teams generally assume that by providing the right authentication and authorization measures to the application, data will be protected. This is a misconception. The right measures to provide data security involve focusing on data integrity, fine-grained data access and encrypting data at rest as well as in motion. In addition, data security needs to be looked at holistically, from the time the request is made to the time the response is sent back, across all layers of the application runtime. Today’s web applications are highly sophisticated and built with a big focus on simple user experience combined with high scalability. This combination can be challenging for application development teams from a security perspective. Most development teams focus only on silos when securing the application.


Google vs. Oracle: The next chapter


So, what next? Gesmer speculates: "We will have to see what the parties have to say on this issue when they file their briefs in August. However, a decision based on a narrow procedural ground such as the standard of review is likely to be attractive to the Supreme Court. It allows it to avoid the mystifying complexities of copyright law as applied to computer software technology. It allows the Court to avoid revisiting the law of copyright fair use, a doctrine the Court has not addressed in-depth in the 26 years since it decided Campbell v. Acuff-Rose Music, Inc. It enables it to decide the case on a narrow standard-of-review issue and hold that a jury verdict on fair use should not be reviewed de novo on appeal, at least where the jury has issued a general verdict." In other words, Oracle will lose and Google will win… for now. We still won't have an answer on the legal question that programmers want to know: What extent, if any, does copyright cover APIs? For an answer to that my friends, we may have to await the results of yet another Oracle vs. Google lawsuit. It may be wiser for Oracle to finally leave this issue alone. As Charles Duan, the director of Technology and Innovation Policy at the R Street Institute, a Washington DC non-profit think tank and Google ally, recently argued: Oracle itself is guilty of copying Amazon's S3 APIs.


2020 State of Testing Report

It is very difficult for us to define exactly what we "see" in the report. The best description for it might be a "feeling" of change, maybe even of evolution. We are seeing many indications reinforcing the increasing collaboration of test and dev, showing how the lines between our teams are getting blurrier with time. We are also seeing how the responsibility of testers is expanding, and the additional tasks that are being required from us in different areas of the team's tasks and challenges. ... I feel it makes testers critically think about the automation strategy they can have that best suits their context and make it reliable and meaningful. The flip side of it that I see sometimes is that if their automation strategy is not smart enough (or say if their CI/CD infrastructure is lame) testers end up just writing more automation and spending enormous time in just maintaining it for the sake of keeping the pipeline green. These efforts hardly contribute to the user-facing quality of the product and add no meaningful value.



Quote for the day:


"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley


Daily Tech Digest - May 11, 2020

How to choose a cloud IoT platform

“The internet” is not an endpoint, of course, but an interconnected collection of networks that transmit data. For IoT, the remote endpoints are often located in a cloud server rather than in a single server inside a private data center. Deploying in a cloud isn’t absolutely necessary if all you’re doing is measuring soil moisture at a bunch of locations, but it can be very useful. Suppose that the sensors measure not only soil moisture, but also soil temperature, air temperature, and air humidity. Suppose that the server takes data from thousands of sensors and also reads a forecast feed from the weather service. Running the server in a cloud allows you to pipe all that data into cloud storage and use it to drive a machine learning prediction for the optimum water flow to use. That model could be as sophisticated and scalable as you want. In addition, running in the cloud offers economies. If the sensor reports come in once every hour, the server doesn’t need to be active for the rest of the hour. In a “serverless” cloud configuration, the incoming data will cause a function to spin up to store the data, and then release its resources. Another function will activate after a delay to aggregate and process the new data, and change the irrigation water flow set point as needed.
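
The serverless shape described here can be sketched as two small handlers, one triggered per sensor report and one on a schedule; the event fields, the in-memory store and the water-flow policy below are all illustrative placeholders rather than any particular cloud provider's API.

```python
# Sketch of the serverless pattern: a short-lived function that stores an
# incoming sensor reading, and a second one that periodically aggregates
# recent readings and adjusts the irrigation set point.

READINGS = []  # stand-in for cloud storage

def ingest_handler(event):
    """Triggered per sensor report; stores the reading and returns."""
    READINGS.append({
        "sensor_id": event["sensor_id"],
        "soil_moisture": event["soil_moisture"],
        "soil_temp": event["soil_temp"],
        "air_temp": event["air_temp"],
        "air_humidity": event["air_humidity"],
    })
    return {"stored": True}

def aggregate_handler(_event=None):
    """Triggered on a schedule; recomputes the target water flow."""
    if not READINGS:
        return {"flow_set_point": 0.0}
    avg_moisture = sum(r["soil_moisture"] for r in READINGS) / len(READINGS)
    # Placeholder policy; in practice this is where a trained model would go.
    flow = max(0.0, 1.0 - avg_moisture)
    return {"flow_set_point": round(flow, 2)}

ingest_handler({"sensor_id": "s1", "soil_moisture": 0.35,
                "soil_temp": 18.0, "air_temp": 24.0, "air_humidity": 0.5})
print(aggregate_handler())
```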



How to create a ransomware incident response plan


Companies should test an incident response plan -- ideally, before an incident, as well as on a regular basis -- to ensure it accomplishes its intended results. Using a tabletop exercise focused on testing the response to a ransomware incident, participants can use existing tools to test their effectiveness and determine if additional tools are necessary. Companies may want to have annual, quarterly or even monthly exercises to test the plan and prepare the business. These tests should involve all the relevant parties, including IT staff, management, the communications team, and the public relations (PR) and legal teams. Enterprises should also document which of their security tools have ransomware prevention, blocking or recovery functionality. Additional tests should be conducted to verify simulated systems infected with ransomware can be restored using a backup in a known-good state. While some systems save only the most recent version of a file or a limited number of versions, testing to restore the data, system or access to all critical systems is a good idea.


Report: Chinese-linked hacking group has been infiltrating APAC governments for years

Check Point has found three versions of the attack— infected RTF files, archive files containing a malicious DLL, and a direct executable loader. All three worm their way into a computer's startup folder, download additional malware from a command and control server, and go to work harvesting information. The report concludes that Naikon APT has been anything but inactive in the five years since it was discovered. "By utilizing new server infrastructure, ever-changing loader variants, in-memory fileless loading, as well as a new backdoor — the Naikon APT group was able to prevent analysts from tracing their activity back to them," Check Point said in its report. While the attack may not appear to be targeting governments outside the APAC region, examples like these should serve as warnings to other governments and private organizations worried about cybersecurity threats.  One of the reasons Naikon APT has been able to spread so far is because it leverages stolen email addresses to make senders seem legitimate. Every organization, no matter the size, should have good email filters in place, and should train employees to recognize the signs of phishing and other email-based attacks.


Patterns for Managing Source Code Branches


With distributed version control systems like git, this means we also get additional branches whenever we further clone a repository. If Scarlett clones her local repository to put on her laptop for her train home, she's created a third master branch. The same effect occurs with forking in github - each forked repository has its own extra set of branches. This terminological confusion gets worse when we run into different version control systems as they all have their own definitions of what constitutes a branch. A branch in Mercurial is quite different to a branch in git, which is closer to Mercurial's bookmark. Mercurial can also branch with unnamed heads and Mercurial folks often branch by cloning repositories. All of this terminological confusion leads some to avoid the term. A more generic term that's useful here is codeline. I define a codeline as a particular sequence of versions of the code base. It can end in a tag, be a branch, or be lost in git's reflog. You'll notice an intense similarity between my definitions of branch and codeline. Codeline is in many ways the more useful term, and I do use it, but it's not as widely used in practice.


Image and object recognition
The recognition pattern is notable in that it was primarily the attempts to solve image recognition challenges that brought about heightened interest in deep learning approaches to AI, and helped to kick off this latest wave of AI investment and interest. The recognition pattern, however, is broader than just image recognition. In fact, we can use machine learning to recognize and understand images, sound, handwriting, items, faces, and gestures. The objective of this pattern is to have machines recognize and understand unstructured data. This pattern of AI is such a huge component of AI solutions because of its wide variety of applications. The difference between structured and unstructured data is that structured data is already labelled and easy to interpret. However, unstructured data is where most entities struggle. Up to 90% of an organization's data is unstructured data. It becomes necessary for businesses to be able to understand and interpret this data and that's where AI steps in. Whereas we can use existing query technology and informatics systems to gather analytic value from structured data, it is almost impossible to use those approaches with unstructured data.


Al Baraka Bank Sudan transforms into an intelligent bank with iMAL*BI


The solution comprises comprehensive data marts equipped with standard facts and dimensions as well as progressive measures that empower the bank’s workforce to build ad-hoc dashboards, in-memory, to portray graphical representations of their data queries. It is rich in out-of-the-box dashboards, covering financial accounting, retail banking, corporate banking, investments, trade finance and limits, in addition to C-level executives’ analytics boasting a base set of KPIs, dashboards and advanced analytics which are essential to each executive, with highly visual, interactive and collaborative dashboards backed by centralised metadata security. This strategic platform empowers bankers to make smarter, faster, and more effective decisions, improving operational efficiency. It also enables business agility while driving innovation, competitive differentiation, and profitable growth. The implementation covered the establishment of a comprehensive end-to-end data warehousing solution, an automated ETL process and a progressive data model.


The new cybersecurity resilience


While security teams and experts might have differing metrics for gauging resiliency, they tend to agree on the overarching need and many of the best practices to achieve it. “Resiliency is viewed by some to be the latest buzzword replacing continuity or recovery, but to me it really means placing the appropriate people, processes, and procedures in place to ensure you’re limiting the need for enacting a continuity or recovery plan,” says Shared Assessments Vice President and CISO Tom Garrubba. Resilient organizations share numerous traits. According to Accenture they place a premium on collaboration – 79 percent say collaboration will be key to battling cyberattacks and 57 percent collaborate with partners to test resilience. “By adopting a realistic, broad-based, collaborative approach to cybersecurity and resilience, government departments, regulators, senior business managers and information security professionals will be better able to understand the true nature of cyber threats and respond quickly, and appropriately,” says Steve Durbin, managing director at the Information Security Forum (ISF).


Microsoft and Intel project converts malware into images before analyzing it

The Intel and Microsoft team said that resizing the raw image did not "negatively impact the classification result," and this was a necessary step so that the computational resources would not have to work with images consisting of billions of pixels, which would most likely slow down processing. The resized images were then fed into a pre-trained deep neural network (DNN) that scanned the image (a 2D representation of the malware strain) and classified it as clean or infected. Microsoft says it provided a sample of 2.2 million infected PE (Portable Executable) file hashes to serve as a base for the research. Researchers used 60% of the known malware samples to train the original DNN algorithm, 20% of the files to validate the DNN, and the other 20% for the actual testing process. The research team said STAMINA achieved an accuracy of 99.07% in identifying and classifying malware samples, with a false-positive rate of 2.58%. "The results certainly encourage the use of deep transfer learning for the purpose of malware classification," said Jugal Parikh and Marc Marino.
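
The bytes-to-image step can be sketched in a few lines of NumPy, as below; the file name, image width and output size are arbitrary choices for illustration, and the classification itself would be done by a separately trained network.

```python
import numpy as np

# Sketch of the static-analysis idea described above: treat a file's raw
# bytes as pixel intensities, reshape them into a 2D image, and resize it so
# a pretrained image network can classify it.
def bytes_to_image(path, width=256):
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[:len(data)] = data
    return padded.reshape(height, width)

def resize_nearest(img, out_h=224, out_w=224):
    # Crude nearest-neighbour resize to keep the example dependency-free.
    rows = np.arange(out_h) * img.shape[0] // out_h
    cols = np.arange(out_w) * img.shape[1] // out_w
    return img[rows][:, cols]

# "sample.exe" is a hypothetical file path used only for illustration.
image = resize_nearest(bytes_to_image("sample.exe"))
# `image` would then be fed to a CNN fine-tuned (via transfer learning) to
# output clean vs. infected.
print(image.shape)  # (224, 224)
```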


Prepare for the future of distributed cloud computing

Enterprises need to support edge-based computing systems, including IoT and other specialized processing that has to occur near the data source. This means that while we spent the past several years centralizing processing and storage in public clouds, now we’re finding reasons to place some cloud-connected applications and data sources close to where they can be most effective, all while still maintaining tight coupling with a public cloud provider. Companies also need to incorporate traditional systems in public clouds without physical migration. If you consider the role of connected systems, such as AWS Outposts or Microsoft’s Azure Stack, these are really efforts to get enterprises to move to public cloud platforms without actually running physically in a public cloud. Other approaches include containers and Kubernetes running both locally and in the cloud, leveraging newer technologies such as Kubernetes federation. The trick is that most enterprises are ill-equipped to deal with the distribution of cloud services, let alone move a critical mass of applications and data to the cloud.


Source Generators Will Enable Compile-time Metaprogramming in C# 9

Loosely inspired by F# type providers, C# source generators respond to the same aim of enabling metaprogramming but in a completely different way. Indeed, while F# type providers emit types, properties, and methods in-memory, source generators emit C# code back into the compilation process. Source generators cannot modify existing code, only add new code to the compilation. Another limitation of source generators is they cannot be applied to code emitted by other source generators. This ensures each code generator will see the same compilation input regardless of the order of their application. Interestingly, source generators are not limited to inspecting source code and its associated metadata, but they may access additional files. Specifically, source generators are not designed to be used as code rewriting tools, such as optimizers or code injectors, nor are they meant to be used to create new language features, although this would be technically feasible to some limited extent.
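As a rough illustration of the model described above, here is a minimal source generator that only adds new code to the compilation. The generator name and the emitted Greeter class are invented for the example; the [Generator] attribute, the ISourceGenerator interface, and GeneratorExecutionContext.AddSource are part of the Roslyn API this feature builds on.

```csharp
using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

// A deliberately tiny generator: it contributes a brand-new class to the
// compilation. It cannot rewrite code that already exists, and its output
// is not visible to other source generators.
[Generator]
public class HelloWorldGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // No syntax receivers or additional-file lookups needed for this sketch.
    }

    public void Execute(GeneratorExecutionContext context)
    {
        const string source = @"
namespace Generated
{
    public static class Greeter
    {
        public static string Hello() => ""Hello from a source generator"";
    }
}";
        // AddSource contributes a new virtual source file to the compilation.
        context.AddSource("Greeter.g.cs", SourceText.From(source, Encoding.UTF8));
    }
}
```

Because the only way this generator can affect the build is by calling AddSource, it can add the Generated.Greeter class but can never edit or replace files that already exist in the project, which is exactly the limitation the article points out.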



Quote for the day:


"You’ve got to get up every morning with determination if you’re going to go to bed with satisfaction." -- George Lorimer


Daily Tech Digest - May 10, 2020

Opinion: Responsible AI starts with higher education


“This new algorithm will need a lot of pictures of people. What if we use a morgue so we don’t have to worry about consent?” Although this is a fictitious example, modern-day tech workers often face similar questions. Why? Because the rise of artificial intelligence based on machine learning has created a new class of sociotechnical challenges. Now is the time for industry and universities to acknowledge these new challenges and step up to meet them. Since the beginning of the technology industry, educational institutions, legislatures, companies, and developers have worked to improve the quality of products and services. The resulting curricula, laws, corporate policies, standards, and development approaches have provided frameworks for engineers and product managers. Emerging technologies require the development of new frameworks. In the early 2000s, industry had to get serious about computer security. Today, we have a new challenge: How do you turn the goal of responsible AI into code?



How to help data scientists adapt to business culture


Businesses themselves don't understand what the data science discipline is, the backgrounds data scientists come from, or what it's going to take to acculturate these highly trained data engineers to how a business operates and what it needs. Many data scientists have lived their lives in environments funded by university grants that enabled them to pursue highly theoretical projects that are all about the quest for answers, but not necessarily about finding definitive answers to why customers seem to be suddenly favoring another brand, or why your manufactured products are suddenly experiencing more failures. Companies also struggle with integrating data scientists with their existing business and IT workforces. Often, existing business units and IT have little in common with data scientists, and there are no existing workflows that can help them learn how to work together optimally. Another issue is that businesses aren't always sure what (and when) to expect in the way of analytics and results from their big data projects. Successful use cases exist in most industries, but companies still don't have a good feel for knowing when a data science or analytics project is moving forward and when it is stagnating.



20 ways banks can get AI right

Try to create ‘segments of one’ by collecting the volume and variety of data that can empower you to pursue automated hyper-personalisation. When clients feel that your service is sensitive and responsive to their individual preferences, they will be happy to share more and more information with you. Think about a situation where you give the client the chance to donate money to charity by ‘rounding prices up’. For example, the client purchases a coffee for £1.79 and you offer to put the remaining £0.21 into a charity pot; once the client has collected £20, the pot is given to a charity, which you, as the bank, match with an equal donation. Let’s say the client is a paediatrician. In this case, the three potential charities the client can choose from should be about health, children, and medical research. Another client is a music teacher, in which case the three choices can relate to classical music, early talent, and education. These elements of hyper-personalisation have to be fully automated and ideally be propelled by some level of AI.
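The round-up arithmetic in the example above is simple enough to sketch. The £20 threshold, the matching rule, and the class below are hypothetical illustrations of the flow described in the article, not a description of any particular bank's system.

```csharp
using System;

// Hypothetical round-up flow: spare change from each purchase accumulates
// in a charity pot; once the pot reaches £20 the bank matches it and the
// combined amount goes to a charity matched to the client's profile.
class CharityRoundUp
{
    private const decimal PotThreshold = 20.00m;
    private decimal pot;

    public decimal ProcessPurchase(decimal price)
    {
        decimal spareChange = Math.Ceiling(price) - price;   // £1.79 -> £0.21
        pot += spareChange;

        if (pot >= PotThreshold)
        {
            decimal donation = pot * 2;                       // bank matches the client's pot
            Console.WriteLine($"Donating £{donation:0.00} to the client's chosen charity");
            pot = 0m;
        }
        return spareChange;
    }
}

class Example
{
    static void Main()
    {
        var roundUp = new CharityRoundUp();
        roundUp.ProcessPurchase(1.79m);   // the coffee from the article: £0.21 into the pot
    }
}
```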


Understanding the convergence of IoT and data analytics

Simply collecting IoT data is not enough — “Organisations need to turn this data into value in both a batch (using traditional analytics) and real-time context. It is also not desirable, nor possible in some cases, to do all of your processing at the enterprise level (in the cloud or data centre, for example).” As is the nature of IoT devices, decisions will often need to be made in a localised fashion, including on the device itself, and these decisions will be largely driven by models derived from analytical processes and historical data. “The ability to make the edge ‘smarter’, offload compute workloads to the edge for more efficient processing, support localised or independent/disconnected processing, reduce decision latency, and reduce data transfer requirements are all benefits that may be applied to almost any vertical,” continues Petracek. “Analytics, and the operationalisation of analytical models and pipelines, presents a huge opportunity to organisations, especially given the level of real-time information and context that IoT can provide.”
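One way to picture "making the edge smarter" and reducing data transfer is a device that applies a small local model and only reports readings that deviate from what the model expects. The exponential-moving-average model and the threshold below are illustrative assumptions, not drawn from the article.

```csharp
using System;

// Hypothetical edge filter: keep a lightweight local model (an exponential
// moving average) on the device and only transmit readings that deviate from
// the model's expectation, cutting decision latency and upstream traffic.
class EdgeFilter
{
    private double expected;            // current local estimate
    private bool initialized;
    private readonly double alpha;      // smoothing factor for the moving average
    private readonly double threshold;  // deviation that counts as "worth sending"

    public EdgeFilter(double alpha = 0.2, double threshold = 5.0)
    {
        this.alpha = alpha;
        this.threshold = threshold;
    }

    // Returns true when the reading is surprising enough to send upstream.
    public bool ShouldTransmit(double reading)
    {
        if (!initialized)
        {
            expected = reading;
            initialized = true;
            return true;                // always report the first reading
        }

        bool anomalous = Math.Abs(reading - expected) > threshold;
        expected = alpha * reading + (1 - alpha) * expected;   // update the local model
        return anomalous;
    }
}

class Demo
{
    static void Main()
    {
        var filter = new EdgeFilter();
        foreach (var reading in new[] { 21.0, 21.3, 20.9, 29.8, 21.1 })
        {
            if (filter.ShouldTransmit(reading))
                Console.WriteLine($"Transmitting reading: {reading}");
        }
    }
}
```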


2020 is about digital optimization not digital transformation


Digital optimization isn't easy, because different groups are likely to have invested in solutions that don't communicate with one another and aren't straightforward to integrate. Further, every group may be going digital at a different pace: those on the front line dealing with customers every day are under more pressure than those running the organization's core operations. However, without digital optimization, organizations will be unable to eliminate the silos in their processes, even when teams embrace collaboration in the spirit of digital innovation. ... The critical thing to remember when optimizing digital investments is that the organization has one purpose, one mission, and one vision. Hence, the roadmap must be made up of simple milestones that have an impact, ideally at the enterprise level. At the end of the day, organizations must understand the importance of optimizing their investments in digital, and prioritize it over spending that merely broadens their digital portfolio.


Managing Trade-offs: Prediction, Adaptability and Resilience


One critical new way of working that CEOs must “bottle” is organizational learning through local experimentation and global scaling. Lockdown has not only liberated the CEO, it has also freed local leaders from top-down governance. Often asking for forgiveness, rather than permission, they’ve innovated, disrupted and bullied their way to solutions that surmount obstacles and serve customers. In doing so, local teams have found support from the center. Some global leaders helped scale top solutions across the firm. They reimagined marketing and sales budgets overnight, showing the organization what costs are critical and what are dispensable. They solved huge supply chain issues, teaching the organization how to strengthen its operations. In order to ensure that this burst of experimentation and learning doesn’t become a historical oddity, leading CEOs will systematically protect the fundamental new relationship between global and local. They will set a clear agenda for the core business (or, as we like to call it, “Engine 1”): Continue the same pace of experimentation and learning throughout the long dance.


EY: revolutionising supply chain management with blockchain

blockchain
While in traditional supply chains production is recorded digitally, when it comes to shipping, Brody explains, maintaining information continuity across systems and enterprise boundaries is a challenge; there are “oceans of digital data but only islands of useful information.” Systems such as electronic data interchange (EDI) and XML messaging are being used by these companies to try to maintain information continuity, but even these systems pose their own challenges, such as being out of sync and moving data only one stop down the supply chain. “The result: inventory that seems to be in two places at once,” added Brody. “These systems were created for an era of big, vertically integrated companies with large, but mostly static supply chains.” Although relevant 30 years ago, in today's modern supply chain this is not the case. ... “Until the advent of bitcoin and blockchain technology, the only way you could get a large number of entities to agree upon a shared, truthful set of data, such as who has what bank balance, was to appoint an impartial intermediary to process and account for all transactions,” highlighted Brody.


Microsoft is suddenly recommending Google products

Not merely extensions, but great extensions. I'm tempted to suspect a lawyer may have written that. Or at least someone in the Google marketing department. Naturally, I asked Microsoft why it had suddenly lurched from prickly to cuddly. Could it be that Google and Microsoft had a kiss-and-make-up Zoom call -- I mean, a Microsoft Teams call? Or a Google Meet encounter? Microsoft declined to comment. Perhaps, you might think, Microsoft has decided to play nice merely because that's its brand image these days. Or perhaps some Redmonder stopped to think that, indeed, Edge doesn't currently enjoy enough of its own extensions. My delvings into Redmond's innards suggest the latter may have driven the decision even more than the former. You really don't want to annoy your customers, do you? Especially when you can't currently offer them what they need. Of course, Edge is based on Google's Chromium platform. In my own experimentations, I've found it to be a more pleasant experience than Chrome. Just that little bit more responsive and generally brighter -- though I can't quite cope with Bing as my default search engine.


Expanding Data Governance into the Future


Recognition that good Data Governance has become a must has come none too soon. Donna Burbank, Managing Director at Global Data Strategy, notes that many companies are beginning or planning to begin a Data Governance program, including a broader range of industries than before. However, a Data Governance framework that succeeds in one business area does not necessarily translate across the entire enterprise, or even to another company. Freddie Mac tried several times to implement DG driven by IT, and nothing stuck until a next-generation, proactive, and collaborative Data Governance took hold. Unfortunately, many companies, like Freddie Mac, get stuck in old patterns, trying to evangelize rigid Data Governance practices, gumming up operations, and fostering mistrust. Firms in this situation, according to Derek Steer, CEO at Mode, end up governing the wrong amount of data (missing the highest-priority data assets) or enforcing Data Governance poorly (spending too much or too little time maintaining Data Governance logic). The first steps include understanding lessons from initial DG processes, how DG has changed, and how the next generation works better to support the business.


Amazon Faces A New Opponent: Some Of Its Own Tech Employees

U.S. employees of Amazon, its supermarket subsidiary Whole Foods, and supermarket delivery services were called to strike on May 1 to denounce employers accused of not sufficiently protecting them in the face of the pandemic. (Photo by VALERIE MACON/AFP via Getty Images)
Tech employees are speaking out for their blue-collar counterparts partly because the warehouse workers asked them to. Costa, who had been at the company for 15 years before she was fired, says warehouse workers reached out in March to the Amazon Employees for Climate Justice (AECJ), an internal group she co-founded two years ago, for help and support during the pandemic. “Tech workers are ‘a valued resource,’” Costa says. “They [Amazon management] see us as less expendable than warehouse workers because they know they can’t just throw more bodies at our seats if we leave. We have more leverage, and that’s why tech workers have much more privilege and have that much more responsibility to speak out.” AECJ organized a one-hour video call in mid-April during which warehouse workers could speak to Amazon tech employees who were interested to hear from them directly. The invite was sent out via Amazon’s internal e-mail system on Friday, April 10. “It got 1,550 accepts on a Friday afternoon, when New York, Europe and India were already off the clock,” Costa said.



Quote for the day:


"Leadership without character is unthinkable - or should be." -- Warren Bennis