Daily Tech Digest - September 27, 2020

Programming Fairness in Algorithms

Machine learning fairness is a young subfield of machine learning that has been growing in popularity over the last few years in response to the rapid integration of machine learning into social realms. Computer scientists, unlike doctors, are not necessarily trained to consider the ethical implications of their actions. It is only relatively recently (one could argue since the advent of social media) that the designs or inventions of computer scientists were able to take on an ethical dimension. This is demonstrated in the fact that most computer science journals do not require ethical statements or considerations for submitted manuscripts. Yet an image database full of millions of images of real people can, without a doubt, have ethical implications. By virtue of physical distance and the size of the dataset, computer scientists are so far removed from the data subjects that the implications for any one individual may be perceived as negligible and thus disregarded. In contrast, if a sociologist or psychologist performs a test on a small group of individuals, an entire ethical review board is set up to review and approve the experiment to ensure it does not transgress any ethical boundaries.


This Algorithm Doesn't Replace Doctors—It Makes Them Better

Operators of paint shops, warehouses, and call centers have reached the same conclusion. Rather than replace humans, they employ machines alongside people to make them more efficient. The reasons stem not just from sentimentality but from the fact that many everyday tasks are too complex for existing technology to handle alone. With that in mind, the dermatology researchers tested three ways doctors could get help from an image analysis algorithm that outperformed humans at diagnosing skin lesions. They trained the system with thousands of images of seven types of skin lesion labeled by dermatologists, including malignant melanomas and benign moles. One design for putting that algorithm’s power into a doctor’s hands showed a list of diagnoses ranked by probability when the doctor examined a new image of a skin lesion. Another displayed only a probability that the lesion was malignant, closer to the vision of a system that might replace a doctor. A third retrieved previously diagnosed images that the algorithm judged to be similar, to provide the doctor some reference points.


Redefining Leadership In The Age Of Artificial Intelligence

Intelligent behaviour has long been considered a uniquely human attribute. But as computer science and IT networks evolved, artificial intelligence and the people who stood behind it came into the spotlight. AI in today’s world is both developing and under control. Without a transformation here, AI will never fully resolve the problems and dilemmas of business with data and algorithms alone. Wise leaders do not only create and capture vital economic value; they also build a more sustainable and legitimate organisation. Leaders in AI sectors have eyes to see AI decisions and ears to hear employees’ perspectives. A futuristic AI leader plans to work not just for now but also for the years ahead. A company’s development in AI involves automating business processes using robotic technologies, gaining insight through data analysis and enhancement, cost-effective predictions based on algorithms, and engagement with employees through natural language processing chatbots, intelligent agents and machine learning. Without a far-sighted leader, bringing all this to reality will be all but impossible.


Blockchain: En Route to the Global Supply Chain

In the context of a large-scale shipping operation, for instance, there may be thousands of containers filled with millions of packages or assets. A system that can track every asset with full certainty eliminates any concerns about whether the items are where they are supposed to be, or whether anything is missing. As blockchain expands, so too will the data it records, which in turn increases trust. By ensuring via this secured digital ledger that an asset has moved from a warehouse to a lorry on a Thursday afternoon, more data can then be added. For example, it can show that the asset moved from a specific shelf in a warehouse on a specific street and was moved by a specific truck operated by a specific driver. Securing the location data with full trust provides assurance that things are happening correctly and means that financial transactions can be made with more confidence. Layering mapping capabilities and rich location data onto a blockchain record also enables fraud detection. Without blockchain, there is no certainty that the delivery updates provided are in fact accurate. Blockchain makes transactions transparent and decentralised, enabling the possibility to automatically verify their accuracy by matching the real location of an item with the location report from a logistics company.


A closer look at Microsoft Azure Arc

At Ignite, Microsoft provided its answer on how Azure Arc brings cloud control on premises. The cornerstone of Azure Arc is Azure Resource Manager, the nerve center that is used for creating, updating, and deleting resources in your Azure account. That encompasses allocating compute and storage to specific workloads and then monitoring performance, policy compliance, updates and patches, security status, and so on. You can also fire up and access Azure Resource Manager through several paths ranging from the Azure Portal to APIs or command line interface (CLI). It provides a single pane of glass for indicating when specific servers are out of compliance; specific VMs are insecure; or certificates or specific patches are out of date – and it can then show recommended remedial actions for IT and development teams to take. While it requires at least some connection to the Azure Public Cloud, it can run offline when the network drops. Microsoft has built a lot of flexibility as to the environments that Azure Arc governs. It can be used for controlling bare metal environments as well as virtual machines running on any private or public cloud, SQL Server, or Kubernetes (K8s) clusters.


Why haven’t we ‘solved’ cybersecurity?

Cybersecurity-related incentives are misaligned and often perverse. If you had a real chance to become a millionaire or even a billionaire by ignoring security and a much smaller chance if you slowly baked in security, which path would you choose? We also fail to account for, and sometimes flat out ignore, the unintended consequences and harmful effects of the innovative technology and ideas we create. Who would have thought that a 2003 social media app, built in a dorm room, would later help topple governments and make the creator one of the richest people in the world? Cybersecurity companies and individual experts face the difficult challenge of balancing personal gain versus the greater good. If you develop a new offensive tool or discover a new vulnerability, should you keep it secret or make a name for yourself through disclosure? Concerns over liability and competitive advantage inhibit the sharing of best practices and threat information that could benefit the larger business ecosystem. Data has become the coin of the realm in the modern age. Data collection is central to many business models, from mature multi-national companies to new start-ups. Have a data blind spot?


Top Technologies To Achieve Security And Privacy Of Sensitive Data In AI Models

Differential privacy is a technique for sharing knowledge or analytics about a dataset by describing the patterns of groups within the dataset while at the same time withholding sensitive information concerning individuals in the dataset. The concept behind differential privacy is that if the effect of making an arbitrary single change in the database is small enough, the query result cannot be utilised to infer much about any single person, and hence provides privacy. Another way to explain differential privacy is that it is a constraint on the algorithms applied to distribute aggregate information on a statistical database, which restricts the exposure of individual information of database entries. Fundamentally, differential privacy works by adding enough random noise to data so that there are mathematical guarantees of individuals’ protection from reidentification. This means the results of data analysis are essentially the same whether or not a particular individual is included in the data. Facebook has utilised the technique to protect sensitive data it made available to researchers analysing the effect of sharing misinformation on elections. Uber employs differential privacy to detect statistical trends in its user base without exposing personal information.
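
To make the noise-addition idea concrete, here is a minimal Python sketch of the Laplace mechanism for a simple counting query. The function name, parameter values and example numbers are illustrative assumptions, not taken from any particular library or from the products mentioned above.

    import numpy as np

    def private_count(true_count, epsilon=0.5, sensitivity=1.0):
        # Sensitivity is how much one individual can change the query result
        # (1 for a simple count); a smaller epsilon adds more noise and gives
        # stronger privacy at the cost of accuracy.
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_count + noise

    # Publish an aggregate statistic without revealing whether any single
    # person is present in the underlying data.
    print(private_count(true_count=1024))

Because the noisy answer barely changes whether or not any one person is included, the published result carries the mathematical guarantee of protection from reidentification described above.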


Getting Started with Mesh Shaders in Diligent Engine

Originally, hardware was only capable of performing a fixed set of operations on input vertices. An application was only able to set different transformation matrices (such as world, camera, projection, etc.) and instruct the hardware how to transform input vertices with these matrices. This was very limiting in what an application could do with vertices, so to generalize the stage, vertex shaders were introduced. Vertex shaders were a huge improvement over the fixed-function vertex transform stage because developers were now free to implement any vertex processing algorithm. There was, however, a big limitation: a vertex shader takes exactly one vertex as input and produces exactly one vertex as output. Implementing more complex algorithms that would require processing entire primitives, or generating them entirely on the GPU, was not possible. This is where geometry shaders came in, introduced as an optional stage after the vertex shader. A geometry shader takes a whole primitive as input and may output zero, one, or more primitives.


Need for data management frameworks opens channel opportunities

Today's huge influx of data is resulting in multiple inefficiencies, according to Mike Sprunger, senior manager of cloud and network security at Insight Enterprises, a global technology solution provider. He cited the example of an employee who generates a spreadsheet and shares it with half a dozen co-workers, who then send the spreadsheet to half a dozen others. The 1 MB file morphs into 36 MBs, and when that information is backed up, data volumes double again. As cloud and flash technologies lowered storage pricing dramatically, many companies simply added more storage capacity as data demands grew. While companies stored more, they purged less. Furthermore, industry and government rules and guidelines for maintaining data have been evolving. So it can be unclear how to meet regulatory requirements, Sprunger noted, and decide what data can go and what must be kept. Compounding the challenge, communication between IT and business units is often mediocre or nonexistent. So neither group understands the business requirements or the technical possibilities of deleting outdated data, he added.


RASP 101: Staying Safe With Runtime Application Self-Protection

Feiman says solutions like RASP and WAF have emerged from "desperation" to protect application data but are insufficient. The market needs a technology that is focused on detection rather than prevention. Indeed, in an effort to address the problems with RASP, he and his team at WhiteHat are in the process of beta testing an application security technology that performs app testing without instrumentation. As far as existing RASP technologies go, it's unlikely they'll stick around in their current form. Rather than an independent technology, Feiman believes RASP will ultimately get absorbed into application runtime platforms like the Amazon AWS and Microsoft Azure cloud platforms. This could happen through a combination of acquisitions and companies like AWS building their own lightweight RASP capabilities into their technologies. "The idea will stay, the market hardly will," says Feiman. On that, Sqreen's Aviat disagrees, saying RASP is "indeed a standalone technology." "I expect RASP to become a crucial element of any application security strategy, just like WAF or SCA is today – in fact, RASP is already referenced by NIST as critical to lowering your application security risk," he said.



Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer

Daily Tech Digest - September 26, 2020

Steering Wealth Management Industry Through Digital Transformation In The Post Pandemic World

Implement ready-to-use digital solutions and change internal processes, instead of starting from scratch to build solutions that cater to existing processes. Don’t shy away from exploring global solutions; you will most likely get a great product that may not be expensive. Insist on following the methodology of “Pay as you Use or Pay as you Grow” instead of incurring significant implementation charges and license fees. Explore working with startups that are hungry for business and will go out of their way to build great solutions. Build a robust database for sending relevant, targeted and personalized communications. Make a beginning and take baby steps. Focus on 90% of your requirements; a lot of time and energy is spent on addressing the 10% of requirements that can be handled manually or worked around. We are at the cusp of a brave, new world that demands self-sufficiency, and it is becoming rapidly clear that greater digital freedom will play a pivotal role in making the Industry more effective, scalable and enduring on this uncharted road ahead. Firms that deploy these tools fast will attract clients and survive. The Industry has always been one to shy away from digital transformation.


Layered security becomes critical as malware attacks rise

The scam script Trojan.Gnaeus made its debut at the top of WatchGuard’s top 10 malware list for Q2, making up nearly one in five malware detections. Gnaeus malware allows threat actors to hijack control of the victim’s browser with obfuscated code and forcefully redirect users away from their intended web destinations to domains under the attacker’s control. Another popup-style JavaScript attack, J.S. PopUnder, was one of the most widespread malware variants last quarter. In this case, an obfuscated script scans a victim’s system properties and blocks debugging attempts as an anti-detection tactic. To combat these threats, organizations should prevent users from loading a browser extension from an unknown source, keep browsers up to date with the latest patches, use reputable adblockers and maintain an updated anti-malware engine. XML-Trojan.Abracadabra is a new addition to the top 10 malware detections list, showing a rapid growth in popularity since the technique emerged in April. Abracadabra is a malware variant delivered as an encrypted Excel file with the password “VelvetSweatshop”, the default password for Excel documents.


Want diversity? Move beyond your closed network

In earnest, the difficulty of recruiting diverse candidates reflects the fact that the networks the banking industry typically relies upon to attract and recruit talent do not reach diverse pools of talented candidates. This network gap is insidious too, leading to a lack of diversity in other aspects of business, like vendor procurement and investment. Once, Mitt Romney spoke of “binders full of women” when running for president. While his wording was inartful, he seemed to recognize that he needed to make a deliberate effort to build his network of talented women in order to be able to appoint numbers of qualified women. So, what deliberate steps can banks take to close the network gap and find talented people of color? Here are a few things any bank can do to turn intention into impact, and close the network gap. Begin with reflection: Why are you not tied to diverse networks? Do you know where to find black and brown civil society? Learning why your company may not be a cultural fit for certain demographics is nothing new for banks. Gender is probably the most recent example. Understanding that women bring different and needed experience to leadership creates an impetus for more diversity.


Why No One Understands Enterprise Architecture & Why Technology Abstractions Always Fail

The first step is demystification. All of the abstract terms – even the word “architecture” – should be modified or replaced with words and phrases that everyone – especially non-technology executives – can understand. Enterprise planning or Enterprise Business-Technology Strategy might be better, or even just Business-Technology Strategy (BTS). Why? Because “Enterprise Architecture” is nothing more than an alignment exercise, alignment between what the business wants to do and how the technologists will enable it now and several years out. It’s continuous because business requirements constantly change. At the end of the day, EA is both a converter and a bridge: a converter of strategy and a bridge to technology. The middle ground is the Business-Technology Strategy. EA – or should I say “Business Technology Strategy” – isn’t strategy’s first cousin, it’s the offspring. EA only makes sense when it’s derived from a coherent business strategy. For technology companies – that is, companies that sell technology-based products and services – the role of EA is easier to define. Who doesn’t want to help technology (AKA “engineering”) – the ones who build the products and services – build the right applications with the right data on the right infrastructure?


Types of Apps that can be built with Angular Framework

Undoubtedly, Angular development has been almost everywhere since it was first released in 2009. In the past few years, Angular development services have seen a great boom. Angular is considered the best framework for developing web, single-page, and mobile applications. The Angular framework has impressive features, and developers and enterprise website owners like it a great deal. Many developers have even shifted their technology to Angular. Before looking at why to choose Angular for mobile app development and what sorts of applications can be developed using the Angular framework, let’s first dive into what exactly the Angular framework is. Angular is a JavaScript-based framework from Google. The Angular framework was developed by Google’s developers to create dynamic web applications. Angular is a full-fledged framework used for the frontend development of an application. Angular has a lot to give to your web and mobile application. Angular will not only create an impressive UI for your application but also provide high performance and a user-friendly experience. As a feature-rich framework, Angular provides a vast number of features for web application developers.


WebAssembly Could Be the Key for Cloud Native Extensibility

Google had been championing the idea of making WebAssembly a common runtime for Envoy, as a way to help its own Istio service mesh, of which Envoy is a major component. WASM is faster than JavaScript and, because it runs in a sandbox (a virtual machine), it is secure and portable. Perhaps best of all, because it is very difficult to write assembly-like WASM code, many parties created translators for other languages — allowing developers to use their favored languages such as C and C++, Python, Go, Rust, Java, and PHP. Google and the Envoy community also rallied around building a WebAssembly System Interface (WASI), which serves as the translation layer between WASM and the Envoy filter chain. Still, the experience of building Envoy modules wasn’t packaged for developers, Levine thought at the time. There was still a lot of plumbing to add, settings for Istio and the like. “Google is really good at making infrastructure tooling. But I’d argue they’re not the best at making their user experience,” Levine said. And much like Docker customized the Linux LXC — pioneered in large part by Google — to open container technology to more developers, so too could the same be done with WASM/WASI for Envoy, Levine argues.


Amazon's robot drone flying inside our homes seems like a bad idea

Amazon says you can specify a flight path, map your house, locate points of interest, and generally instruct the eye of Skynet where to fly. Cyberdyne, uh, Amazon also says the device has built-in obstacle avoidance. Let's think about that for a minute. Will the device be able to avoid hanging lamps or plants? What about objects high up on shelves? Will it be able to stand back when a sleep-addled adult gets up in the middle of the night to do middle of the night business? Why would it be out and about at that time anyway? And what about the downdraft? How close can it fly to bookshelves and knickknacks without air-blasting them to the ground? How much will it freak out your pets? My spouse? Your spouse? Just how creepy would it be for it to hover over the kids' beds because you're too lazy to get off the couch to see if they're asleep? Every rational fiber of my being tells me this is wrong on every level. ... The Always Home Cam is primarily meant as a remote security cam. If you're out and you get an alert from a Ring doorbell or other security device (I wonder if this will work with other trigger devices), you can virtually fly around your house and see what's happening.


Project InnerEye open-source deep learning toolkit: Democratizing medical imaging AI

Project InnerEye has been working closely with the University of Cambridge and Cambridge University Hospitals NHS Foundation Trust to make progress on this problem through a deep research collaboration. Dr. Raj Jena, Group Leader in machine learning and radiomics in radiotherapy at the University of Cambridge, explains, “The strongest testament to the success of the technology comes in the level of engagement with InnerEye from my busy clinical colleagues. For over 15 years, the promise of automated segmentation of images for radiotherapy planning has remained unfulfilled. With the InnerEye ML model we have trained on our data, we now observe consistent segmentation performance to a standard that matches our stringent clinical requirements for accuracy.” The goal of Project InnerEye is to democratize AI for medical image analysis and empower developers at research institutes, hospitals, life science organizations, and healthcare providers to build their own medical imaging AI models using Microsoft Azure. So to make our research as accessible as possible, we are releasing the InnerEye Deep Learning Toolkit as open-source software.


How to Strengthen the Pillars of Data Analytics for Better Results

Data analysts and business analysts rely heavily on a fit-for-purpose data environment that enables them to do their jobs well. These environments allow them to answer questions from management and different parts of the business. These same professionals have expertise in working and communicating with data but often do not have deep technical knowledge of databases and the underlying infrastructure. For instance, they may be familiar with SQL and bringing together data sources in a simple data model that allows them to dig deeper in their analysis, but when the database performance degrades during more complex analysis, the depth of infrastructure reliance becomes clear. The dreaded spinner wheel or delays in analysis make it difficult to meet business needs and demands. This can impact critical decision making and reveal underlying weaknesses that get in the way of other data applications, such as artificial intelligence (AI). These indicators of poor performance also show the need for scaling the data environment to accommodate the growth of data and data sources.


The Role of Data Management in Advancing Biology

I think FAIR has really codified a way of thinking about data that's incredibly aspirational and resonates with people. One of the biggest challenges we're facing in this field right now is findability of the data—search is a hard problem. Then let's say you manage to find some data that you're very interested in; a lot of the time it's not clear whether or not those data are accessible to you or to the public. There's been a large push over the last decade to make everything reproducible, to make the data accessible, to have a data management plan. A lot of that effort isn't necessarily resourced, so just because you have a data management plan doesn't mean that you have a clear place where you can actually put data. We're lucky that the Sequence Read Archives exist and that the NIH continues to fund it, because that's become one of these major focal points for collecting the data. But even more than that, when you're in the middle of collecting data for a very specific question, you're not necessarily thinking about what other information to collect to make these data useful to other groups or other labs. That's not a part of the thought experiment that you're going through in that moment.



Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks

Daily Tech Digest - September 25, 2020

Polish police shut down hacker super-group 

According to reports in Polish media, the hackers have been under investigation since May 2019, when they sent a first bomb threat to a school in the town of Łęczyca. Investigators said that an individual named Lukasz K. found the hackers on internet forums and hired them to send a bomb threat to the local school, but make the email look like it came from a rival business partner. The man whose identity was spoofed in the email was arrested and spent two days in prison before police figured out what happened. ... Investigators said that when the hackers realized what was happening, they then hacked a Polish mobile operator and generated invoices for thousands of zlotys (the Polish currency) in the name of both the detective and the framed businessman. ... Investigators said that from infected users, the hackers would steal personal details, which they'd use to steal money from banks with weak security. In case some banks had implemented multiple authentication mechanisms, the group would then use the information they stole from infected victims to order fake IDs from the dark web, and then use the IDs to trick mobile operators into transferring the victim's account to a new SIM card.


All the Way from Information Theory to Log Loss in Machine Learning

In 1948, Claude Shannon introduced information theory in his 55-page-long paper called “A Mathematical Theory of Communication”. Information theory is where we start the discussion that will lead us to log loss, which is a widely-used cost function in machine learning and deep learning models. The goal of information theory is to efficiently deliver messages from a sender to a receiver. In the digital age, information is represented by bits, 0 and 1. According to Shannon, sending one bit of information to the recipient means reducing the recipient’s uncertainty by a factor of two. Thus, information is proportional to the uncertainty reduction. Consider the case of flipping a fair coin. The probability of heads being the side facing up, P(Heads), is 0.5. After you (the recipient) are told that heads is up, P(Heads) becomes 1. Thus, 1 bit of information is sent to you and the uncertainty is reduced by a factor of two. The amount of information we get is the reduction in uncertainty, which is the inverse of the probability of the event. The number of bits of information can easily be calculated by taking the log (base 2) of the reduction in uncertainty.
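
As a quick illustration of the coin-flip arithmetic and of where it leads, here is a small Python sketch. The hand-written log_loss helper is only an illustrative stand-in for the equivalent functions provided by libraries such as scikit-learn.

    import numpy as np

    # Bits of information = log2(reduction in uncertainty) = -log2(p)
    p_heads = 0.5
    print(-np.log2(p_heads))   # 1.0 bit: learning "heads" halves the uncertainty

    # Log loss (binary cross-entropy) applies the same -log(p) idea to a
    # classifier's predicted probabilities, penalizing confident wrong answers.
    def log_loss(y_true, y_pred, eps=1e-15):
        y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
        return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

    print(log_loss(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.6])))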


From adoption to understanding: AI in cyber security beyond Covid-19

Businesses have begun to recognise the promise of AI / ML, and as cyber attacks continue to increase globally, more are adopting these advanced tools to protect themselves. In a survey we conducted among IT decision-makers across the United States and Japan back in 2017, we discovered 74% of businesses in both regions were already using some form of AI or ML to protect their organisations from cyber threats. In our most recent report published this year, we took the pulse of 800 IT professionals with cyber security decision-making power across the US, UK, Japan, Australia and New Zealand. In the process, we discovered that 96% of respondents now use AI/ML tools in their cyber security programs – a significant increase from three years ago! But we weren’t expecting to uncover a pervasive lack of awareness around the benefits of these technologies. Despite the increase in adoption rates for these technologies, our most recent survey found that more than half of IT decision-makers admitted they do not fully understand the benefits of these tools. Even more jarring was that 74% of IT decision-makers worldwide don’t care whether they’re using AI or ML, as long as the tools they use are effective in preventing attacks.


COVID-19 widens the digital innovation gap

"Our findings point to an overconfidence on the part of business leaders that their CMS has the necessary functions to support omnichannel and content orchestration, while builders say they feel disempowered and frustrated." One telling stat the study found is that only 34% of content creators said they can control all the content across digital channels without developer assistance, while 74% of digital leaders think their CMS enables this, Contentful said. Additionally, two-thirds of business leaders believe they are behind competitors in delivering new digital experiences, the company said. "They struggle with maintaining content and brand consistency across channels, hiring qualified talent, juggling multiple systems, and managing a mountain of existing content while simultaneously building more, more, more." Eighty-three percent of respondents believe customers expect an omnichannel digital experience and 88% think brand consistency across these experiences is important, the study said. "This aligns with industry research that shows consistent, connected digital experiences are important throughout the customer lifecycle."


Set up continuous integration for .NET Core with OpenShift Pipelines

Have you ever wanted to set up continuous integration (CI) for .NET Core in a cloud-native way, but you didn’t know where to start? This article provides an overview, examples, and suggestions for developers who want to get started setting up a functioning cloud-native CI system for .NET Core. We will use the new Red Hat OpenShift Pipelines feature to implement .NET Core CI. OpenShift Pipelines are based on the open source Tekton project. OpenShift Pipelines provide a cloud-native way to define a pipeline to build, test, deploy, and roll out your applications in a continuous integration workflow. ... You will need cluster-administrator access to an OpenShift instance to be able to access the example application and follow all of the steps described in this article. If you don’t have access to an OpenShift instance, or if you don’t have cluster-admin privileges, you can run an OpenShift instance locally on your machine using Red Hat CodeReady Containers. Running OpenShift locally should be as easy as crc setup followed by crc start. Also, be sure to install the oc tool; we will use it throughout the examples.


Kubernetes Operators in Depth

There are lots of reasons to build an operator from scratch. Typically it's either a development team creating a first-party operator for their product, or a devops team looking to automate the management of third-party software. Either way, the development process starts with identifying what cases the operator should manage. At their most basic, operators handle deployment. Creating a database in response to an API resource could be as simple as kubectl apply. But this is little better than the built-in Kubernetes resources such as StatefulSets or Deployments. Where operators begin to provide value is with more complex operations. What if you wanted to scale your database? With a StatefulSet you could perform kubectl scale statefulset my-db --replicas 3, and you would get three instances. But what if those instances require different configuration? Do you need to specify one instance to be the primary, and the others replicas? What if there are setup steps needed before adding a new replica? In this case, an operator can configure these settings with an understanding of the specific application.
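
To give a sense of the plumbing an operator automates beyond plain kubectl, here is a hypothetical Python sketch of a single reconcile-style step using the official Kubernetes client. The resource name and namespace are placeholders, and a real operator would wrap a call like this in the application-specific logic (primary election, pre-join setup) discussed above.

    from kubernetes import client, config

    config.load_kube_config()   # or load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()

    # Programmatic equivalent of: kubectl scale statefulset my-db --replicas 3
    apps.patch_namespaced_stateful_set_scale(
        name="my-db",
        namespace="default",
        body={"spec": {"replicas": 3}},
    )
    # An operator would run its application-aware steps around this call,
    # e.g. configuring the new replica before it joins and designating a primary.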



How to Become a Cyber Security Engineer?

Once you're done with all these required skills, it's time to move to practical implementation and gain some hands-on experience in this particular field. You can opt for several internships or training programs to get opportunities to work on live projects in a real-time environment. Furthermore, you can apply for entry-level jobs in the Cyber Security domain, such as Cyber Security Analyst, Network Analyst, etc., to gain the utmost exposure. Meanwhile, this professional experience will not only allow you to understand the core functioning of the Cyber Security field, such as the design & implementation of secure network systems, monitoring and troubleshooting, risk management, etc., but is also crucial for building a successful career as a Cyber Security Engineer, as almost every company requires professional experience of around 2-3 years when hiring Cyber Security Engineers. ... Here comes one of the most prominent parts of this journey – Certifications!! Now, a question often arises: if a person has an appropriate skill set along with the required experience, why would they need to go for such certifications?


Microsoft announces cloud innovation to simplify security, compliance, and identity

Our compliance cloud solutions help customers more easily navigate today’s biggest risks, from managing data or finding insider threats to dealing with legal issues or even addressing standards and regulations. We’ve listened to customers and invested heavily in a set of solutions to help them modernize and keep pace with the evolving and complex compliance and risk management challenges they face. One of our key investment areas is the set of Data Loss Prevention products in Microsoft 365. We recently announced the public preview of Microsoft Endpoint Data Loss Prevention (DLP), which means customers can now identify and protect data on devices. Today, we are announcing the public preview of integration between Microsoft Cloud App Security and Microsoft Information Protection, which extends Microsoft’s data loss prevention (DLP) policy enforcement framework to third-party cloud apps—such as Dropbox, Box, Google Drive, Webex, and more—for a consistent and seamless compliance experience. Customers struggle to keep up with the constantly changing regulations around data protection.



Blockchain / Distributed Ledger Technology (DLT)

Blockchain technologies including DLTs are a wonderful example of how an ingenious combination of several (known) technologies was able (in 2009) to create a wholly new approach to a very old (database) problem: namely, how to reliably replicate state in an unreliable or even adversarial environment. The generalization of the notions of (i) crypto currencies (such as Bitcoin) to wholly generic crypto assets and (ii) of simple crypto token-moving transactions into smart contracts executing between untrusting parties goes beyond naïve database paradigms such as stored procedures. Today, many different DLTs exist, each optimizing different sets of nonfunctional requirements. Furthermore, the so-called “blockchain trilemma” of simultaneously providing scalability, security, and decentralization has not yet been fully solved (Bitcoin provides ca. 5 transactions per second, Ethereum ca. 10 tps). Blockchain and DLTs are still a considerably overhyped technology looking for business problems they solve better than any existing alternative (e.g., a central SaaS). Despite many claims to the contrary, almost no real productive use cases exist except crypto exchanges.


Blockchain’s untapped potential in revolutionising procurement

Ardent supporters of this technology argue that it is the most significant innovation since the dawn of the internet. Today, blockchain technology has found adoption in nearly every industry, including retail, healthcare and manufacturing. Blockchain technology started in 2008 as a platform on which cryptocurrencies, such as bitcoin, function. Since then blockchain technology has undergone continuous improvement, finding numerous use-cases and applications. Don & Alex Tapscott, authors of Blockchain Revolution (2016), describe blockchain as “an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value”. Utilizing sophisticated algorithms, it maintains an immutable log of information and is able to securely transfer digital assets between network participants. The distributed ledger is accessible to all nodes on the network and everyone is able to access the same information. New information can be appended but the original data cannot be altered.





Quote for the day:

"The role of leadership is to transform the complex situation into small pieces and prioritize them." -- Carlos Ghosn

Daily Tech Digest - September 24, 2020

What’s the deal with cross-border data transfers after Brexit?

It remains unclear whether the UK will receive an adequacy decision after the end of the Brexit transition period. The main legal argument in favour of the UK receiving an adequacy decision is that no other Third Country has laws that are as similar to the GDPR as the Data Protection Act 2018. Since the EU has already granted adequacy decisions to several jurisdictions that have less similar laws, the argument goes that the UK is the most deserving candidate for an adequacy decision. The main legal argument against the UK receiving an adequacy decision is that the UK conducts extensive surveillance for the purposes of national security, and that this is the same activity that resulted in the Privacy Shield being overturned by the CJEU in Schrems II. On 16 September 2020, the European Parliament, released comments on the Schrems II decision, in which it formally acknowledged the argument that the UK might not receive an adequacy decision due to its national security surveillance activities. This also creates doubts as to whether existing adequacy decisions will be impacted in jurisdictions that have laws that are much less similar to the GDPR, and that have significant national security operations.


CQRS Is an Anti-Pattern for DDD

CQRS conflicts with one of the main principles for writing software – low coupling. “If changing one module in a program requires changing another module, then coupling exists”. Almost every pattern in software addresses this problem directly or indirectly. How do you divide your system into components in such a way that you can change one component with minimum impact on the other components? Or what is the right responsibility in the Single Responsibility Principle? It is really hard for me to accept that you can evolve the read and write parts of the system separately. Reading and writing are not the right responsibilities for building domain models, because business people do not think in terms of reading and writing. The real value lies in process flows. Only the most minor changes in a process flow would affect only the read or only the write part of the domain model. Maybe you are thinking of my example with the marketing application? It does sound a bit like a CRUD application, right? Not the best candidate for CQRS. Well, there were indeed more complex requirements in my original project. For example, when you assign a salesperson to a customer, the system must decide whether he/she is the primary salesperson or a supporting salesperson.


Working with Local Storage in a Blazor Progressive Web App

Fortunately, accessing local storage is easy once you've added Chris Sainty's Blazored.LocalStorage NuGet Package to your application (the project and its documentation can be found on GitHub). Before anything else, to use Sainty's package, you need to add it to your project's Services collection. Normally, I'd do that in my project's Startup class but the Visual Studio template for a PWA doesn't include a Startup class. So, in a PWA, you'll need to add Sainty's package to the Services collection in the Program.cs file. The Program.cs file in the PWA template already includes code to add an HttpClient to the Services collection. You can add Sainty's package by tacking on a call to his AddBlazoredLocalStorage extension method ... It's easy to check to see what's in local storage: Press F12 to bring up the Developer's tools panel in either the browser or PWA version of your app, click on the Application tab (which may be hidden under the tools overflow menu icon), and select Storage from the left-hand list. While the code is straightforward, I found debugging the resulting application ... problematic.


Credential stuffing is just the tip of the iceberg

Credential stuffing attacks are a key concern for good reason. High profile breaches—such as those of Equifax and LinkedIn, to name two of many—have resulted in billions of compromised credentials floating around on the dark web, feeding an underground industry of malicious activity. For several years now, about 80% of breaches that have resulted from hacking have involved stolen and/or weak passwords, according to Verizon’s annual Data Breach Investigations Report. Additionally, research by Akamai determined that three-quarters of credential abuse attacks against the financial services industry in 2019 were aimed at APIs. Many of those attacks are conducted on a large scale to overwhelm organizations with millions of automated login attempts. The majority of threats to APIs move beyond credential stuffing, which is only one of many threats to APIs as defined in the 2019 OWASP API Security Top 10. In many instances they are not automated, are much more subtle and come from authenticated users. APIs, which are essential to an increasing number of applications, are specialized entities performing particular functions for specific organizations. Someone exploiting a vulnerability in an API used by a bank, retailer or other institution could, with a couple of subtle calls, dump the database, drain an account, cause an outage or do all kinds of other damage to impact revenue and brand reputation.


CISA: LokiBot Stealer Storms Into a Resurgence

“LokiBot has stolen credentials from multiple applications and data sources, including Windows operating system credentials, email clients, File Transfer Protocol and Secure File Transfer Protocol clients,” according to the alert, issued Tuesday. “LokiBot has [also] demonstrated the ability to steal credentials from…Safari and Chromium and Mozilla Firefox-based web browsers.” To boot, LokiBot can also act as a backdoor into infected systems to pave the way for additional payloads. Like its Viking namesake, LokiBot is a bit of a trickster, and disguises itself in diverse attachment types, sometimes using steganography for maximum obfuscation. For instance, the malware has been disguised as a .ZIP attachment hidden inside a .PNG file that can slip past some email security gateways, or hidden as an ISO disk image file attachment. It also uses a number of application guises. “Since LokiBot was first reported in 2015, cyber actors have used it across a range of targeted applications,” CISA noted. For instance, in February, it was seen impersonating a launcher for the popular Fortnite video game. Other tactics include the use of zipped files along with malicious macros in Microsoft Word and Excel, and leveraging the exploit CVE-2017-11882.


Does Cybersecurity Have a Public Image Problem?

“In effect, the portrayal in media assigns an attribute of quick decisive thinking to the process – an attribute that potential cybersecurity candidates might not view themselves as possessing,” he said. “The reality is that most cybersecurity incidents aren’t as adversarial as portrayed on TV, and that two of the most important skills to become a professional in a cybersecurity discipline are strong problem solving abilities and attention to detail.” Chris Hauk, consumer privacy champion at Pixel Privacy, argued that “most people think cybersecurity involves maneuvering a 3D maze filled with grinning skeletons that represent malware that must be zapped by the BFG virus zapper” rather than applying patches to keep operating systems and applications up-to-date and ensuring a firewall is blocking what it is supposed to be guarding against. “It is all character based or a bit of point and click, and quite boring.” He claimed that a lot of the skills for cybersecurity mostly consist of common sense, and this means guarding yourself against everyday threats on the internet by running anti-virus and anti-malware protection, and avoiding clicking on links and attachments in email and text messages.”


Microservices: 5 Questions to Ask before Making that Decision

When it comes to Microservices, the success stories and the concepts are truly mesmerizing. Having a collection of services, each doing one thing in the business domain, builds a perfect image of a lean architecture. However, we shouldn’t forget that these services need to work together to deliver business value to their end-users. ... Knowing the business domain inside out and having experience with domain-driven design are crucial to identifying the bounded context of each service. Since we allocate a team to each Microservice and allow them to work with minimal interference, getting the bounded context wrong would increase the communication overhead and inter-team dependencies, impacting the overall development speed. So for a project starting from scratch, selecting Microservices is a risky move. ... Microservices aren’t a silver bullet or a superb architecture style that suits everyone. Since we need to deal with distributed systems, they could be overkill for many. Therefore, it’s essential to assess whether the issues you are experiencing with the Monolith are solvable by Microservices.


To Deliver Better Customer Experience Brands Need To Develop An Empathetic Musculature

To become more empathetic brands need to start thinking holistically about it. In fact, I believe, that they need to start thinking about developing an empathetic musculature for their organization, a concept that I started musing about in Punk CX. If they don't then, according to Rana el Kaliouby, CEO of Affectiva, the danger is that "the need to build empathy will get reduced down to a training course." So, what's it going to take to build an empathetic musculature at an organizational level? Well, if you look up 'musculature' in the dictionary, it is defined as 'the system or arrangement of muscles in a body or a body part.' So, to develop muscles, you have to train. But, you have to train with a purpose whether that is to stay fit, lose weight, rehabilitate after an injury or to compete.  This will take time, discipline and commitment as it is both a habit and capability that we will need to develop, nurture and maintain if we are to see the benefits. That, in turn, will require strategy, systems, processes, design, technology, leadership and the right sort of people and training to help us get there. Without a doubt, it will be hard, and we won't necessarily get it right first time.


The perseverance of resilient leadership: Sustaining impact on the road to Thrive

As leaders, we need to empathize with and acknowledge the myriad challenges our people are currently coping with—many of which have no end in sight. Psychologists describe “ambiguous loss” as losses that are inexplicable, outside one’s control, and have no definitive endpoint.3 Typically experienced when loved ones are missing or suffering from progressive chronic illness, the uncertainties our colleagues are enduring today surely also constitute ambiguous loss:4 The loss of our familiar way of being in the world is difficult to understand, beyond our control, and uncertain as to when we can return to some semblance of normal. As we discuss in our Bridge across uncertainty guide for leaders, there are three types of stress: good stress, tolerable stress, and toxic stress, the last of which is critical to relieve before people become overwhelmed.5 With both ambiguous loss and toxic stress, the better definition of an endpoint and a reduction in uncertainty are important ways we can support our teams. For example, Deloitte has hosted Zoom-based workshops where a cross-section of our people helped to inform return-to-the workplace programs—giving them a greater sense of control.


Q&A on the Book- Problem? What Problem? with Ben Linders

If an organization is working in an agile way, their approach to solving problems should also be agile-based. It has to fit in and be congruent with the company's and people's agile mindset to be effective. What does problem-solving look like when we are using an agile mindset and agile thinking? Here's my view on this. Many problems relate to the way people work together. Where every person does the best they can, problems often arise when things come together. Problem-solving practices should help us to understand how individuals interact and to solve collaboration issues. There are often too many problems to solve. We need to focus our effort on solving impediments that have the biggest impact on outcomes. Solve the ones that affect our ability to deliver something that is working, right now.  Collaboration is key, not only within teams but also between teams and when working with stakeholders. Problem-solving practices should enable us to visualize the system and collaboratively look for solutions. They should engage people from the start and enable them to self-organize and come up with solutions that work for them. While we're working on a problem, things will change. We'll learn new things along the way. 



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - September 23, 2020

If we put computers in our brains, strange things might happen to our minds

The difference between having a tool in your hand and having a brain-computer interface -- essentially just another tool, albeit an advanced one -- is that the BCI goes directly to the neurons that are helping you interact with the world, says Justin Sanchez, a tech fellow at the Battelle Memorial Institute. "So the potential for those neurons to be directly adapted for the brain computer interface is that much higher [than with other tools]… there is adaptation or plasticity of your neurons when you use a brain interface and that plasticity can change in a wide variety of ways depending upon who you are," he says. Research published last year found that even the use of a non-invasive BCI (where brain signals are read by sensors worn on, rather than in, the head) for a short time can induce brain plasticity. The study, which asked people to imagine particular movements, found changes after just one hour of use.  The brain's ability to rewire itself in this way can come in particularly handy in people who've had damage to their nervous systems -- for example, in people who've had strokes or spinal cord injuries.  That plasticity is particularly pertinent for BCIs, as researchers are hoping to use the systems to help people with brain and spinal cord injuries to overcome paralysis of their limbs or a lost sense of touch in parts of their body.


Zerologon explained: Why you should patch this critical Windows Server flaw now

Zerologon, tracked as CVE-2020-1472, is an authentication bypass vulnerability in the Netlogon Remote Protocol (MS-NRPC), a remote procedure call (RPC) interface that Windows uses to authenticate users and computers on domain-based networks. It was designed for specific tasks such as maintaining relationships between members of domains and the domain controller (DC), or between multiple domain controllers across one or multiple domains and replicating the domain controller database. One of Netlogon's features is that it allows computers to authenticate to the domain controller and update their password in the Active Directory, and it's this particular feature that makes the Zerologon flaw dangerous. In particular, the vulnerability allows an attacker to impersonate any computer to the domain controller and change their password, including the password of the domain controller itself. This results in the attacker gaining administrative access and taking full control of the domain controller and therefore the network. Zerologon is a privilege escalation vulnerability and is rated as critical by Microsoft even though the company said in the original advisory that exploitation was less likely. 


14 open source tools to make the most of machine learning

Apache Mahout provides a way to build environments for hosting machine learning applications that can be scaled quickly and efficiently to meet demand. Mahout works mainly with another well-known Apache project, Spark, and was originally devised to work with Hadoop for the sake of running distributed applications, but has been extended to work with other distributed back ends like Flink and H2O. ... Apple’s Core ML framework lets you integrate machine learning models into apps, but uses its own distinct learning model format. The good news is you don’t have to pretrain models in the Core ML format to use them; you can convert models from just about every commonly used machine learning framework into Core ML with Core ML Tools. Core ML Tools runs as a Python package, so it integrates with the wealth of Python machine learning libraries and tools. Models from TensorFlow, PyTorch, Keras, Caffe, ONNX, Scikit-learn, LibSVM, and XGBoost can all be converted. Neural network models can also be optimized for size by using post-training quantization (e.g., to a small bit depth that’s still accurate).
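
As an illustration of that conversion workflow, here is a hedged Python sketch using the coremltools unified converter. The source framework, model path and output filename are placeholder assumptions rather than details from the article.

    import coremltools as ct
    import tensorflow as tf

    # Load a trained Keras model (placeholder path) and convert it to Core ML.
    keras_model = tf.keras.models.load_model("my_model.h5")
    mlmodel = ct.convert(keras_model)

    # Save in Core ML's model format for use in an iOS or macOS app.
    mlmodel.save("MyModel.mlmodel")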


Easing the pressures of new technologies on the Internet

One constant we have witnessed over the history of the Internet is that when underlying technologies improve, the new experiences they enable quickly follow, taking full advantage of the new technology and pushing it to its limits. As more and more devices are able to connect to the Internet at ever higher speeds, including through 5G connectivity, the demand for online content will grow dramatically. Much of this traffic will be video-heavy and delivered in high definition. For example, Analysys Mason predicts that 5G will be a significant enabler of cloud gaming due to the lower latencies and higher speeds it offers. Video delivered at faster frame rates and the need for 360-degree content for the growing use of AR and VR is likely to result in around four times as much traffic as typical video. Another example is streaming of live sports events. The 2019 VIVO Indian Premier League cricket tournament set records for reported online viewership, exceeding the total 2018 viewership within the first three weeks of the 2019 tournament. In fact, the final saw 18.6 million concurrent viewers, an increase of 80% over the previous year – and with 91% watching via mobile.


How Automation is changing the landscape of Enterprises?

Convenience is a great category for this. However, in larger retail environments or when the packaging is less structured, other experiences have limited friction enabled by much less costly technology. For example, in Europe, it is common to see retailers that provide mobile self-scanning solutions or banks of modular self-checkout stands, which allow customers to eliminate the wait time they typically encounter at a traditional checkout. Technology improvements in computer vision have also helped start-ups develop shopping carts that can automatically identify products as they are placed within the carts, creating yet another option. One truth that will remain constant in retail is that customer convenience is a core value proposition, so limiting friction in the buying experience will always have a place in the market. ... The next evolution is to leverage artificial intelligence technologies like machine learning, computer vision, natural language processing, prescriptive analytics, and others to further eliminate the cognitive load on process execution. In the short term, roles focused on repetitive tasks, especially in what have typically been termed back-office functions that do not directly interact with shoppers, patients, and customers, will be the most impacted by RPA.


What is Intelligent Automation

Just as machines replaced human labour in manufacturing, Intelligent Automation solutions have started replacing manual work in every industry, freeing people's time for more creative and innovative tasks. Areas including Marketing & Sales, Human Resources, Customer Support, Finance, IT Support, Business Process Management and Operations Excellence are using Intelligent Automation to drive more value. In recent years these emerging technologies have gained substantial momentum, which in turn has increased the number of technology firms and venture investors shifting their attention towards intelligent automation solutions. Major automakers such as Audi, BMW, Mercedes-Benz, Volvo and Nissan are planning to introduce autonomous vehicles that use IA. IBM's Watson processes huge amounts of textual information in order to respond quickly to complex requests, such as those for medical treatment plans. IA is also used in commercial processes, such as marketing systems that surface offers to customers based on their preferences, credit card processing systems that help detect fraudulent activity, and so on.


Microsoft announces Power BI Teams integration, NLP and per-user Premium subscription

While Microsoft is playing catch-up here with other BI products that already offered narrative summarizations, it has worked hard to integrate its own implementation fully into the Power BI paradigm. The feature is surfaced through a drag-and-drop visual that is contextually updated when the underlying data changes through a filter, a slicer or the cross-filtering that takes place when a data element in another visual is selected. This makes the learning curve negligible for existing Power BI users. Combining smart narratives with "Q&A" natural language query capabilities now makes Power BI a strong contender in the augmented analytics arena. Another major area of enhancement to Power BI's usability comes in the form of a dedicated Power BI add-in for Microsoft's Teams collaboration platform, released as a preview. The Teams integration includes the ability to browse reports, dashboards and workspaces and directly embed links to them in Teams channel chats. It's not just about linking, though: Teams users can also browse Power BI datasets, either through an alphabetical listing or by reviewing a palette of recommended ones. In both cases, datasets previously marked as Certified or Promoted will be identified as such, and Teams users will be able to view their lineage, generate template-based reports on them, or simply analyze their data in Excel.


Adopting interaction analytics to improve contact centre performance

Interaction analytics allows organisations to automatically analyse 100% of the calls or text-based conversations that come into the contact centre. By adopting this technology with the right partner, organisations move away from relying on an inconsistent and subjective sample towards a holistic, consistent and objective view. "Analytics technologies allow organisations to take away the manual effort of monitoring contact centre performance and let technology guide everything, from the calls of interest, to the issues of interest, to the opportunities, to the challenges and the complaints. Interaction analytics provides a holistic view of what's really going on with contact centre interactions," continued Sherlock. The return on investment also comes in multiple forms, both in cost savings (there's no need for people to manage or monitor every call, as interaction analytics automates the process) and in identifying new revenue opportunities. "Analytics can help produce sales opportunities and allow organisations to collect more revenue by upselling to the customer base — a holistic view of interactions will allow those in sales to see where customers consistently raise issues about a particular product or service and then go back to source to modify that product or service," explained Sherlock.


What Does an Enterprise Architect Do Exactly?

Enterprise architects are responsible for planning how to use and manage all the IT functions of an organization. They must find a way to make them as affordable and efficient as possible. It's up to the enterprise architect to develop the plan, and they have a great deal of freedom in deciding what is best. They must balance this freedom against the needs of the organization they work for and its customers. The plan must make the most effective use of the enterprise architecture possible. It must address any issues the organization currently faces, and if there's a better way to use available resources, it should be included. Each plan an enterprise architect comes up with must align with the goals of the business they work for. Perhaps the business wants to decrease the time it takes to send and receive information; switching to a faster server may be an effective strategy. Enterprise architects must also be able to communicate their plans to everyone else, because anyone who doesn't understand all the steps won't be able to implement them in their work. Creating a strategy for how to manage an organization's IT is only the first step. After that, it's the enterprise architect's responsibility to implement it.


Enterprise architecture strategy experts offer pandemic tips

Right now, the information needs to be sharper, and it needs to be opinionated. Don't reinvent the wheel. Use the existing artifacts you have. Insert yourself as the epicenter of actionable information and sharpen the insights. You really want to drive the needle and come to the table opinionated. Don't overwhelm your stakeholders with options. Your business model canvases and capability maps are great in EA but far too detailed for the distracted executive of today. So, we're pivoting into executive onboarding dossiers. When new executives come on board, we give them almost a CliffsNotes version, and it saves them hours. Many other examples of your application portfolios can be turned into run books, succession plans and flex workforce plans. The key takeaway is we want to keep EA relevant. It's about adapting to the times, sharpening your narrative with the business and not being afraid to step on some toes. ... Lots of organizations are realizing that business capability models are the most powerful areas they can attack as they struggle with COVID. They're identifying and focusing on the most important capabilities to help them survive through the pandemic and then throwing in a couple of capabilities that differentiate the organization when we come to the other side of COVID.



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - September 22, 2020

How industrial AI will power the refining industry in the future

The ultimate vision for the industry is the self-optimising, autonomous plant – and the increasing deployment of artificial intelligence (AI) across the sector is bringing the reality of this ever closer. However, while refining has been an early adopter of many digital tools, the industry is yet to fully realise the potential of industrial AI. That is, in no small part, because AI and machine learning are too often looked at in isolation, rather than being combined with existing engineering capabilities – models, tools and expertise, to deliver a practical solution that effectively optimises refinery assets. ... Machine learning is used to create the model, leveraging simulation, plant or pilot plant data. The model also uses domain knowledge, including first principles and engineering constraints, to build an enriched model — without requiring the user to have deep process expertise or be an AI expert. The solutions supported by hybrid models act as a bridge between the first principles-focused world of the past and the “smart refinery” environment of the future. They are the essential catalyst helping to enable the self-optimising plant.
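
To make the hybrid-model idea concrete, here is a minimal, generic sketch of one common pattern: a first-principles correlation corrected by a machine-learning residual, using scikit-learn and entirely synthetic data. The correlation, variables and numbers are assumptions for illustration, not any vendor's actual refinery models.

```python
# Hedged sketch: hybrid model = first-principles estimate + ML residual correction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def first_principles_yield(temperature, pressure):
    # Placeholder engineering correlation (assumption for illustration only).
    return 0.8 * np.log(pressure) + 0.002 * temperature

# Synthetic "plant / pilot-plant" measurements standing in for real data.
rng = np.random.default_rng(0)
X = rng.uniform([500.0, 10.0], [700.0, 50.0], size=(200, 2))  # temperature, pressure
y_measured = first_principles_yield(X[:, 0], X[:, 1]) + 0.05 * np.sin(X[:, 0] / 50.0)

# Train the ML part only on the residual the physics model cannot explain.
residual = y_measured - first_principles_yield(X[:, 0], X[:, 1])
ml_correction = GradientBoostingRegressor().fit(X, residual)

def hybrid_predict(temperature, pressure):
    # Domain knowledge provides the backbone; ML refines it.
    base = first_principles_yield(temperature, pressure)
    return base + ml_correction.predict([[temperature, pressure]])[0]

print(hybrid_predict(650.0, 30.0))
```

The design choice illustrated here is that the data-driven component never replaces the engineering model; it only learns the gap between it and observed behaviour, which keeps predictions anchored to known constraints.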


Microsoft's new feature uses AI to make video chat less weird

Eye Contact uses the custom artificial intelligence (AI) engine in the Surface Pro X's SQ1 SoC, so you shouldn't see any performance degradation, as much of the complex real-time computational photography is handed off to it and to the integrated GPU. Everything is handled at a device driver level, so it works with any app that uses the front-facing camera -- it doesn't matter if you're using Teams or Skype or Slack or Zoom, they all get the benefit. There's only one constraint: the Surface Pro X must be in landscape mode, as the machine learning model used in Eye Contact won't work if you hold the tablet vertically. In practice that shouldn't be much of an issue, as most video-conferencing apps assume that you're using a standard desktop monitor rather than a tablet PC, and so are optimised for landscape layouts. The question for the future is whether this machine-learning approach can be brought to other devices. Sadly it's unlikely to be a general-purpose solution for some time; it needs to be built into the camera drivers, and Microsoft here has the advantage of owning both the camera software and the processor architecture in the Surface Pro X.


Digital transformation: 5 ways the pandemic forced change

Zemmel says that the evolution of the role of the CIO has been accelerated as well. He sees CIOs increasingly reporting to the CEO because they increasingly have a dual mandate. In addition to their historical operational role running the IT department, they now are also customer-facing and driving revenue. That mandate is not new for forward-looking IT organizations, but the pandemic has made other organizations hyper-aware of IT’s role in driving change quickly. CIOs are becoming a sort of “chief influencing officer who is breaking down silos and driving adoption of digital products,” Zemmel adds. Experian’s Libenson puts it this way: “The pandemic has forced us to be closer to the business than before. We had a seat at the table before. But I think we will be a better organization after this.” The various panelists gave nods to the role of technology, especially the use of data; Zemmel describes the second generation of B2B digital selling as “capturing the ‘digital exhaust’ to drive new analytic insights and using data to drive performance and create more immersive experiences.”


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

Graphics APIs have come a long way from a small set of basic commands allowing limited control of the configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered the industry standard. ... This article describes Diligent Engine, a lightweight cross-platform graphics API abstraction layer designed to solve these problems. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGL ES. Diligent Engine exposes a common C/C++ front end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as the graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. The full source code is available for download on GitHub and is free to use.


Supporting mobile workers everywhere

It is amazing how quickly video conferencing has been accepted as part of the daily routine. Such is the success of services like Zoom that CIOs need to reassess priorities. In a workforce where people are working from home regularly, remote access is not limited to a few, but must be available to all. Mobile access and connectivity for the mobile workforce needs to extend to employees' homes. Traditional VPN access has scalability limitations and is inefficient when used to provide access to modern SaaS-based enterprise applications. To reach all home workers, some organisations are replacing their VPNs with SD-WANs. There is also an opportunity to revisit bring-your-own-device (BYOD) policies. If people have access to computing at home and their devices can be secured, then CIOs should question the need to push out corporate laptops to home workers. While IT departments may have traditionally deployed virtual desktop infrastructure (VDI) to stream business applications to thin client devices, desktop as a service (DaaS) is a natural choice for delivering a managed desktop environment to home workers. For those organisations that are reluctant to use DaaS in the public cloud, as Oxford University Social Sciences Division (OSSD) has found (see below), desktop software can easily be delivered in a secure and manageable way using containers.


Secure data sharing in a world concerned with privacy

Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a new genre of technologies which can help them bridge competing privacy frameworks. PETs are a category of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards. One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
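
As a concrete illustration of computing on ciphertexts, here is a hedged sketch using the open-source python-paillier library. Paillier is a partially homomorphic scheme rather than full HE, but the workflow (encrypt, compute on encrypted values, decrypt only the aggregate) is the same in spirit; the salary figures are made up.

```python
# Hedged sketch: one party encrypts data, another computes on it blindly.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Party A encrypts sensitive values before sharing them.
salaries = [52_000, 61_500, 48_250]
encrypted = [public_key.encrypt(s) for s in salaries]

# Party B works only on ciphertexts: Paillier supports adding ciphertexts
# together and multiplying a ciphertext by a plain scalar.
encrypted_total = sum(encrypted[1:], encrypted[0])
encrypted_mean = encrypted_total * (1 / len(salaries))

# Only the key holder can recover the result; the raw salaries stay hidden.
print(private_key.decrypt(encrypted_mean))  # ~53916.67
```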



When -- and when not -- to use cloud native security tools

Cloud native security tools like Amazon Inspector and Microsoft Azure Security Center automatically inspect the configuration of common types of cloud workloads and generate alerts when potential security problems are detected. Google Cloud Data Loss Prevention and Amazon Macie provide similar functionality for data by automatically detecting sensitive information that is not properly secured and alerting the user. To protect data even further there are tools, such as Amazon GuardDuty and Azure Advanced Threat Protection, that monitor for events that could signal security issues within cloud-based and on-premises environments. ... IT teams use services like Google Cloud Armor, AWS Web Application Firewall and Azure Firewall to configure firewalls that control network access to applications running in the cloud. Related tools provide mitigation against DDoS attacks that target cloud-based resources. ... Data stored on the major public clouds can be encrypted electively -- or is encrypted automatically by default -- using native functionality built into storage services like Amazon S3 and Azure Blob Storage. Public cloud vendors also offer cloud-based key management services, like Azure Key Vault and Google Key Management Service, for securely keeping track of encryption keys.
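
As one small example of that native functionality, default server-side encryption on an Amazon S3 bucket can be enabled programmatically. A minimal boto3 sketch, assuming a hypothetical bucket name, valid AWS credentials and KMS-managed keys:

```python
# Hedged sketch: turn on default server-side encryption for an S3 bucket.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            # Use KMS-managed keys; "AES256" would use S3-managed keys instead.
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```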


Four Case Studies for Implementing Real-Time APIs

Unreliable or slow performance can directly impact or even prevent the adoption of new digital services, making it difficult for a business to maximize the potential of new products and expand its offerings. Thus, it is not only crucial that an API processes calls at acceptable speeds, but it is equally important to have an API infrastructure in place that is able to route traffic to resources correctly, authenticate users, secure APIs, prioritize calls, provide proper bandwidth, and cache API responses.  Most traditional APIM solutions were made to handle traffic between servers in the data center and the client applications accessing those APIs externally (north-south traffic). They also need constant connectivity between the control plane and data plane, which requires using third-party modules, scripts, and local databases. Processing a single request creates significant overhead — and it only gets more complex when dealing with the east-west traffic associated with a distributed application.  Considering that a single transaction or request could require multiple internal API calls, the bank found it extremely difficult to deliver good user experiences to their customers.


Building the foundations of effective data protection compliance

Data protection by design and default needs to be planned within the whole system, depending on the type of data and how much data a business has. Data classification is the categorization of data according to its level of sensitivity or value, using labels. These are attached as visual markings and as metadata within the file. When classification is applied, the metadata ensures that the data can only be accessed or used in accordance with the rules that correspond to its label. Businesses need to mitigate attacks and employee mistakes by starting with policy, assessing who has access. They should then select a tool that fits the policy, not the other way round; you should never be faced with selecting a tool and then having to rewrite your policy to fit it. This approach supports users with automation and labelling, which enhances the downstream technology. Once data is appropriately classified, security tools such as Data Loss Prevention (DLP), policy-based email encryption, access control and data governance tools are exponentially more effective, as they can act on the information provided by the classification label and the metadata that tells them how data should be managed and protected, as sketched below.
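
A toy illustration of the label-plus-metadata idea (not any specific vendor's product): a classification label stored as sidecar metadata drives a downstream handling decision, the way a DLP or email-encryption tool would read it. File names, labels and rules are assumptions for illustration.

```python
# Hedged sketch: classification labels as metadata that downstream checks enforce.
import json
from pathlib import Path

# Policy first: what each label permits (illustrative rules only).
RULES = {
    "public": {"allow_external_email": True},
    "confidential": {"allow_external_email": False},
}

def classify(path: Path, label: str) -> None:
    # Store the label as sidecar metadata next to the file.
    Path(f"{path}.meta.json").write_text(json.dumps({"classification": label}))

def may_email_externally(path: Path) -> bool:
    # A downstream tool reads the label and applies the corresponding rule.
    meta = json.loads(Path(f"{path}.meta.json").read_text())
    return RULES[meta["classification"]]["allow_external_email"]

classify(Path("q3_forecast.xlsx"), "confidential")
print(may_email_externally(Path("q3_forecast.xlsx")))  # False
```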


Q&A on the Book Fail to Learn

People often fear failure because of the stakes associated with it. When we create steep punishment systems and “one-strike-you’re-out” rules, it’s only natural to be terrified of messing up. This is where we need to think more like game designers. Games encourage trial and error because the cost of starting over in a game is practically nothing. If I die playing Halo, I get to respawn and try again immediately. We need to create more “respawn” options in the rest of our lives. This is something that educators can do in their course design. But it’s also something we can encourage as managers, company leaders, or simply as members of society. The best way to do this is to start talking more about our mistakes. These are things we should be able to celebrate, laugh over, shake our collective heads at, and eventually grow from. ... If we go back to people like Dyson and Edison, you see failure-to-success ratios that reach five-thousand or even ten-thousand to one. A venture capitalist who interviewed hundreds of CEOs arrived at the same ratio for start-up companies making it big: about a 10,000:1 failure-to-success ratio. Now, we probably don’t need that many failures in every segment of our lives, but think about how far off most of us are from these numbers.



Quote for the day:

"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani