Daily Tech Digest - September 24, 2020

What’s the deal with cross-border data transfers after Brexit?

It remains unclear whether the UK will receive an adequacy decision after the end of the Brexit transition period. The main legal argument in favour of the UK receiving an adequacy decision is that no other third country has laws as similar to the GDPR as the Data Protection Act 2018. Since the EU has already granted adequacy decisions to several jurisdictions with less similar laws, the argument goes that the UK is the most deserving candidate for an adequacy decision. The main legal argument against the UK receiving an adequacy decision is that the UK conducts extensive surveillance for the purposes of national security, the same activity that resulted in the Privacy Shield being overturned by the CJEU in Schrems II. On 16 September 2020, the European Parliament released comments on the Schrems II decision, in which it formally acknowledged the argument that the UK might not receive an adequacy decision due to its national security surveillance activities. This also casts doubt on whether existing adequacy decisions will hold in jurisdictions that have laws much less similar to the GDPR and that have significant national security operations.


CQRS Is an Anti-Pattern for DDD

CQRS conflicts with one of the main principles of writing software: low coupling. “If changing one module in a program requires changing another module, then coupling exists.” Almost every pattern in software addresses this problem, directly or indirectly. How do you divide your system into components so that you can change one component with minimum impact on the others? And what is the right responsibility in the Single Responsibility Principle? It is really hard for me to accept that you can evolve the read and write parts of the system separately. Reading and writing are not the right responsibilities for building domain models, because business people do not think in terms of reading and writing. The real value lies in process flows. Only the most minor changes in a process flow would affect only the read or only the write part of the domain model. Maybe you are thinking of my example with the marketing application? It does sound a bit like a CRUD application, right? Not the best candidate for CQRS. Well, there were indeed more complex requirements in my original project. For example, when you assign a salesperson to a customer, the system must decide whether he/she is the primary salesperson or a supporting salesperson.
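
The salesperson example can be made concrete. The sketch below (hypothetical names, not the original project's code) puts a single process-flow rule, "the first salesperson assigned becomes primary", in the write model, while the UI queries a projection of the same data. Change the rule (say, to allow co-primaries) and both the command handler and the projection must change together, which is exactly the coupling being argued here.

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    """Write model: owns the assignment rule."""
    name: str
    salespeople: dict = field(default_factory=dict)  # person -> role

    def assign_salesperson(self, person: str) -> str:
        # Business rule: the first salesperson assigned becomes primary,
        # everyone assigned after that is supporting.
        role = "primary" if not self.salespeople else "supporting"
        self.salespeople[person] = role
        return role

def customer_overview(customer: Customer) -> dict:
    """Read model: the projection the UI queries. It mirrors the same
    primary/supporting distinction, so a rule change touches both sides."""
    return {
        "customer": customer.name,
        "primary": [p for p, r in customer.salespeople.items() if r == "primary"],
        "supporting": [p for p, r in customer.salespeople.items() if r == "supporting"],
    }

c = Customer("Acme")
c.assign_salesperson("Alice")
c.assign_salesperson("Bob")
print(customer_overview(c))  # Alice is primary, Bob supporting
```

A process-flow change such as "a customer may have two primaries" forces edits to `assign_salesperson` and to every projection that interprets the roles, so the read and write sides do not in practice evolve independently.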


Working with Local Storage in a Blazor Progressive Web App

Fortunately, accessing local storage is easy once you've added Chris Sainty's Blazored.LocalStorage NuGet package to your application (the project and its documentation can be found on GitHub). Before anything else, to use Sainty's package, you need to add it to your project's Services collection. Normally, I'd do that in my project's Startup class, but the Visual Studio template for a PWA doesn't include a Startup class. So, in a PWA, you'll need to add Sainty's package to the Services collection in the Program.cs file. The Program.cs file in the PWA template already includes code to add an HttpClient to the Services collection. You can add Sainty's package by tacking on a call to his AddBlazoredLocalStorage extension method ... It's easy to check what's in local storage: Press F12 to bring up the developer tools panel in either the browser or PWA version of your app, click on the Application tab (which may be hidden under the tools overflow menu icon), and select Storage from the left-hand list. While the code is straightforward, I found debugging the resulting application ... problematic.


Credential stuffing is just the tip of the iceberg

Credential stuffing attacks are a key concern for good reason. High-profile breaches—such as those of Equifax and LinkedIn, to name two of many—have resulted in billions of compromised credentials floating around on the dark web, feeding an underground industry of malicious activity. For several years now, about 80% of breaches that have resulted from hacking have involved stolen and/or weak passwords, according to Verizon’s annual Data Breach Investigations Report. Additionally, research by Akamai determined that three-quarters of credential abuse attacks against the financial services industry in 2019 were aimed at APIs. Many of those attacks are conducted on a large scale to overwhelm organizations with millions of automated login attempts. Yet credential stuffing is only one of many threats to APIs, as catalogued in the 2019 OWASP API Security Top 10, and the majority move well beyond it. In many instances they are not automated, are much more subtle and come from authenticated users. APIs, which are essential to an increasing number of applications, are specialized entities performing particular functions for specific organizations. Someone exploiting a vulnerability in an API used by a bank, retailer or other institution could, with a couple of subtle calls, dump the database, drain an account, cause an outage or do all kinds of other damage to impact revenue and brand reputation.
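
One of those subtle, authenticated threats is Broken Object Level Authorization, the top entry in the 2019 OWASP API Security Top 10. The minimal sketch below (hypothetical endpoint and data, for illustration only) shows how an authenticated caller can read another customer's record when a handler checks the session but not ownership of the requested object:

```python
# Toy account store standing in for a real database.
accounts = {
    "acct-1": {"owner": "alice", "balance": 1200},
    "acct-2": {"owner": "bob",   "balance": 9800},
}

def get_account_vulnerable(user: str, account_id: str) -> dict:
    # The caller is authenticated, but there is no ownership check:
    # any valid session can read any account just by varying the ID.
    return accounts[account_id]

def get_account_fixed(user: str, account_id: str) -> dict:
    # Object-level authorization: verify the object belongs to the caller.
    account = accounts[account_id]
    if account["owner"] != user:
        raise PermissionError("not your account")
    return account

# An authenticated user probing a neighbouring ID:
leaked = get_account_vulnerable("alice", "acct-2")   # succeeds: data leak
try:
    get_account_fixed("alice", "acct-2")
except PermissionError:
    print("blocked")
```

No brute force or automation is involved; a single well-formed request with someone else's identifier is enough, which is why these attacks evade volumetric defences aimed at credential stuffing.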


CISA: LokiBot Stealer Storms Into a Resurgence

“LokiBot has stolen credentials from multiple applications and data sources, including Windows operating system credentials, email clients, File Transfer Protocol and Secure File Transfer Protocol clients,” according to the alert, issued Tuesday. “LokiBot has [also] demonstrated the ability to steal credentials from…Safari and Chromium and Mozilla Firefox-based web browsers.” To boot, LokiBot can also act as a backdoor into infected systems to pave the way for additional payloads. Like its Norse namesake, LokiBot is a bit of a trickster, and disguises itself in diverse attachment types, sometimes using steganography for maximum obfuscation. For instance, the malware has been disguised as a .ZIP attachment hidden inside a .PNG file that can slip past some email security gateways, or hidden as an ISO disk image file attachment. It also uses a number of application guises. “Since LokiBot was first reported in 2015, cyber actors have used it across a range of targeted applications,” CISA noted. For instance, in February, it was seen impersonating a launcher for the popular Fortnite video game. Other tactics include the use of zipped files along with malicious macros in Microsoft Word and Excel, and leveraging the exploit CVE-2017-11882.


Does Cybersecurity Have a Public Image Problem?

“In effect, the portrayal in media assigns an attribute of quick decisive thinking to the process – an attribute that potential cybersecurity candidates might not view themselves as possessing,” he said. “The reality is that most cybersecurity incidents aren’t as adversarial as portrayed on TV, and that two of the most important skills to become a professional in a cybersecurity discipline are strong problem-solving abilities and attention to detail.” Chris Hauk, consumer privacy champion at Pixel Privacy, argued that “most people think cybersecurity involves maneuvering a 3D maze filled with grinning skeletons that represent malware that must be zapped by the BFG virus zapper” rather than applying patches to keep operating systems and applications up to date and ensuring a firewall is blocking what it is supposed to be guarding against. “It is all character based or a bit of point and click, and quite boring.” He claimed that the skills for cybersecurity mostly consist of common sense: guarding yourself against everyday threats on the internet by running anti-virus and anti-malware protection, and avoiding clicking on links and attachments in email and text messages.


Microservices: 5 Questions to Ask before Making that Decision

When it comes to Microservices, the success stories and the concepts are truly mesmerizing. Having a collection of services, each doing one thing in the business domain, paints a perfect image of a lean architecture. However, we shouldn’t forget that these services need to work together to deliver business value to their end users. ... Knowing the business domain inside out, along with experience in domain-driven design, is crucial to identifying the bounded context of each service. Since we allocate a team per Microservice and allow them to work with minimal interference, getting the bounded context wrong would increase the communication overhead and inter-team dependencies, impacting the overall development speed. So for a project starting from scratch, selecting Microservices is a risky move. ... Microservices aren’t a silver bullet or a superb architecture style that suits everyone. Since we need to deal with distributed systems, they could be overkill for many. Therefore, it’s essential to assess whether the issues you are experiencing with the Monolith are solvable by Microservices.


To Deliver Better Customer Experience Brands Need To Develop An Empathetic Musculature

To become more empathetic, brands need to start thinking holistically about it. In fact, I believe that they need to start thinking about developing an empathetic musculature for their organization, a concept that I started musing about in Punk CX. If they don't, then, according to Rana el Kaliouby, CEO of Affectiva, the danger is that "the need to build empathy will get reduced down to a training course." So, what's it going to take to build an empathetic musculature at an organizational level? Well, if you look up 'musculature' in the dictionary, it is defined as 'the system or arrangement of muscles in a body or a body part.' So, to develop muscles, you have to train. But you have to train with a purpose, whether that is to stay fit, lose weight, rehabilitate after an injury or to compete. This will take time, discipline and commitment, as it is both a habit and a capability that we will need to develop, nurture and maintain if we are to see the benefits. That, in turn, will require strategy, systems, processes, design, technology, leadership and the right sort of people and training to help us get there. Without a doubt, it will be hard, and we won't necessarily get it right the first time.


The perseverance of resilient leadership: Sustaining impact on the road to Thrive

As leaders, we need to empathize with and acknowledge the myriad challenges our people are currently coping with—many of which have no end in sight. Psychologists describe “ambiguous loss” as loss that is inexplicable, outside one’s control, and without a definitive endpoint. Though typically experienced when loved ones are missing or suffering from progressive chronic illness, ambiguous loss surely also describes the uncertainties our colleagues are enduring today: the loss of our familiar way of being in the world is difficult to understand, beyond our control, and uncertain as to when we can return to some semblance of normal. As we discuss in our Bridge across uncertainty guide for leaders, there are three types of stress: good stress, tolerable stress, and toxic stress, the last of which is critical to relieve before people become overwhelmed. With both ambiguous loss and toxic stress, better defining an endpoint and reducing uncertainty are important ways we can support our teams. For example, Deloitte has hosted Zoom-based workshops where a cross-section of our people helped to inform return-to-the-workplace programs—giving them a greater sense of control.


Q&A on the Book “Problem? What Problem?” with Ben Linders

If an organization is working in an agile way, its approach to solving problems should also be agile-based. It has to fit in and be congruent with the company's and people's agile mindset to be effective. What does problem-solving look like when we are using an agile mindset and agile thinking? Here's my view on this. Many problems relate to the way people work together. Even where every person does the best they can, problems often arise when things come together. Problem-solving practices should help us to understand how individuals interact and to solve collaboration issues. There are often too many problems to solve. We need to focus our effort on solving the impediments that have the biggest impact on outcomes. Solve the ones that affect our ability to deliver something that works, right now. Collaboration is key, not only within teams but also between teams and when working with stakeholders. Problem-solving practices should enable us to visualize the system and collaboratively look for solutions. They should engage people from the start and enable them to self-organize and come up with solutions that work for them. While we're working on a problem, things will change. We'll learn new things along the way.



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - September 23, 2020

If we put computers in our brains, strange things might happen to our minds

The difference between having a tool in your hand and having a brain-computer interface -- essentially just another tool, albeit an advanced one -- is that the BCI goes directly to the neurons that are helping you interact with the world, says Justin Sanchez, a tech fellow at the Battelle Memorial Institute. "So the potential for those neurons to be directly adapted for the brain computer interface is that much higher [than with other tools]… there is adaptation or plasticity of your neurons when you use a brain interface and that plasticity can change in a wide variety of ways depending upon who you are," he says. Research published last year found that even the use of a non-invasive BCI (where brain signals are read by sensors worn on, rather than in, the head) for a short time can induce brain plasticity. The study, which asked people to imagine particular movements, found changes after just one hour of use.  The brain's ability to rewire itself in this way can come in particularly handy in people who've had damage to their nervous systems -- for example, in people who've had strokes or spinal cord injuries.  That plasticity is particularly pertinent for BCIs, as researchers are hoping to use the systems to help people with brain and spinal cord injuries to overcome paralysis of their limbs or a lost sense of touch in parts of their body.


Zerologon explained: Why you should patch this critical Windows Server flaw now

Zerologon, tracked as CVE-2020-1472, is an authentication bypass vulnerability in the Netlogon Remote Protocol (MS-NRPC), a remote procedure call (RPC) interface that Windows uses to authenticate users and computers on domain-based networks. It was designed for specific tasks such as maintaining relationships between members of domains and the domain controller (DC), or between multiple domain controllers across one or multiple domains and replicating the domain controller database. One of Netlogon's features is that it allows computers to authenticate to the domain controller and update their password in the Active Directory, and it's this particular feature that makes the Zerologon flaw dangerous. In particular, the vulnerability allows an attacker to impersonate any computer to the domain controller and change their password, including the password of the domain controller itself. This results in the attacker gaining administrative access and taking full control of the domain controller and therefore the network. Zerologon is a privilege escalation vulnerability and is rated as critical by Microsoft even though the company said in the original advisory that exploitation was less likely. 
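
The root cause, as documented in the public research on CVE-2020-1472, is that Netlogon computes credentials with AES-CFB8 using an all-zero initialization vector, so for roughly 1 in 256 session keys an all-zero input encrypts to an all-zero output, letting an attacker authenticate with a zeroed credential after a few hundred attempts. The sketch below reproduces that statistical property of the CFB8 mode itself, substituting a hash-based toy function for AES (an assumption for brevity; the 1-in-256 behaviour depends only on the mode's byte-at-a-time feedback, not on the block cipher):

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES: any pseudorandom block function works here,
    # because the flaw lies in the CFB8 *mode*, not in the cipher.
    return hashlib.sha256(key + block).digest()[:16]

def cfb8_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    register = bytearray(iv)
    out = bytearray()
    for p in plaintext:
        keystream_byte = toy_block_encrypt(key, bytes(register))[0]
        c = p ^ keystream_byte
        out.append(c)
        register = register[1:] + bytes([c])  # shift the ciphertext byte in
    return bytes(out)

# Zerologon setup: an all-zero IV (fixed by the protocol) and an
# all-zero client credential supplied by the attacker.
zero_iv = bytes(16)
zero_credential = bytes(8)

hits, trials = 0, 100_000
for i in range(trials):
    key = i.to_bytes(16, "big")  # stand-in for a fresh random session key
    if cfb8_encrypt(key, zero_iv, zero_credential) == bytes(8):
        hits += 1

print(f"{hits}/{trials} keys yield an all-zero credential (~1/256 expected)")
```

With a zero IV and zero plaintext, the ciphertext is all zeros exactly when the first keystream byte is zero, a 1-in-256 event per session key; since the protocol lets the client retry with fresh keys, a brute force succeeds quickly.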


14 open source tools to make the most of machine learning

Apache Mahout provides a way to build environments for hosting machine learning applications that can be scaled quickly and efficiently to meet demand. Mahout works mainly with another well-known Apache project, Spark, and was originally devised to work with Hadoop for the sake of running distributed applications, but has been extended to work with other distributed back ends like Flink and H2O. ... Apple’s Core ML framework lets you integrate machine learning models into apps, but uses its own distinct learning model format. The good news is you don’t have to pretrain models in the Core ML format to use them; you can convert models from just about every commonly used machine learning framework into Core ML with Core ML Tools. Core ML Tools runs as a Python package, so it integrates with the wealth of Python machine learning libraries and tools. Models from TensorFlow, PyTorch, Keras, Caffe, ONNX, Scikit-learn, LibSVM, and XGBoost can all be converted. Neural network models can also be optimized for size by using post-training quantization (e.g., to a small bit depth that’s still accurate).
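
Post-training quantization itself is simple to illustrate. The generic sketch below (not Core ML Tools' actual API, which wraps this kind of logic behind its own utilities) maps float weights onto 8-bit integers with an affine scale and offset; the reconstruction error stays bounded by half the quantization step, which is why accuracy often survives the size reduction:

```python
def quantize_8bit(weights):
    """Affine post-training quantization of float weights to 8-bit ints.
    A generic sketch of the technique, not a specific library's API."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # step size; guard all-equal case
    q = [round((w - lo) / scale) for w in weights]   # ints in 0..255
    dequantized = [v * scale + lo for v in q]        # what inference sees
    return q, dequantized

weights = [-1.2, -0.4, 0.0, 0.7, 1.5]
q, approx = quantize_8bit(weights)
max_err = max(abs(w - a) for w, a in zip(weights, approx))
print(q, round(max_err, 4))
```

Each stored weight shrinks from 32 bits to 8, and the worst-case error here is half the step size (about 0.0053 for this range), small relative to the weights themselves.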


Easing the pressures of new technologies on the Internet

One constant we have witnessed over the history of the Internet is that when underlying technologies improve, the new experiences they enable quickly follow, taking full advantage of the new technology and pushing it to its limits. As more and more devices are able to connect to the Internet at ever higher speeds, including through 5G connectivity, the demand for online content will grow dramatically. Much of this traffic will be video-heavy and delivered in high definition. For example, Analysys Mason predicts that 5G will be a significant enabler of cloud gaming due to the lower latencies and higher speeds it offers. Video delivered at faster frame rates and the need for 360-degree content for the growing use of AR and VR is likely to result in around four times as much traffic as typical video. Another example is streaming of live sports events. The 2019 VIVO Indian Premier League cricket tournament set records for reported online viewership, exceeding the total 2018 viewership within the first three weeks of the 2019 tournament. In fact, the final saw 18.6 million concurrent viewers, an increase of 80% over the previous year – and with 91% watching via mobile.


How Automation Is Changing the Landscape of Enterprises

Convenience is a great category for this. However, in larger retail environments or when the packaging is less structured, other experiences limit friction with much less costly technology. For example, in Europe, it is common to see retailers that provide mobile self-scanning solutions or banks of modular self-checkout stands, which allow customers to eliminate the wait time they typically encounter at a traditional checkout. Technology improvements in computer vision have also helped start-ups develop shopping carts that can automatically identify products as they are placed within the carts, creating yet another option. One truth that will remain constant in retail is that customer convenience is a core value proposition, so limiting friction in the buying experience will always have a place in the market. ... The next evolution is to leverage artificial intelligence technologies like machine learning, computer vision, natural language processing, prescriptive analytics, and others to further eliminate the cognitive load of process execution. In the short term, roles focused on repetitive tasks, especially in what have typically been termed back-office functions that do not directly interact with shoppers, patients, and customers, will be the most impacted by RPA.


What is Intelligent Automation

Just as machines replaced humans on the factory floor, Intelligent Automation solutions have started replacing humans in every industry, freeing their time for more creative and innovative tasks. Areas including marketing and sales, human resources, customer support, finance, IT support, business process management and operations excellence are using Intelligent Automation to drive more value. In recent years these emerging technologies have gained substantial momentum. This, in turn, has increased the number of technology firms and venture investors shifting their attention towards implementing intelligent automation solutions. Major automakers like Audi, BMW, Mercedes-Benz, Volvo and Nissan are planning to introduce autonomous vehicles that use IA. IBM’s Watson processes huge amounts of textual information in order to respond quickly to complex requests for medical treatment plans. IA is also used in commercial processes, such as marketing systems that tailor offers to customers based on their preferences, and credit card processing systems that help detect fraudulent activity.


Microsoft announces Power BI Teams integration, NLP and per-user Premium subscription

While Microsoft is playing catch-up here with other BI products that already offered narrative summarizations, it has worked hard to integrate its own implementation fully into the Power BI paradigm. The feature is surfaced through a drag-and-drop visual that is contextually updated when the underlying data changes through a filter, a slicer or the cross-filtering that takes place when a data element in another visual is selected. This makes the learning curve negligible for existing Power BI users. And combining smart narratives with "Q&A" natural language query capabilities makes Power BI a strong contender in the augmented analytics arena. Another major area of enhancement to Power BI's usability comes in the form of a dedicated Power BI add-in application for Microsoft's Teams collaboration platform, released as a preview. The Teams integration includes the ability to browse reports, dashboards and workspaces and directly embed links to them in Teams channel chats. It's not just about linking, though, as Teams users can also browse Power BI datasets, either through an alphabetical listing or by reviewing a palette of recommended ones. In both cases, datasets previously marked as Certified or Promoted will be identified as such, and Teams users will have the ability to view their lineage, generate template-based reports on them, or just analyze their data in Excel.


Adopting interaction analytics to improve contact centre performance

Interaction analytics allows organisations to analyse 100% of calls or text-based conversations that come into the contact centre, automatically. By adopting this technology, with the right partner, organisations are moving away from relying on an inconsistent and subjective sample to a holistic, consistent and objective view. “Analytics technologies allow organisations to take away the manual effort of monitoring contact centre performance and let technology guide everything, from the calls of interest, to the issues of interest, to the opportunities, to the challenges and the complaints. Interaction analytics provides a holistic view of what’s really going on with contact centre interactions,” continued Sherlock. The return on investment is also twofold, both in terms of cost savings (there’s no need for people to manage or monitor every call, as interaction analytics automates the experience) and in identifying new revenue opportunities. “Analytics can help produce sales opportunities and allow organisations to collect more revenue by upselling to the customer base — a holistic view of interactions will allow those in sales to see where customers consistently raise issues about a particular product or service and then go back to source to modify that product/service,” explained Sherlock.


What Does an Enterprise Architect Do Exactly?

Enterprise architects are responsible for planning how to use and manage all the IT functions of an organization. They must find a way to make them as affordable and efficient as possible. It's up to the enterprise architect to develop the plan, and they have a great deal of freedom in deciding what is best. They must balance this against the needs of the organization they work for and its customers. The plan must make the most effective use of enterprise architecture possible and address any issues the organization currently faces. If there's a better way to use available resources, it should be included. Each plan an enterprise architect comes up with must align with the goals of the business they work for. Perhaps the business wants to decrease the time it takes to send and receive information; switching to a faster server may be an effective strategy. Enterprise architects must also be able to communicate their plans to everyone else. Anyone who doesn't understand all the steps won't be able to implement them in their work. Creating a strategy for how to manage an organization's IT is only the first step. After that, it's the enterprise architect's responsibility to implement it.


Enterprise architecture strategy experts offer pandemic tips

Right now, the information needs to be sharper, and it needs to be opinionated. Don't reinvent the wheel. Use the existing artifacts you have. Insert yourself as the epicenter of actionable information and sharpen the insights. You really want to move the needle and come to the table opinionated. Don't overwhelm your stakeholders with options. Your business model canvases and capability maps are great in EA but far too detailed for the distracted executive of today. So, we're pivoting into executive onboarding dossiers. When new executives come on board, we give them almost a CliffsNotes version, and it saves them hours. Many other examples of your application portfolios can be turned into run books, succession plans and flex workforce plans. The key takeaway is we want to keep EA relevant. It's about adapting to the times, sharpening your narrative with the business and not being afraid to step on some toes. ... Lots of organizations are realizing that business capability models are the most powerful areas they can attack as they struggle with COVID. They're identifying and focusing on the most important capabilities to help them survive through the pandemic and then throwing in a couple of capabilities that differentiate the organization when we come out the other side of COVID.



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - September 22, 2020

How industrial AI will power the refining industry in the future

The ultimate vision for the industry is the self-optimising, autonomous plant – and the increasing deployment of artificial intelligence (AI) across the sector is bringing the reality of this ever closer. However, while refining has been an early adopter of many digital tools, the industry is yet to fully realise the potential of industrial AI. That is, in no small part, because AI and machine learning are too often looked at in isolation, rather than being combined with existing engineering capabilities – models, tools and expertise, to deliver a practical solution that effectively optimises refinery assets. ... Machine learning is used to create the model, leveraging simulation, plant or pilot plant data. The model also uses domain knowledge, including first principles and engineering constraints, to build an enriched model — without requiring the user to have deep process expertise or be an AI expert. The solutions supported by hybrid models act as a bridge between the first principles-focused world of the past and the “smart refinery” environment of the future. They are the essential catalyst helping to enable the self-optimising plant.
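
The hybrid-model idea can be sketched in a few lines: keep the first-principles estimate and let a learned term absorb the systematic error the physics misses. Everything below is illustrative (invented numbers, and a deliberately simple linear fit standing in for real machine learning on plant or pilot-plant data):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: the 'learned' correction."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def physics_model(feed_rate):
    # First-principles estimate of, say, unit yield (hypothetical relation).
    return 0.8 * feed_rate

# Plant measurements: the real process drifts from the physics model.
feed = [1.0, 2.0, 3.0, 4.0, 5.0]
measured = [1.1, 1.95, 2.85, 3.65, 4.55]

# Train the correction on the residuals the physics model leaves behind.
residuals = [m - physics_model(f) for f, m in zip(feed, measured)]
a, b = fit_line(feed, residuals)

def hybrid_model(feed_rate):
    # Physics prediction plus the data-driven correction.
    return physics_model(feed_rate) + a * feed_rate + b

err_physics = sum(abs(m - physics_model(f)) for f, m in zip(feed, measured))
err_hybrid = sum(abs(m - hybrid_model(f)) for f, m in zip(feed, measured))
print(round(err_physics, 3), round(err_hybrid, 3))
```

On this toy data the total error drops from about 2.1 to under 0.1: the physics supplies the structure and constraints, and the learned term corrects the mismatch, which is the essence of the enriched-model approach, though real refinery models involve far more variables and engineering constraints.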


Microsoft's new feature uses AI to make video chat less weird

Eye Contact uses the custom artificial intelligence (AI) engine in the Surface Pro X's SQ1 SoC, so you shouldn't see any performance degradation, as much of the complex real-time computational photography is handed off to it and to the integrated GPU. Everything is handled at a device driver level, so it works with any app that uses the front-facing camera -- it doesn't matter if you're using Teams or Skype or Slack or Zoom, they all get the benefit. There's only one constraint: the Surface Pro X must be in landscape mode, as the machine learning model used in Eye Contact won't work if you hold the tablet vertically. In practice that shouldn't be much of an issue, as most video-conferencing apps assume that you're using a standard desktop monitor rather than a tablet PC, and so are optimised for landscape layouts. The question for the future is whether this machine-learning approach can be brought to other devices. Sadly it's unlikely to be a general-purpose solution for some time; it needs to be built into the camera drivers and Microsoft here has the advantage of owning both the camera software and the processor architecture in the Surface Pro X.


Digital transformation: 5 ways the pandemic forced change

Zemmel says that the evolution of the role of the CIO has been accelerated as well. He sees CIOs increasingly reporting to the CEO because they increasingly have a dual mandate. In addition to their historical operational role running the IT department, they now are also customer-facing and driving revenue. That mandate is not new for forward-looking IT organizations, but the pandemic has made other organizations hyper-aware of IT’s role in driving change quickly. CIOs are becoming a sort of “chief influencing officer who is breaking down silos and driving adoption of digital products,” Zemmel adds. Experian’s Libenson puts it this way: “The pandemic has forced us to be closer to the business than before. We had a seat at the table before. But I think we will be a better organization after this.” The various panelists gave nods to the role of technology, especially the use of data; Zemmel describes the second generation of B2B digital selling as “capturing the ‘digital exhaust’ to drive new analytic insights and using data to drive performance and create more immersive experiences.”


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

Graphics APIs have come a long way from a small set of basic commands allowing limited control of configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered industry standard. ... This article describes Diligent Engine, a lightweight cross-platform graphics API abstraction layer that is designed to solve these problems. Its main goal is to take advantage of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common C/C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. The full source code is available for download at GitHub and is free to use.


Supporting mobile workers everywhere

It is amazing how quickly video conferencing has been accepted as part of the daily routine. Such is the success of services like Zoom that CIOs need to reassess priorities. In a workforce where people are working from home regularly, remote access is not limited to a few, but must be available to all. Mobile access and connectivity for the mobile workforce need to extend to employees’ homes. Traditional VPN access has scalability limitations and is inefficient when used to provide access to modern SaaS-based enterprise applications. To reach all home workers, some organisations are replacing their VPNs with SD-WANs. There is also an opportunity to revisit bring-your-own-device (BYOD) policies. If people have access to computing at home and their devices can be secured, then CIOs should question the need to push out corporate laptops to home workers. While IT departments may have traditionally deployed virtual desktop infrastructure (VDI) to stream business applications to thin client devices, desktop as a service (DaaS) is a natural choice for delivering a managed desktop environment to home workers. For those organisations that are reluctant to use DaaS in the public cloud, as Oxford University Social Sciences Division (OSSD) has found (see below), desktop software can easily be delivered in a secure and manageable way using containers.


Secure data sharing in a world concerned with privacy

Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a new genre of technologies which can help them bridge competing privacy frameworks. PETs are a category of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards. One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
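The additive property HE relies on can be illustrated with a toy version of the Paillier cryptosystem, one well-known additively homomorphic scheme. This is a minimal sketch for illustration only: the primes are tiny and nothing here is remotely secure enough for real use.

```python
# Toy additively homomorphic encryption (Paillier scheme): two values are
# added while both remain encrypted. Illustrative only; real keys use
# ~1024-bit primes, not the tiny ones below.
import math
import random

def keygen(p=1789, q=1861):                       # toy primes, NOT secure
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                     # standard simple generator choice
    # mu = (L(g^lam mod n^2))^-1 mod n, with L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)                    # random blinding factor
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
c_sum = (c1 * c2) % (pub[0] ** 2)                 # multiplying ciphertexts adds plaintexts
print(decrypt(pub, priv, c_sum))                  # 100, computed without ever decrypting 42 or 58
```

The key point the sketch shows is that the party doing the arithmetic on `c1` and `c2` never sees 42 or 58; only the key holder can decrypt the result.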



When -- and when not -- to use cloud native security tools

Cloud native security tools like Amazon Inspector and Microsoft Azure Security Center automatically inspect the configuration of common types of cloud workloads and generate alerts when potential security problems are detected. Google Cloud Data Loss Prevention and Amazon Macie provide similar functionality for data by automatically detecting sensitive information that is not properly secured and alerting the user. To protect data even further there are tools, such as Amazon GuardDuty and Azure Advanced Threat Protection, that monitor for events that could signal security issues within cloud-based and on-premises environments. ... IT teams use services like Google Cloud Armor, AWS Web Application Firewall and Azure Firewall to configure firewalls that control network access to applications running in the cloud. Related tools provide mitigation against DDoS attacks that target cloud-based resources. ... Data stored on the major public clouds can be encrypted electively -- or is encrypted automatically by default -- using native functionality built into storage services like Amazon S3 and Azure Blob Storage. Public cloud vendors also offer cloud-based key management services, like Azure Key Vault and Google Key Management Service, for securely keeping track of encryption keys.


Four Case Studies for Implementing Real-Time APIs

Unreliable or slow performance can directly impact or even prevent the adoption of new digital services, making it difficult for a business to maximize the potential of new products and expand its offerings. Thus, it is not only crucial that an API processes calls at acceptable speeds, but it is equally important to have an API infrastructure in place that is able to route traffic to resources correctly, authenticate users, secure APIs, prioritize calls, provide proper bandwidth, and cache API responses.  Most traditional APIM solutions were made to handle traffic between servers in the data center and the client applications accessing those APIs externally (north-south traffic). They also need constant connectivity between the control plane and data plane, which requires using third-party modules, scripts, and local databases. Processing a single request creates significant overhead — and it only gets more complex when dealing with the east-west traffic associated with a distributed application.  Considering that a single transaction or request could require multiple internal API calls, the bank found it extremely difficult to deliver good user experiences to their customers.
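One of the infrastructure duties listed above, caching API responses, can be sketched in a few lines. All names here are hypothetical; this is a minimal illustration of a TTL cache sitting in front of an upstream call, not a production API gateway:

```python
import time

class TTLCache:
    """Minimal time-based response cache (illustrative only)."""
    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._store = {}                          # key -> (expiry_timestamp, value)

    def get_or_fetch(self, key, fetch):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and hit[0] > now:                  # fresh entry: skip the upstream call
            return hit[1]
        value = fetch()                           # miss or expired: call upstream
        self._store[key] = (now + self.ttl, value)
        return value

calls = 0
def upstream():                                   # stands in for a slow internal API call
    global calls
    calls += 1
    return {"balance": 100}

cache = TTLCache(ttl_seconds=30)
cache.get_or_fetch("/accounts/42", upstream)
cache.get_or_fetch("/accounts/42", upstream)      # second request served from cache
print(calls)                                      # 1
```

In the bank scenario above, where one transaction fans out into multiple internal API calls, this kind of caching is one way a gateway keeps east-west traffic from multiplying latency.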


Building the foundations of effective data protection compliance

Data protection by design and default needs to be planned within the whole system, depending on the type of data and how much data a business has. Data classification is the categorization of data according to its level of sensitivity or value, using labels. These are attached as visual markings and metadata within the file. When classification is applied, the metadata ensures that the data can only be accessed or used in accordance with the rules that correspond with its label. Businesses need to mitigate attacks and employee mistakes by starting with policy - assessing who has access. Then they should select a tool that fits the policy, not the other way round; you should never be faced with selecting a tool and then having to rewrite your policy to fit it. This will then support users with automation and labelling, which will enhance the downstream technology. Once data is appropriately classified, security tools such as Data Loss Prevention (DLP), policy-based email encryption, access control and data governance tools are exponentially more effective, as they can access the information provided by the classification label and metadata that tells them how data should be managed and protected.
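The mechanism described above, where a classification label travels with the file as metadata and downstream tools act on it, can be sketched as follows. The labels and rules are hypothetical, purely to illustrate how a DLP check keys off the classification metadata:

```python
# Sketch of classification labels driving a downstream DLP decision.
# Labels and rules are invented for illustration.
LABEL_RULES = {
    "Public":       {"allow_external_email": True},
    "Internal":     {"allow_external_email": False},
    "Confidential": {"allow_external_email": False},
}

def classify(document, label):
    document["metadata"]["classification"] = label      # the label travels with the file
    return document

def dlp_allows_external_send(document):
    # Fail closed: an unlabelled document is treated as Confidential.
    label = document["metadata"].get("classification", "Confidential")
    return LABEL_RULES[label]["allow_external_email"]

doc = {"body": "Q3 forecast", "metadata": {}}
classify(doc, "Confidential")
print(dlp_allows_external_send(doc))   # False: DLP blocks the send based on the label
```

Note the fail-closed default: selecting the rules first and fitting the tooling to them, as the passage recommends, is what makes that choice deliberate rather than accidental.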


Q&A on the Book Fail to Learn

People often fear failure because of the stakes associated with it. When we create steep punishment systems and “one-strike-you’re-out” rules, it’s only natural to be terrified of messing up. This is where we need to think more like game designers. Games encourage trial and error because the cost of starting over in a game is practically nothing. If I die playing Halo, I get to respawn and try again immediately. We need to create more “respawn” options in the rest of our lives. This is something that educators can do in their course design. But it’s also something we can encourage as managers, company leaders, or simply as members of society. The best way to do this is to start talking more about our mistakes. These are things we should be able to celebrate, laugh over, shake our collective heads at, and eventually grow from. ... If we go back to people like Dyson and Edison, you see failure-to-success ratios that reach five-thousand or even ten-thousand to one. A venture capitalist who interviewed hundreds of CEOs arrived at the same ratio for start-up companies making it big: about a 10,000:1 failure-to-success ratio. Now, we probably don’t need that many failures in every segment of our lives, but think about how far off most of us are from these numbers.



Quote for the day:

"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani

Daily Tech Digest - September 21, 2020

Ransomware attacks growing in number, severity: Why experts believe it will get worse

The attackers are also changing their tactics, making it difficult to defend against them. “Those tactics include things like not just encrypting victims’ computer networks but also exfiltrating data on those victim networks,” Holmes said. “Now when victims realize they’ve become a victim of a ransomware attack, it’s not just that their data is encrypted. It’s also that they’ve lost all their — or much of their — information. The ransomware actors are holding that data at risk and telling victims if they don’t pay the ransom, they’re going to leak that data to the public.” Perhaps not surprisingly, ransomware attackers are starting to band together, Holmes said. “We’ve seen some of these ransomware actors entering into a cartel with one another. Under this cartel model, they’ll share information amongst the group members and share intelligence and share techniques. It’s a bit concerning to us as well [because] it shows a major shift change among the ransomware actors.” The ransomware cartels are forming into business-like organizations with specialization of labor, Jason Conboy of DHS’s Investigations division said. “You have your malware office… and they have effectively built skills to write that malware. You’ve got your money mules.


Cloud banking: More than just a CIO conversation

The bank of 2030 will look very different from today. Facing changing consumer expectations, emerging technologies, and alternative business models, banks need to start putting strategies in place now to help them prepare for this future. An important indicator of the shifting landscape? Cloud computing is moving to the forefront as a focus for the chief information officer, C-suite executives, and board members. Banking and capital markets leaders increasingly recognize that cloud is more than a technology; it is a destination for banks and other financial services firms to store data and applications and access advanced software applications via the internet. The leading public cloud providers offer an array of innovative products-as-a-service that can be accessed on their platforms and help banks implement business and operating models to improve revenue generation, increase customer insights, contain costs, deliver market-relevant products quickly and efficiently, and help monetize enterprise data assets. The cloud also offers a huge opportunity to synchronize the enterprise; to break down operational and data silos across risk, finance, regulatory, customer support, and more. Once massive data sets are combined in one place, the organization can apply advanced analytics for integrated insights.


Machine Learning Biases Might Define Minority Health Outcomes

Algorithmic biases present marginalized patients as healthier than they are, disqualifying them from the speciality care they need. ... AI bias issues are in no way new to the computer science community; however, it is unclear whether the medical community even recognizes the problem. Algorithm-driven patient care results in less adequate care for people striving to push back against the tide of systemic racism, given that the lack of diversity in big data and in automated-decision regulations has not been ameliorated by the United States federal government. In contrast, European data regulations state that machine-automation developers must use appropriate mathematical and statistical techniques to ensure both that the risk of error is minimized and that discriminatory effects are prevented. Regardless of geography, it is undeniable that managing these prejudices requires careful attention to data, the use of artificial intelligence to help detect bias, and the building of diverse teams. The federal government must ensure that those behind the automation are ethically and legally obliged to keep AI on the side of fairness.


Ireland’s data watchdog slammed for letting adtech carry on ‘biggest breach of all time’

Responding to the ICCL submission, the DPC’s deputy commissioner Graham Doyle sent this statement: “Extensive recent updates and correspondence on this matter, including a meeting, have been provided by the DPC. The investigation has progressed and a full update on the next steps provided to the concerned party.” However, in a follow-up to Doyle’s remarks, Ryan told TechCrunch he has “no idea” what the DPC is referring to when it mentions a “full update”. On “next steps” he said the regulator informed him it will produce a document setting out what it believes the issues are — within four weeks of its letter, dated September 15. Ryan expressed particular concern that the DPC’s enquiry does not appear to cover security — which is the crux of the RTB complaints, since GDPR’s security principle puts an obligation on processors to ensure data is handled securely and protected against unauthorized processing or loss. ... The regulator also pointed to another related open enquiry — into adtech veteran Quantcast, also begun in May 2019. The DPC has said the Quantcast enquiry is examining the lawful basis claimed for processing Internet users’ data for ad targeting purposes, as well as considering whether transparency and data retention obligations are being fulfilled.


Write Better Automated Tests With Cypress in Angular

Cypress is much like Protractor for Angular applications, but Cypress is much faster to run and easier to debug. Cypress is not only good at automated end-to-end testing that is independent of our applications, but is also capable of unit testing against methods in model classes, service classes, and so on. Cypress provides a complete end-to-end testing experience in your browser, where you can watch your tests run in an automated way. Cypress provides its own test runner so you can run the tests locally. Cypress also offers some other cool features like Time Travel, Debuggability, Real-time reloads and Automatic waiting. These brilliant features set this tool apart, and we will see them in action. Enough talk. If you've already cloned this repo then you're good to go, but if not then you probably have an Angular project and you just have to add Cypress as a dependency. ... The cypress package includes a desktop app and the Cypress binary. It may take a few minutes to run this command for the first time because the package installation needs to download the Cypress binary. The Cypress binary is saved in a global cache directory, so installing the same version of this package in the future will be much faster.


Connecting the dots in an uncertain world

A tendency to underestimate the unexpected — both the chance of it happening and the potential in it — is a common one. To combat this bias, instead of doing “risk management,” which tends to focus on known unknowns, you can build a muscle for planning for and finding potential in the unexpected. This will be what counts during periods of crisis and uncertainty. One way of doing this is to develop a North Star you can use to connect unexpected events to your mission. Take Kaan Terzioglu, CEO of Turkish telecom company Turkcell from 2015 to 2019. When there was an outbreak of serious violence in Turkey in 2016 following an attempted coup d’etat, Kaan and his team responded by making the Internet free for a month so people were able to communicate with those they cared about without worrying about bills. And during a regional refugee crisis, the company’s quick and proactive actions to develop an app to help Syrian refugees find crucial health and financial information in the country calmed tensions in the camps. As Turkcell has built on this initial app, adding more services, customer loyalty among the refugee population has given Turkcell a major market share among the 3 million Syrian refugees in Turkey. 


What advances in HPC and AI mean for business, and what they do not

What businesses do need to be mindful of when looking to deploy AI is that it is not a magic bullet for answering business-critical questions. What’s more, as we’ve seen in several recent unfortunate examples, if your AI is not trained with a wide set of data, it can end up amplifying a wrong supposition, rendering the end product useless. For example, if you’re only training your facial recognition programme with pictures of white men, you’re going to get some biased and potentially discriminatory outcomes. As with all forms of modelling, the findings you get out are only as good as the data you put in. There are no shortcuts around this simple fact, and any organisation trying to find one will quickly become unstuck. The key takeaway from these missteps is not that AI has failed or is incapable of carrying out a task; it’s that humans have, can and often do apply the technology poorly. Today, AI is transforming almost every industry and vertical: from pathology, where it’s used to detect cancer, to AI picking up the phone to answer contact centre customer queries, from water-controlled indoor farming to driving autonomous vehicles in warehouses and on public roads.
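The point about narrow training data can be made concrete with a toy model. This sketch fits a nearest-centroid classifier on synthetic data drawn from only one group; an applicant from an underrepresented group, whose features are distributed differently, is then misclassified. All numbers are invented for illustration:

```python
# Toy illustration of training-data bias: a nearest-centroid classifier
# fit only on "group A" samples misjudges a "group B" applicant whose
# features are shifted relative to the training distribution.

def centroid(points):
    return [sum(xs) / len(points) for xs in zip(*points)]

def nearest_label(x, centroids):
    # Pick the label whose centroid is closest in squared distance.
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Training data drawn exclusively from group A's feature distribution.
train = {
    "approve": [[8.0, 7.5], [7.5, 8.0], [8.2, 7.8]],
    "reject":  [[2.0, 2.5], [2.5, 2.0], [1.8, 2.2]],
}
centroids = {lbl: centroid(pts) for lbl, pts in train.items()}

# A qualified group-B applicant whose features sit lower on both axes:
group_b_applicant = [4.5, 4.0]
print(nearest_label(group_b_applicant, centroids))   # reject: misclassified
```

The model is not "wrong" about its training data; it has simply never seen the distribution the new applicant comes from, which is exactly the amplification failure the passage describes.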


Interview with Event Modeling Founder - Adam Dymitruk

The event model contains independent units of work (slices). This removes the need for stand-ups, sprints and many other processes that are not effective with remote work. The hours saved are reinvested in implementing the solution. The transition to remote work is not even felt with the guidance event modeling brings. Because the patterns are prescribed, event modeling takes the guesswork out of many aspects of writing software. The design allows a fixed and flat cost curve, so the complexity of the system doesn't make a project end in a "death march". Event Modeling is simple. Whether you're looking at BPMN or UML, the traditional practices for design are incredibly heavy. The main benefit is that you can get the same value for a much smaller investment of time to create a blueprint for how your current or potential solution works or will work. The goal is to sweep aside the mountain of books and get an organization going quickly. It takes 15 minutes to explain Event Modeling. The other main benefit is having a reliable plan to start implementation. Most new software projects today head into development blind. An important test that the event model provides is that it shows whether you are "information complete".


The 4 Top Artificial Intelligence Trends For 2021

From advancement in medical technology and standards of care, to advances in communication technology that enabled outbreaks to be spotted more quickly and lockdowns imposed. Over the next year, AI will be added to the list of technological developments that are enabling us to more effectively deal with pandemics. The growth in the amount of scientific and medical literature alone is enormous, with more than 28,000 papers published by April this year relating to Covid-19. A dedicated search engine powered by natural language processing (NLP) algorithms has already been made available, so anyone can get AI assistance when investigating this massive dataset. Work is also ongoing to develop AI solutions to help deal with the huge backlog of other medical issues, such as cancer, where treatment has been affected as resources are diverted to fight Covid-19. Over the next year, we are likely to see the accelerated adoption of AI across many other areas of healthcare, not only related to tackling viruses. By developing our ability to apply machine learning problem-solving to these massive, real-time global datasets, we will spot outbreaks more easily, track contact between infected people, enable more accurate diagnoses, and, by predicting ways that a virus might evolve in the future, develop more effective and lasting vaccinations.
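A literature search engine of the kind described builds on ranking techniques such as TF-IDF. The following is a minimal, self-contained sketch over a toy corpus (the paper titles and query are invented), not the actual system referenced above:

```python
# Minimal TF-IDF ranking of the kind an NLP-powered literature search
# engine builds on. Toy corpus, for illustration only.
import math
from collections import Counter

docs = {
    "paper-1": "covid transmission in aerosols and droplets",
    "paper-2": "vaccine trial results for covid candidates",
    "paper-3": "cancer treatment backlog during the pandemic",
}
tokenized = {d: text.split() for d, text in docs.items()}

def idf(term):
    # Rarer terms across the corpus get a higher weight.
    df = sum(term in toks for toks in tokenized.values())
    return math.log(len(docs) / df) if df else 0.0

def score(query, doc_id):
    toks = tokenized[doc_id]
    tf = Counter(toks)
    return sum((tf[t] / len(toks)) * idf(t) for t in query.split())

query = "covid vaccine"
ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
print(ranked[0])   # paper-2: it matches both "covid" and the rarer "vaccine"
```

Production systems layer stemming, synonym expansion and neural re-ranking on top, but the core intuition, weighting rare discriminative terms more heavily, is the same.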


New US digital identity legislation promises more secure verification

The Improving Digital Identity Act would create an Improving Digital Identity Task Force within the executive office of the president. Its mission is to establish a government-wide effort to develop secure methods for federal, state and local government agencies to validate identity attributes and support interoperable digital identity verification in both the public and private sectors. The task force would comprise cabinet secretaries, heads of other federal agencies, state and local government officials, members designated by congressional committees, and a position appointed by the president. Additionally, the National Institute of Standards and Technology (NIST) would develop a standards framework for digital identity verification to guide federal, state and local governments in selecting their digital identity solutions. NIST would have one year to publish a final version of the framework. The legislation requires the task force to publish a report with recommendations on research and development in systems that enable digital identity verification. Upon its completion, and with the consent of the individual, the framework will enable government agencies to securely vouch for citizens online in real time.



Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher

Daily Tech Digest - September 20, 2020

Why disaster recovery preparation is even more important during a pandemic

From a cyber perspective, disaster recovery during a pandemic raises new challenges as well. The rapid expansion of remote work introduces new vulnerabilities. Many organizations have relaxed perimeter security controls to allow remote connectivity, introducing new threat vectors that threat actors can exploit to gain access to networks. Lately, many of these attacks have focused on ransomware and data destruction, which encrypt data and often corrupt critical backup systems, rendering existing disaster recovery plans unusable. An "all hands on deck" approach to manual recovery is often the only response to these conditions. Unfortunately, social distancing protocols and remote work arrangements can make those manual recovery efforts an impossibility. ... IT disaster recovery generally falls into one of two categories: a natural disaster event (earthquake, flood, etc.) or a system failure (such as failures in hardware, software or electrical supply). This year, the actual DR responses we have witnessed have included local or regional power outages and other power infrastructure issues. We have seen this across multiple industries, including financial services, with outages during peak customer windows and prolonged recovery times.


Iranian hacker group developed Android malware to steal 2FA SMS codes

In a report published today, Check Point researchers said they also discovered a potent Android backdoor developed by the group. The backdoor could steal the victim's contacts list and SMS messages, silently record the victim via the microphone, and show phishing pages. But the backdoor also contained routines that were specifically focused on stealing 2FA codes. Check Point said the malware would intercept and forward to the attackers any SMS message that contained the "G-" string, usually employed to prefix 2FA codes for Google accounts sent to users via SMS. The thinking is that Rampant Kitten operators would use the Android trojan to show a Google phishing page, capture the user's account credentials, and then access the victim's account. If the victim had 2FA enabled, the malware's 2FA SMS-intercepting functionality would silently send copies of the 2FA SMS code to the attackers, allowing them to bypass 2FA. But that was not all. Check Point also found evidence that the malware would automatically forward all incoming SMS messages from Telegram and other social network apps. These types of messages also contain 2FA codes, and it's very likely that the group was using this functionality to bypass 2FA on more than Google accounts.
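To see why the "G-" prefix makes these codes so easy to sieve out of an inbox, note that a single pattern match suffices. The sketch below uses invented message texts and illustrates the defender-relevant point: SMS-delivered codes are trivially machine-readable, which is one reason app- or hardware-based 2FA is considered more robust against this kind of interception.

```python
# Why the "G-" prefix is easy prey: Google verification SMS codes follow a
# fixed, one-line pattern. Message texts below are invented for illustration.
import re

TWO_FA_PATTERN = re.compile(r"\bG-\d{6}\b")   # codes arrive as "G-123456"

inbox = [
    "G-482916 is your Google verification code.",
    "Your package is out for delivery.",
    "Lunch at noon?",
]

intercepted = [msg for msg in inbox if TWO_FA_PATTERN.search(msg)]
print(intercepted)   # only the verification-code message matches
```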


Clean Coders: Why India isn’t on the List

A more vexing element that drives the problem — a majority of the Indian software companies look at software purely as a business. It’s mostly about getting the deliverables ready on the quoted time and almost never about striving for quality results. Consequently, the team treats coding as a task to be ticked off with numbers rather than a task requiring quality — something that would actually educate folks to avoid future mistakes. It’s a chain reaction, really. When the organization itself does not prioritize clean quality coding when a product is being developed, most coders lose the urge to be curious about better practices and approaches since they have to direct all their efforts into meeting deadlines. Even to this day, many skilled professionals in the industry lack the ability to convey their ideas and pain points effectively during client meetings or within the team. Organizations need to establish the fact that coding is only one aspect of the job and that communication is equally important. Especially in the service sector, when we are constantly collaborating on large-scale projects, it’s absolutely crucial for clients and internal teams to be on the same page.


6 big data blunders businesses should avoid

Owing to modern technologies, all businesses, irrespective of size, have access to rich, granular data about their operations and clients. The major hurdle is dealing with a massive quantity of data that is both challenging to maintain and costly to manage. Even with appropriate tools, dealing with such data is a cumbersome activity, and errors are frequent given the layers of complexity involved in dealing with Big Data. However, Big Data holds diverse leverages for businesses. ... Thus, Big Data becomes the defining leverage for innovative enterprises to gain an edge over their competitors. The usage of these data is sure to exceed 274.3 billion by 2022 globally, with each individual generating approximately 1.7 megabytes of information per second. With such leverage at stake, can you really afford to make mistakes with Big Data? So, here are some big data blunders that businesses should avoid in order to harness its full capabilities and enjoy the advantages it brings.


Tracking global cybercrime activity and the impact on the digital economy

The EMEA region saw lower overall attack rates in comparison to most other global regions from January through June 2020. This is due to a high volume of trusted login transactions across relatively mature mobile apps. The attack patterns in EMEA were also more benign and had less volatility and fewer spikes in attack rates. However, there are some notable exceptions. Desktop transactions conducted from EMEA had a higher attack rate than the global average and automated bot attack volume grew 45% year over year. The UK originates the highest volume of human-initiated cyberattacks in EMEA, with Germany and France second and third in the region. The UK is also the second largest contributor to global bot attacks behind the U.S. One example of a UK banking fraud network saw more than $17 million exposed to fraud across 10 financial services organizations. This network alone consisted of 7,800 devices, 5,200 email addresses and 1,000 telephone numbers. The overall human-initiated attack rate fell through the first half of 2020, showing a 33% decline year over year. The breakdown by sector shows a 23% decline in financial services and a 55% decline in e-commerce attack rates.


Load Testing APIs and Websites with Gatling: It’s Never Too Late to Get Started

If it is your first time load testing, whether you already know the target user behavior or not, you should start with a capacity test. Stress testing is useful, but analyzing the metrics under such a load is really tricky. Since everything is failing at the same time, the task becomes difficult, even impossible. Capacity testing offers the luxury of progressing slowly towards failure, which is more comfortable for a first analysis. To get started, just run a capacity test that makes your application crash as soon as possible. You only need to add complexity to the scenario when everything seems to run smoothly. ... While an average can give you a quick overview of what happened in a run, it hides under the rug all the things you actually want to look at. This is where percentiles come in handy. Think of it this way: if the average response time is some amount of milliseconds, how does the experience feel in the worst case for 1% of your user base? Better or worse? How does it feel for 0.1% of your users? And so on, getting closer and closer to zero. The higher the number of users and requests, the closer you’ll need to get to zero in order to study extreme behaviors.
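The averages-versus-percentiles point can be shown numerically. This sketch uses synthetic latencies and the simple nearest-rank percentile method; all numbers are invented for illustration:

```python
import math

# Synthetic latencies in milliseconds: 98% fast requests plus a 2% slow tail.
latencies = [100] * 980 + [4000] * 20

def percentile(values, pct):
    """Nearest-rank percentile: the value at or below which pct% of samples fall."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

mean = sum(latencies) / len(latencies)
print(round(mean))                 # 178 -> the average looks acceptable
print(percentile(latencies, 50))   # 100 -> the median user is fine
print(percentile(latencies, 99))   # 4000 -> the worst 1% wait four seconds
```

The average (178 ms) says nothing alarming, while the 99th percentile exposes the four-second tail that real users in that slice actually experience; this is exactly why Gatling-style reports emphasize percentile columns over means.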


Who Should Own Data Governance?

Many organizations position data governance under the Chief Financial Officer (CFO). Other organizations position data governance under the Chief Risk Officer (CRO) or the Chief Operational Officer (COO). In addition, some organizations position data governance under the Chief Privacy Officer (CPO) or the Chief Information Security Officer (CISO). These days there are so many C-levels. Placement of data governance under any one of these C-level people is never wrong. Data governance must reside somewhere and having a C-level person as your Executive Sponsor is always a good thing. In fact, many organizations state that senior leadership’s support, sponsorship and understanding of data governance is the number one best practice for starting and sustaining their program. Having a C-level person as your Executive Sponsor often dictates where data governance will reside in the organization. Is it better for data governance to be placed in Finance and report through the CFO than it is to have it reside in Operations and report through the COO? The answer to that question is, “It depends.” It depends on the interest and ability of that person and that part of the organization to provide for the proper level of capacity in terms of resources to operationalize and engage the organization.


Why Are Some Cybersecurity Professionals Not Finding Jobs?

Simply stated, these good people cannot get hired in a cyber job. Going much further, they argue that select organizations (who discuss millions of unfilled jobs) are pushing their own training agendas, certifications offered, want to boost certain company stock prices or have other reasons to encourage this “abundance of cyber jobs remain vacant” narrative, even though it is not true, in their opinion. I want to be clear up front that, I disagree with this narrative. I do believe that many (perhaps millions but we can argue the numbers in another blog) global cybersecurity job vacancies do exist. Nevertheless, I truly sympathize with these people who disagree, and I want to try and help as many as I can find employment. I also want hiring managers to set proper expectations.  In addition to my blogs and articles, I have personally mentored and helped dozens of people find cyber jobs, from high school students to new college graduates to CISOs and CSOs. (Note: this is not my "day job" but one way I try to give back to the security community – just like so many others are doing as well.) I also champion ways that government CISOs struggle in this area, and how tech leaders can find more cyber talent.


Seven Steps to Realizing the Value of Modern Apps

With organizations running a multitude of environments to meet the demands of their applications, each with unique technological requirements, finding the platform isn’t the only challenge. What’s hard is that development and management are more complex than ever before, with IT and developers navigating traditional apps, cloud-native, SaaS, services and on-prem, for example. Here’s where you need common ground between IT teams, lines of business and developers – where having a single digital platform is critical – to remove the potential for silos springing up, enable the better deployment of resources, and provide a consistent approach to managing applications, infrastructure and business needs together. It’s about creating one, common platform to ‘run all the things’. One, software-defined digital foundation that provides the platform – and choice of where to run IT – to drive business value, create the best environment for developers and help IT effectively manage existing and new technology via any cloud for any application on any device with intrinsic security. One platform that can deliver all apps, enabling developers to use the latest development methodologies and container technologies for faster time to production.


4 measures to counteract risk in financial services

Financial services regulators across jurisdictions have identified concentration risk as a factor to consider in assessing risk in outsourcing. That risk has two components: (i) micro-risk, where reliance on a single provider for core operations may present an undue operational risk if there is a single point of failure; and (ii) macro-risk, where financial firms across the ecosystem are so reliant on a single vendor that one point of failure risks causing broad systemic disruption to the operations of the financial services sector. Notably, this risk is not unique to cloud services and, as the Bank of England commented in its Consultation Paper on Outsourcing and Third Party Risk Management, “a small number of third parties have traditionally dominated the provision of certain functions, products, or services to firms, such as cash machines or IT mainframes.” In other words, the issue of concentration risk is not net new but has been a feature within the financial services industry for decades. While cloud remains relatively nascent compared to entrenched providers of legacy systems, like the mainframe, its increasing adoption means that financial institutions must account for, and mitigate against, micro-risk issues of concentration in the use of cloud providers.



Quote for the day:

“When we are no longer able to change a situation, we are challenged to change ourselves.” -- Viktor E. Frankl