Daily Tech Digest - November 08, 2020

How Emerging Demands Of AI-Powered Solutions Help Gain Momentum Of Businesses

AI helps take the BI game leaps and bounds ahead with machine learning and deep learning. It empowers BI with the ability to analyze data coming from multiple sources, learn from this data in real time, and provide accurate, granular predictive insights for faster business growth. AI always stays one step ahead of humans in analyzing large data sets at scale with speed and accuracy. The influence of AI is not limited to analytics; it extends to data engineering as well. Data coming from multiple structured, unstructured, and semi-structured sources needs to be transformed from silos into unified data. AI can accelerate and automate this process, creating a single view, saving data analysts' time, and giving business users much-needed independence. AI-powered NLP bots take BI to an altogether new level by enabling users to extract insights via voice or chat in any language. For example, these BI bots can easily answer questions like 'What is the sales forecast for the next two quarters?' With this, business users can skip writing any complex query and leave it to the bots to run the analysis.
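
As a rough illustration of what sits behind such a bot, here is a minimal, hypothetical Python sketch that maps a natural-language question onto a canned analysis; the intent patterns, number words, and responses are invented, not taken from any BI product.

```python
import re

# Hypothetical intent patterns mapping question phrasings to analyses.
INTENTS = [
    (re.compile(r"sales forecast .* next (\w+) quarters?", re.I), "sales_forecast"),
    (re.compile(r"revenue .* last (\w+) months?", re.I), "revenue_history"),
]

WORD_TO_NUM = {"one": 1, "two": 2, "three": 3, "four": 4}

def answer(question: str) -> str:
    """Resolve a natural-language BI question to a canned analysis."""
    for pattern, intent in INTENTS:
        match = pattern.search(question)
        if not match:
            continue
        horizon = WORD_TO_NUM.get(match.group(1).lower(), 1)
        if intent == "sales_forecast":
            # A real bot would call a forecasting model over the unified data here.
            return f"Running sales forecast for the next {horizon} quarter(s)..."
        if intent == "revenue_history":
            return f"Aggregating revenue for the last {horizon} month(s)..."
    return "Sorry, I couldn't map that question to a known analysis."

print(answer("What is the sales forecast for the next two quarters?"))
```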


How to deal with the escalating phishing threat

“Working from home, where there are more distractions, makes it even less likely that people really pay attention to these trainings. That’s why it’s not uncommon to see the same people who tune out training falling for scams again and again,” he noted. That’s why defenders must preempt attacks, he says, and reinforce the lesson during a live attack. When something gets through and someone clicks on a malicious URL, defenders must be able to simultaneously block the attack and show the victim what the phisher was attempting to do. Harr, who has over 20 years of experience as a senior executive and GM at industry-leading security and storage companies and as a serial entrepreneur and CEO at multiple successful start-ups, is now leading SlashNext, a cybersecurity startup that uses AI to predict and protect enterprise users from phishing threats. He says that most CISOs assume phishing is a corporate email problem and that their current line of defense is adequate, but they are wrong. “We are detecting 21,000 new phishing attacks a day, many of which have moved beyond corporate email and simple credential stealing. These attacks can easily evade email phishing defenses that rely on static, reputation-based detection.”


The cryptocurrency sector is overflowing with dead projects

It’s a fair question whether the world really needs a blockchain-based information and trading platform for the pet market. I wouldn’t say there are many problems with over-centralization there. Pet shops are usually chosen by customers after analyzing brand reputation and online presence. Some problems that customers in this market may face include unreliable information about an acquired animal’s health or previous owners. However, these difficulties are a legal problem rather than a technical one, and they are unlikely to be solved using blockchain technology. Moreover, since animal welfare laws vary between countries, creating a unified international platform in this field is a legally challenging task, hardly suitable for a small technology startup. The Petchain project team consisted mainly of unknowns with no proven experience on any serious projects. It was not even possible to say for sure whether these were real people — some of the project’s advisors turned out to be represented by fake photos. Despite some marketing efforts, no serious funding was attracted to the project. At the moment, the official website of the project is inactive and its social media accounts haven’t been updated for more than a year.


The Road to MicroProfile 4.0

The primary driver behind creating the MicroProfile Working Group is to close intellectual property gaps identified by the Eclipse Foundation for specification projects. So, there are more legal protections in place now that MicroProfile is a Working Group. A Working Group also places more processes on MicroProfile. Historically, MicroProfile moved quickly with minimal process and late-binding decisions. It was quite an agile project that delivered specifications at quite a quick pace. However, I personally feel like we were reaching a point where adding *some* process can benefit the project. For instance, we now have to put more thought and formality up-front into planning a specification, which requires a Steering Committee vote. Better planning gives implementors, tool vendors, and the community more up-front visibility into what is coming and time to prepare. However, we codified "limited processes" in the MicroProfile Charter to keep processes to a minimum. ... A big challenge was switching from being a fast-moving agile project to fitting into the process structure required by a Working Group. We wanted to maintain as much of our existing culture as possible because the community was consistently delivering three annual releases.


Blockchain adoption 2021: Going mainstream through enterprise use

According to Bennet, many of the blockchain-based systems that are live today share a common factor: less time involved to resolve discrepancies. In some cases, this could even be instant. Bennet noted this common factor applies to supply chain use cases as well as in financial services: “It’s not just about needing fewer people to accomplish certain tasks; it’s also about shortening elapsed time and freeing up liquidity. A key point is that it’s possible to make it happen today, in the context of existing processes and operating models.” While this may be the case, Bennet shared that the more long-term strategic projects in financial services tend to revolve around potential changes in market structure and operating models. Many of these cases also require regulatory adjustments. “This takes time, resource and effort. That’s the main reason why COVID-related volatility and uncertainty has led many banks to pull back from some of those more long-term DLT-related projects for the time being,” Bennet said. The report also states that almost all the initiatives set to go from pilot into production next year will run on enterprise blockchain platforms that utilize the cloud. These most likely will include solutions from Alibaba, Huawei, IBM, Microsoft, OneConnect and Oracle.


How open source makes me a better manager

As an open source enthusiast, it was easy for me to transition my management style to the open management philosophy, which fosters transparency, inclusivity, adaptability, and collaboration. As an open manager, one of my primary goals is to engage and empower associates to be their best. It is easy to adopt this philosophy when you understand the open source values. By being transparent, I help create the context for the team and the "why." This is a building block in creating trust. Being consciously inclusive is another value that I regard highly. Making sure everyone in the team is included and everyone's voice is heard is extremely important for individual and organizational growth. In an environment that is constantly evolving and where innovation is key, being nimble and adaptable is of utmost importance. Encouraging associates' growth mindset and continuous learning helps foster these traits. For effective collaboration, I believe we need an environment where there is trust, open communication, and respect. By paying attention to these values, an open manager can create an environment that is inclusive, treats others with respect, and encourages everyone to support each other.


Marriott Hit With $24 Million GDPR Privacy Fine Over Breach

One notable aspect of the fine imposed on Marriott is that it is just one-fifth of the fine that the ICO originally recommended in July 2019, which Marriott had contested. But the reduction is not nearly as big as with the final fine that the ICO recently imposed on British Airways, in connection with a 2018 data breach that exposed the personal information of about 430,000 customers, with 244,000 possibly having their names, addresses, payment card numbers and CVVs compromised. In its initial July 2019 penalty notice, the ICO had proposed fining BA a record £184 million ($238 million). But last month, the regulator issued a final fine of just £20 million ($26 million). Legal experts say the final fines being lower than the proposed penalties is not surprising. Indeed, the ICO earlier this year noted that because of the ongoing coronavirus outbreak, it planned to adjust its regulatory approach, not least because of the staffing and financial impact that COVID-19 was having on organizations. Under GDPR, after proposing a fine, regulators have 12 months to issue a final fine, unless the regulator proposes delaying the imposition of the fine and the organization being investigated agrees.


What Is The Value Proposition of Enterprise Architecture Today?

The first one of these is Strategy Advancement. This is basically concerned with how the business can achieve its target outcomes, and with identifying the means to do so. So what are we trying to achieve – do we know, or do we have doubts about that? If we’re sure our goals are solid, then how do we make them happen? How do we ensure that every investment, strategic decision, or new business process we set up is in line with and actively supports achieving those goals? EA connects all these concepts across the different enterprise domains beautifully, and when done in a leading platform like HoriZZon, the quality of the business intelligence insights that can be produced and delivered to the relevant audiences, in order to make sure everyone’s eyes remain on the prize, is invaluable. So strategy advancement is key in ensuring coordinated change across the entire business. The second of these areas is Risk Identification & Mitigation. Security and the risks to personal data have never been more relevant than now. This area of enterprise architecture’s value proposition deals with identifying the risks faced by the organization in a way that allows architects to engage in a meaningful conversation with business-side stakeholders about how to address those risks.


Three Intelligent Automation Capabilities to Look for When Evaluating RPA Tools

Making decisions based on rules only works when outcomes are predictable. What happens when outcomes are less certain and conditions more varied — conditions under which people have to make decisions all the time? For instance, how would a bot respond to a query like "Is the supplier reliable?" Choosing an answer like "Extremely Reliable," "Very Reliable," "Sometimes Reliable," or "Not Reliable" requires an element of human reasoning. Bots can achieve this by applying an AI technique called fuzzy logic. Fuzzy logic uses mathematical models defined by the RPA developer to represent variations and uncertainty. For example, on a scale from 1 to 10, an Extremely Reliable supplier may be rated between 7 and 10, whereas a Very Reliable supplier may be rated between 6 and 8. The bot uses these mathematical models to convert precise input data into fuzzy input values. The bot then applies business rules defined by the RPA developer to the fuzzy input values. The mathematical model is then applied to the fuzzy output values to generate the result. ... As the amount of digital work increases, RPA solutions need scalability to provide greater performance capacity. Most RPA vendors solve this problem by enabling customers to add more bots to scale capacity horizontally.
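
A minimal Python sketch of the fuzzification step described above; the membership functions, overlapping ranges, and example score are illustrative only and not taken from any particular RPA tool.

```python
def triangular(x, low, peak, high):
    """Triangular membership function: degree to which x belongs to a fuzzy set."""
    if x <= low or x >= high:
        return 0.0
    if x == peak:
        return 1.0
    if x < peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

# Illustrative fuzzy sets on a 1-10 reliability scale (overlapping, as in the article).
SETS = {
    "Not Reliable":       (0, 1, 4),
    "Sometimes Reliable": (3, 5, 7),
    "Very Reliable":      (6, 7, 8),
    "Extremely Reliable": (7, 9, 10),
}

def classify(score: float) -> str:
    """Fuzzify a crisp score, then pick the label with the highest membership."""
    memberships = {label: triangular(score, *params) for label, params in SETS.items()}
    return max(memberships, key=memberships.get)

print(classify(7.5))  # -> "Very Reliable" with these illustrative sets
```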


How Data Gravity Is Forcing a Shift to a Data-Centric Enterprise Architecture

The new demands brought on by AI and ML create new opportunities for data-centric architecture that supports businesses and their need to operate ubiquitously so they can meet customer expectations and make business decisions on-demand. It’s informed by real-time intelligence to power innovation and scale digital business. ... With a modernized infrastructure strategy, enterprises can support the influx of data from several users, locations, clouds, and networks and create centers of data exchange. Traffic can then be aggregated and maintained via public or private clouds, at the core or the edge, and from every point of business presence, helping lessen data gravity barriers and their effects. By implementing a secure, hybrid IT and data-centric architecture globally at key points of business presence, businesses can harness data to create centers of data exchange for better digital decision-making. Data gravity impacts businesses of all sizes, and every industry has unique requirements around addressing data gravity. In order for the industry to tackle the next era of compute, companies including data center, cloud, and HPC solution providers are coming together to create an ecosystem of partners that helps mitigate the challenges associated with data gravity, so that enterprises can solve their global coverage, capacity, and ecosystem connectivity challenges.



Quote for the day:

"The greatest thing is, at any moment, to be willing to give up who we are in order to become all that we can be." -- Max de Pree

Daily Tech Digest - November 07, 2020

Why Culture Is the Greatest Barrier to Data Success

Achieving data success is a journey, not a sprint. Companies desire to accelerate their efforts to become data-driven, but consistency, patience, and steadfastness pay off in the long run. Companies that set a clear course, with reasonable expectations and phased results over a period of time, get to the destination faster. Develop a plan. Create a data strategy for your company if you do not already have one. If you do have a data strategy, make sure that it is updated annually to reflect changes in the business and the ongoing and rapid evolution of emerging data management capabilities. Define your future state, and build an execution road map that will take you from your current state to the target outcome. It is hard to reach any destination without a good road map. Companies need to maintain a long-term view and stick to it while making periodic adjustments. Patience, persistence, and commitment are the ingredients for ensuring a successful long-term outcome. Organizations must evolve and change the ways in which they structure current business processes if they expect to become more data-driven. In short, companies must be prepared to think differently.


Silver Peak SD-WAN Collects Aruba's ClearPass Treatment

According to Lunetta, ClearPass was a natural place to start the integration efforts. “Security has always been central to Aruba’s network solutions and is top of mind for every customer these days, especially with the increase of remote working and proliferation of IoT devices on the network,” he said. Aruba’s ClearPass offering was announced in April 2019, to help enterprises cope with the growing number of IoT and connected devices on the network. ClearPass Device Insight is a tool that employs machine learning to automate the discovery and fingerprinting of connected devices. When paired with Aruba’s ClearPass Policy Manager, customers can dynamically segment security capabilities, making it possible to authenticate and enforce policies based on device type and the needs of the user. Silver Peak customers will be able to identify and block unauthorized users from accessing applications or other services at the WAN edge long before they get to the cloud or private data center. “I think the biggest benefit will be adding more intelligence to the segmentation capabilities from Silver Peak,” said John Grady, network security analyst at ESG, in an email to SDxCentral. “By adding agentless device visibility and context, as well as the automation and policy control from ClearPass, SilverPeak becomes that much more attractive, especially relative to IoT.”


Ransomware Alert: Pay2Key

Over the past week, an exceptional number of Israeli companies reported ransomware attacks. While some of the attacks were carried out by known ransomware strains like REvil and Ryuk, several large corporations experienced a full-blown attack with a previously unknown ransomware variant named Pay2Key. As days go by, more of the reported ransomware attacks turn out to be related to the new Pay2Key ransomware. The attacker followed the same procedure to gain a foothold, propagate and remotely control the infection within the compromised companies. The investigation so far indicates the attacker may have gained access to the organizations’ networks some time before the attack, but demonstrated an ability to spread the ransomware to the entire network within an hour. After completing the infection phase, the victims received a customized ransom note, with a relatively low demand of 7-9 bitcoins (~$110K-$140K). The full scope of these attacks is still unraveling and is under investigation; but we at Check Point Research would like to offer our initial analysis of this new ransomware variant, as well as to provide relevant IOCs to help mitigate possible ongoing attacks. ... Analyzing the Pay2Key ransomware operation, we were unable to correlate it to any other existing ransomware strain, and it appears to have been developed from scratch.


Blazor: Full stack C# and Microsoft's pitch for ASP.NET Web Form diehards

Blazor is not much like Web Forms, but the two have some things in common. One is that developers can write C# everywhere, both on the server and for the browser client. Microsoft calls this “full stack C#”. “Blazor shares many commonalities with ASP.NET Web Forms, like having a reusable component model and a simple way to handle user events,” wrote the authors. The Blazor framework comes in several guises. The initial concept, and one of the options, is Blazor WebAssembly (Wasm). The .NET runtime is compiled to Wasm, the application is compiled to a .NET DLL, and it runs in the browser, supplemented by JavaScript interop. ... Blazor is designed for single-page applications and is reminiscent of Silverlight – Microsoft’s browser plugin which ran .NET code in the browser – but with an HTML/CSS user interface. There are two other Blazor application models. Blazor Server runs on the server and supports a thin browser client communicating over WebSockets (ASP.NET SignalR). The programming model is the same, but it is a thin client approach, which means faster loading and no WebAssembly required; it can even be persuaded to run in IE11.


What is data architecture? A framework for managing data

According to the Data Management Body of Knowledge (DMBOK 2), data architecture defines the blueprint for managing data assets by aligning with organizational strategy to establish strategic data requirements and designs to meet those requirements. On the other hand, DMBOK 2 defines data modeling as, "the process of discovering, analyzing, representing, and communicating data requirements in a precise form called the data model." While both data architecture and data modeling seek to bridge the gap between business goals and technology, data architecture is about the macro view that seeks to understand and support the relationships between an organization's functions, technology, and data types. Data modeling takes a more focused view of specific systems or business cases. There are several enterprise architecture frameworks that commonly serve as the foundation for building an organization's data architecture framework. DAMA International's Data Management Body of Knowledge is a framework specifically for data management. It provides standard definitions for data management functions, deliverables, roles, and other terminology, and presents guiding principles for data management.


Using machine learning to track the pandemic’s impact on mental health

Using several types of natural language processing algorithms, the researchers measured the frequency of words associated with topics such as anxiety, death, isolation, and substance abuse, and grouped posts together based on similarities in the language used. These approaches allowed the researchers to identify similarities between each group’s posts after the onset of the pandemic, as well as distinctive differences between groups. The researchers found that while people in most of the support groups began posting about Covid-19 in March, the group devoted to health anxiety started much earlier, in January. However, as the pandemic progressed, the other mental health groups began to closely resemble the health anxiety group, in terms of the language that was most often used. At the same time, the group devoted to personal finance showed the most negative semantic change from January to April 2020, and significantly increased the use of words related to economic stress and negative sentiment. They also discovered that the mental health groups affected the most negatively early in the pandemic were those related to ADHD and eating disorders.
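
A rough Python sketch of the simplest form of the topic-word frequency tracking the researchers describe; the mini-lexicons and sample posts are invented for illustration, and the actual study used far richer NLP methods.

```python
from collections import Counter
import re

# Hypothetical mini-lexicons for a few of the topics mentioned in the study.
LEXICONS = {
    "anxiety":   {"anxious", "worried", "panic", "fear"},
    "isolation": {"alone", "lonely", "isolated", "distancing"},
    "economic":  {"rent", "unemployed", "debt", "bills"},
}

def topic_frequencies(posts):
    """Count how often each topic's words appear across a list of posts."""
    counts = Counter()
    total_words = 0
    for post in posts:
        words = re.findall(r"[a-z']+", post.lower())
        total_words += len(words)
        for word in words:
            for topic, lexicon in LEXICONS.items():
                if word in lexicon:
                    counts[topic] += 1
    # Normalize to a rate per 1,000 words so groups of different sizes compare fairly.
    return {topic: 1000 * counts[topic] / max(total_words, 1) for topic in LEXICONS}

posts = ["I feel so anxious and alone since lockdown", "Worried about rent and bills this month"]
print(topic_frequencies(posts))
```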


‘Digital Mercenaries’: Why Blockchain Analytics Firms Have Privacy Advocates Worried

Gladstein and other advocates see this sort of blockchain analysis as an extension of governmental surveillance, along the lines of when the National Security Agency (NSA) was secretly gathering extensive metadata on the American public, not to mention the agency’s work abroad. Gladstein argues that when it comes to payment processors like Square and even exchanges, they can make a case they work hard to protect customer privacy. But if you start a blockchain surveillance company (as companies such as Chainalysis, CipherTrace and Elliptic have done), that’s not a defense because the explicit purpose of the company is to participate in the de-anonymization process.  De-anonymization is a process that has different components, one being the use of the blockchain to trace where funds go.  “Natively speaking, Bitcoin is very privacy-protecting because it’s not linked to your identity or your home address or your credit card history,” said Gladstein. “It’s just a freaking random address, right? And the coins are moved from one address to another. To pair these to a person and destroy their privacy requires intentional or unintentional doxxing.”


Kubernetes Security Best Practices to Keep You out of the News

Building secure containers requires scanning them for vulnerabilities — including Linux system packages, as well as application packages for dynamic languages like Python or Ruby. App developers might be accustomed to scanning application dependencies, but now that they are shipping an entire operating system with their app, they have to be supported in securing the OS as well. To support this effort at scale, consider using a tool like Cloud Native Buildpacks, which allows a platform or ops team to make standardized container builds that developers can use to drop their application into — completely replacing the Dockerfile for a project. These centralized builds can be kept up-to-date so that developers can focus on what they’re good at rather than having to be jacks-of-all-DevOps-trades. Container image scanning tools scan the layers of a built image for known vulnerabilities, and are indispensable in keeping your builds and dependencies up-to-date. They can be run during development and in CI pipelines to shift security practices left, giving developers the earliest notice of a vulnerability. The best practice is to strip your container down to the minimum needed to run the application. A great way to ruin an attacker’s day is to have a container with no shell!


Gitpaste-12 Worm Targets Linux Servers, IoT Devices

This script sets up a cron job it downloads from Pastebin. A cron job is a task scheduled by cron, the time-based job scheduler in Unix-like operating systems. The cron job calls a script and executes it again each minute; researchers believe that this script is presumably one mechanism by which updates can be pushed to the botnet. It then downloads a script from GitHub (https://raw[.]githubusercontent[.]com/cnmnmsl-001/-/master/shadu1) and executes it. The script contains comments in the Chinese language and has multiple commands available to attackers to disable different security capabilities. These include stripping the system’s defenses, including firewall rules, SELinux (a security architecture for Linux systems), and AppArmor (a Linux kernel security module that allows the system administrator to restrict programs’ capabilities), as well as common attack prevention and monitoring software. The malware also has some commands that disable cloud security agents, “which clearly indicates the threat actor intends to target public cloud computing infrastructure provided by Alibaba Cloud and Tencent,” said researchers. Gitpaste-12 also features commands allowing it to run a cryptominer that targets the Monero cryptocurrency.


Data Strategies for Efficient and Secure Edge Computing Services

There is a long list of design questions that comes with executing an IoT network: where does computation happen? Where and how do you store and encrypt data? Do you require encryption for data in motion or just at rest? How do you coordinate workflows across devices? And finally, how much does this cost? While this is an intimidating list, we can build good practices that have evolved both prior to the advent of IoT and more recently with the increasing use of edge computing. First, let’s take a look at computation and data storage. When possible, computation should happen close to the data. By minimizing transmission time, you reduce the overall latency for receiving results. Remember that distributing computation can increase overall system complexity, creating new vulnerabilities in various endpoints, so it’s important to keep it simple. One approach is to do minimal processing on IoT devices themselves. A data collection device may just need to package a payload of data, add routing and authentication to the payload, then send it to another device for further processing. There are some instances, however, where computing close to the collection site is necessary.
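
A small Python sketch of the "package a payload, add routing and authentication, then forward it" pattern described above; the field names, shared key, and destination are placeholders, assuming a simple HMAC scheme rather than any specific IoT protocol.

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"device-secret"  # illustrative; real deployments would use provisioned keys

def package_reading(device_id: str, reading: float, destination: str) -> bytes:
    """Do minimal work on the device: wrap a reading with routing info and an HMAC tag."""
    payload = {
        "device": device_id,
        "dest": destination,        # routing hint for the next hop
        "ts": int(time.time()),
        "value": reading,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return json.dumps({"body": payload, "hmac": tag}).encode()

def verify(message: bytes) -> bool:
    """On the aggregating edge node: authenticate before doing any heavier processing."""
    wrapper = json.loads(message)
    body = json.dumps(wrapper["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, wrapper["hmac"])

msg = package_reading("sensor-42", 21.7, "edge-gateway-1")
print(verify(msg))  # True
```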



Quote for the day:

"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson

Daily Tech Digest - November 06, 2020

Applying particle physics methods to quantum computing

In quantum computing, which relies on quantum bits, or qubits, to carry information, the fragile state known as quantum superposition is difficult to maintain and can decay over time, causing a qubit to display a zero instead of a one—this is a common example of a readout error. Superposition provides that a quantum bit can represent a zero, a one, or both quantities at the same time. This enables unique computing capabilities not possible in conventional computing, which relies on bits representing either a one or a zero, but not both at once. Another source of readout error in quantum computers is simply a faulty measurement of a qubit's state due to the architecture of the computer. In the study, researchers simulated a quantum computer to compare the performance of three different error-correction (or error-mitigation or unfolding) techniques. They found that the IBU method is more robust in a very noisy, error-prone environment, and slightly outperformed the other two in the presence of more common noise patterns. Its performance was compared to an error-correction method called Ignis that is part of a collection of open-source quantum-computing software development tools developed for IBM's quantum computers, and a very basic form of unfolding known as the matrix inversion method.
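
As a toy illustration of the matrix inversion baseline the study compares against, here is a hedged numpy sketch that corrects measured counts using an assumed single-qubit readout-error matrix; the numbers are invented and this is not the researchers' code.

```python
import numpy as np

# Illustrative single-qubit readout-error model:
# R[i, j] = probability of measuring outcome i when the true state is j.
R = np.array([
    [0.97, 0.05],   # measure 0 given true 0 / true 1
    [0.03, 0.95],   # measure 1 given true 0 / true 1
])

measured = np.array([520, 480])          # observed counts over 1,000 shots

# Matrix-inversion unfolding: solve R @ true = measured for the true counts.
unfolded = np.linalg.solve(R, measured)
print(unfolded)  # estimate of the 0/1 counts before readout error
```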


Common Challenges Facing Angular Enterprises - Stephen Fluin at Ngconf

The top concerns emerging from the conversations that Fluin had in the first trimester of this year are linked to user experience, micro front-ends, server-side rendering, monorepositories and code sharing, managing applications that are only partly Angular-based, and presenting a business case for the upgrade of Angular versions. A good user experience means fast initial load and seamless transitions. Fluin strongly recommended using the source-map-explorer npm package to monitor and analyze the composition of an Angular bundle: In enterprise conversations, this was actually identified as one of the most valuable things they had learned. Fluin also mentioned that simply by keeping up-to-date with the latest Angular versions, Angular developers will naturally benefit from smaller bundle sizes or an improved command-line interface implementing configurable optimization strategies (e.g., better bundling, server-side rendering). Fluin posited that seamless transitions between routes in Angular applications already was one of Angular’s strengths. Fluin then explained that the independent deployability characteristic of micro front-ends may come into tension with the recommended use of monorepositories to address other issues such as testing, code sharing, or dependency management.


How Shell is fleshing out a digital-twin strategy

According to Shell, the deployment of the simulation technology will also enable safe asset life extension by replacing the over-conservative estimates made with conventional simulation software, with accurate assessments that reflect actual remaining fatigue life. Elohor Aiboni, asset manager for Bonga, said: “The Bonga Main FPSO heralded a number of innovative ‘firsts’ when it was built back in 2004, so it’s fitting that it is the first asset of its kind to deploy something as advanced as a structural digital twin. We are very excited about the new capabilities that Akselos brings and believe it will create a positive impact on the way we manage structural integrity. It is also a great example of digitisation coming to life.” In a recent blog post, Victor Voulgaropoulos, industry analyst at Verdantix wrote: “Shell is again in the spotlight, as it seeks to further accelerate its digital transformation initiatives by implementing digital-twin solutions across its global portfolio of assets and capital projects. Shell has signed an enterprise framework agreement with Kongsberg Digital, a Kongsberg subsidiary, for the deployment of Kongsberg’s Kognitwin Energy, a cloud-based software-as-a-service digital-twin solution, within Shell’s upstream, liquified natural gas, and downstream business lines.”


How COVID-19 Changed the VC Investment Landscape for Cybersecurity Companies

Businesses have faced the need to find new and inventive ways to survive the "new normal." For many companies, this means digitizing existing processes and relying heavily on cloud-based services to enable workers to access corporate networks from their homes. But this presents myriad new problems for businesses. While the pandemic provides vast opportunities for digital transformation, it unfortunately creates the perfect storm for data breaches and hackers, too. Social distancing restrictions have forced firms to abandon the protections in the office in favor of enabling employees to work from home, where they might not have the same robust levels of security. Of course, VCs have kept their ears to the ground and are looking to cybersecurity and artificial intelligence (AI) startups as a means to mitigate these new vulnerabilities. Cybersecurity spending is forecast to grow approximately 9% a year from 2021 to 2024, according to Gartner, as businesses invest more heavily in identifying and quickly responding to threats. While large corporations have traditionally been responsible for huge amounts of private data that make cybersecurity a priority, the new virtual backdrop across all industries means that businesses of all shapes and sizes are looking to build the capabilities and defenses needed to keep malicious actors at bay.


NHS warned over Ryuk spreading through Trickbot replacements

“In recent weeks, we assess with high confidence that BazarBackdoor has been Ryuk’s most predominant loader,” said the firm. “With lower confidence, we assess this wave of Ryuk activity may be, in part, in retaliation for September’s TrickBot disruptions.” Bazar’s components are most often delivered in spear phishing campaigns operated via Sendgrid, a bona fide email marketing service. The emails contain links to Microsoft Office or Google Docs files, and the lure usually relates to a threat of employee termination or a debit payment. In turn, these emails link to the initial payload, a headless preliminary loader that ultimately downloads, unpacks and loads Bazar. The firm added that newer campaigns seem to forgo the spam distribution in favour of human-operated attacks against exposed admin interfaces or cloud services. Typically, once they have gained control of the target system using Bazar, Wizard Spider will download a post-exploitation toolkit, such as Cobalt Strike or Metasploit, to gather target information and enumerate the network, at which point they will harvest credentials to move into other systems and compromise the entire network – then they will deploy Ryuk ransomware. NHS Digital said current Bazar campaigns could accomplish this in under five hours.


Implementing a Staged Approach to Evolutionary Architecture

Traditionally, software architecture and design have been treated as initial phases. In this approach, the architecture decisions were considered valid for the entire life of the system. With the wisdom of ages and in reaction to industry transformations, we have started to see architecture as evolving. This evolution necessitates a different set of approaches oriented toward continuous planning, facilitated via continuous integration, dashboards, and tools, thus providing guide rails for systems to evolve. This article focuses on these approaches and tools to support the journey. We are in the midst of a rapidly changing environment. As Rebecca Parsons discussed in a presentation on evolutionary architecture, the changes span business models, requirements, and customer expectations. The technology landscape also changes quite often. In a broader sense, the changes are happening at an unparalleled rate and with an unparalleled impact on our environment. ... Smartphones reached major penetration in the last 10 years. Software, a key ingredient of all these, changes even faster. Sometimes, the software frameworks we use are no longer relevant by the time of release.


Digital Business Opportunities Surge With IoT-Based Sensors At The Edge

Sensor data from machines – wherever they are located – carries heightened importance in a pandemic-driven business environment of unpredictable starts and stops. That’s because it provides critical visibility into what’s going on within machines across the business. For example, Wallis reported a surge in customer inquiries about using IoT to accomplish maintenance tasks automatically, remotely, and safely. “Interest is high in IoT-enabled automation from organizations that want to get the job done with minimal employee risk and fewer productivity losses,” said Wallis. “Remote asset diagnostics and monitoring gives companies 24/7 visibility about machine performance, eliminating unnecessary physical maintenance calls. The same applies to procurement transparency, where sensors on items reduce the need for physical inspections.” But the benefits of IoT don’t stop there. Connected IoT-based data from machines was game-changing for a power generation company based in Italy, turning an essentially commoditized business into a value-based service that increased customer loyalty. Using SAP Internet of Things, SAP Edge Services, and SAP Predictive Maintenance and Service, the company brought data together from the edge, meaning machine performance at power plants worldwide, with data from various systems including supply chain, warehouse management, machine repair and maintenance.


Take back control of IT with cloud native IGA

Legacy solutions have painted themselves into the corner of maintaining a large amount of custom code. This makes upgrades costly, so they don’t happen. That means customers suffer by not being able to adopt new features, bug fixes and new capabilities to support their new business and compliance requirements. The primary reason why legacy software projects don’t get fully completed and go over budget is known as the 80/20 rule. Organizations can solve 80% of the problems or challenges they have with the software as it is, but everybody wants to solve that last 20%. And that 20% isn’t a quick fix – it takes 10 times the amount of time that first 80% took. Understandably, organizations want to try to tackle the more challenging problems, which always require high customization. It’s very difficult for organizations to maintain a highly customized code in their environments that the first generation of IGA products required. All those changes to the code will then need to be maintained. But modern IGA has learned from all the coding requirements of the past and now provides a much simpler way to give users different levels of access. The identity governance and administration market started with highly regulated businesses. However, all industries are now impacted.


How remote access technology is improving the world as we know it

Globalisation and a dramatic uptick in both the need and desire for remote working have resulted in a dispersed workforce — in which it is easy to lose both professional and personal connection. But the unprecedented speed of digital transformation, technologies such as 5G and improving consumer hardware such as smartphones, means that the prompt adoption of Augmented Reality (AR) in remote support is rapidly coalescing to close the connection gap. ... AR can be used to upskill these employees, and train new ones. When onboarding a new member of staff, ensuring that the employee is aware of the correct protocols and procedures is often critical. For example, when a new employee is familiarising themselves with a machine, an AR-capable smartphone or tablet can provide relevant training to ensure it’s operated correctly. If this technology were not available, uncertainties could lead to a break in compliance, safety issues, or even increased downtime — all critical issues in multiple industries, including manufacturing.  Today, this technology goes beyond needing an AR-capable device to hand though. Features such as session recording and being able to take a screenshot of the live video stream are increasingly being used to create a pool of expert knowledge that is readily available on demand. 


Value vs Time: an Agile Contract Model

The cost of bug fixes is included in the price, so our interest is to have as few bugs as possible in our software. This is obviously great value for our customers, but also for users who will run into fewer bugs while using the software. To do this, we use the common agile practices and methodologies such as TDD (test-driven development), Pair Programming, Pull / merge request management, and a strict procedure of verification and human tests before releasing to the customer. Also, continuous improvement techniques such as retrospective meetings and a lot of training help us deploy higher quality software. We have a clear DoD (Definition of Done) shared with the customer for each User Story (which also covers the UX / UI mockups for each US), and the teams are autonomous in managing the implementation part, while respecting the DoD and a minimum level of quality that is guaranteed by the practices and processes listed. Including any bug-fix in the User Story development cost also has a commercial advantage for Zupit. Customers don’t always "digest" that bugs are part of the software development process and aren’t happy to pay the cost of fixing them. A model where the supplier takes care of this aspect helps us to convince customers about the quality of our work and to close contracts more easily.



Quote for the day:

"The role of leaders is not to get other people to follow them but to empower others to lead." -- Bill George

Daily Tech Digest - November 05, 2020

Deep Neural Networks Help to Explain Living Brains

Artificial neural networks are built with interconnecting components called perceptrons, which are simplified digital models of biological neurons. The networks have at least two layers of perceptrons, one for the input layer and one for the output. Sandwich one or more “hidden” layers between the input and the output and you get a “deep” neural network; the greater the number of hidden layers, the deeper the network. Deep nets can be trained to pick out patterns in data, such as patterns representing the images of cats or dogs. Training involves using an algorithm to iteratively adjust the strength of the connections between the perceptrons, so that the network learns to associate a given input (the pixels of an image) with the correct label (cat or dog). Once trained, the deep net should ideally be able to classify an input it hasn’t seen before. In their general structure and function, deep nets aspire loosely to emulate brains, in which the adjusted strengths of connections between neurons reflect learned associations. Neuroscientists have often pointed out important limitations in that comparison: Individual neurons may process information more extensively than “dumb” perceptrons do, for example, and deep nets frequently depend on a kind of communication between perceptrons called back-propagation that does not seem to occur in nervous systems.
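
A minimal numpy sketch of the structure described: an input layer, one hidden layer of perceptron-like units (making the net "deep"), an output, and iterative adjustment of connection strengths via back-propagation. The toy task and hyperparameters are invented, not the researchers' models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: the label is 1 when the two inputs disagree (XOR-like).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of perceptron-like units between input and output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error and adjust connection strengths.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * (h.T @ d_out)
    b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * (X.T @ d_h)
    b1 -= 1.0 * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]] after training
```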


Future of Corporate Governance Through Blockchain-powered Smart Companies

In essence, a Smart Company is an entirely new form of business entity (like an LTD or IBC), except that it rivals all traditional models by being fully automated by blockchain. And certainly, it makes just that big of a difference. When you have the ability to run your business in a structure that is legally compliant yet all its transactions happen in real time and are verified directly on the blockchain, this changes the game. What this means for business owners is that managerial ownership structures become more transparent. Corporate voting is easier and more accurate, and secret strategies such as ‘empty voting’ become more difficult to execute. The ability to have corporate shares as ERC-20 tokens modified for securities laws offers the means to assert and transfer ownership and liabilities of real-world assets with actual value. Just to give you a rough understanding of the magnitude of this untapped potential, it has been estimated that the total value of illiquid assets, including real estate and gold, is no less than $11 trillion. That is roughly the nominal GDP of China, the world’s second-largest economy today. For shareholders, the Smart Company model offers nearly free trading and transparency in ownership records while simultaneously showing real-time transfers of shares from one owner to another.



Agile development: How to tackle complexity and get stuff done

Holt believes his key role as CTO is to create a culture in the organisation where his people feel comfortable and confident to try new things. Rather than being scared of risk-taking, he says tech leaders should encourage their IT professionals to innovate and develop customer-centred products and services in an iterative manner. "Those are the kind of people who aren't afraid of the complexity, who are able get in amongst it, and that's where you get really good solutions," he says. Holt says engaging with a challenge involves great teamwork. He says his organisation is always on the lookout for people who have an ability to manage complexity and the solution often involves agility in organisational culture as well as product development. ... Danny Attias, chief digital and information officer at British charity Anthony Nolan, says tech executives looking to deal with complexity must ensure they're working to create a joined-up organisation. More often than not, that means using Agile principles to break down problems into small parts that can be managed effectively across the organisation. "My career has been about decoupling dependencies wherever you possibly can," he says.


The world needs women who code

A lot of women are not aware of the power of IT. The industry’s reputation as a boy’s club belies the fact that women are actually rising in many technology fields, both in number and in title. They may think they have to already know a bunch of code to get started. It's likely that many women simply don’t realize how much opportunity there is for them, even as beginners. A slightly different, yet related, reason is fear. Because of the percentage of men in this field, some women may feel that there will be too much competition, that they won’t be able to measure up against men with experience, or that they'll be overlooked for men without experience. But nowadays, IT companies are making strong efforts to welcome and support women, conducting various programs to encourage women to learn about various tech disciplines, and provide pathways for them to join the industry. And whenever a woman joins this industry, it gives a boost of confidence to other women too. I constantly get inspired by the many women I know that are doing amazing things in tech. ... Admittedly, coding can seem overwhelming in the beginning, but don’t worry—it’s like that for almost everyone. Soon enough, what seems like gibberish at first starts to come together, and you learn to harness it to make things work and accomplish tasks. 


Kafka at the Edge — Use Cases and Architectures

Event streaming with Apache Kafka at the edge is not cutting edge anymore. It is a common approach to providing the same open, flexible, and scalable architecture at the edge as in the cloud or data center. Possible locations for a Kafka edge deployment include retail stores, cell towers, trains, small factories, restaurants, etc. I already discussed the concepts and architectures in detail in the past: "Apache Kafka is the New Black at the Edge" and "Architecture patterns for distributed, hybrid, edge and global Apache Kafka deployments". This blog post is an add-on focusing on use cases across industries for Kafka at the edge. To be clear before you read on: Edge is NOT a data center. And "Edge Kafka" is not simply yet another IoT project using Kafka in a remote location. Edge Kafka is actually an essential component of a streaming nervous system that spans IoT (or OT in Industrial IoT) and non-IoT (traditional data-center/cloud infrastructures). The post's focus is scenarios where the Kafka clients AND the Kafka brokers are running on the edge. This enables edge processing, integration, decoupling, low latency, and cost-efficient data processing. Some IoT projects are built like “normal Kafka projects”, i.e., built in the (edge) data center or cloud. 
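
As a hedged sketch of what an edge-local Kafka client can look like, here is a minimal producer using the confluent-kafka Python package; the broker address, topic name, and payload are placeholders.

```python
import json
from confluent_kafka import Producer

# The broker runs at the edge site itself (e.g., in the store or factory),
# so events are processed locally even if the WAN link to the cloud is down.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    """Report whether the local broker accepted the event."""
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}]")

event = {"sensor": "checkout-3", "status": "ok"}
producer.produce("edge-events", value=json.dumps(event), callback=on_delivery)
producer.flush()
```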


How smartphones became IoT’s best friend and worst enemy

Relying on the ubiquity of smartphones and the rise of remote controls, users and vendors alike have embraced the move away from physical device interfaces. This evolution in the IoT ecosystem, however, brings major benefits AND serious drawbacks. While users enjoy the remote capabilities of companion apps and vendors bypass the need for hardware interfaces, studies show that they present serious cybersecurity risks. For example, the communication between an IoT device and its app is often neither properly encrypted nor authenticated – and these issues enable the construction of exploits to achieve remote control of victims’ devices. It is important to explain that connected devices have not always been this way. I’m sure others like myself do not need to cast their minds far back to remember a time when smartphones did not even exist. User input during these halcyon days relied on physical interfaces on the device itself, interfaces that typically consisted of basic touch screens or two-line LCD displays. Though functional, these physical interfaces were certainly limited (and limiting) when compared to the applications that superseded them. Devices without physical interfaces are smaller, consume less power, and look better. 


Singapore government rolls out digital signature service

Called Sign with SingPass, the service is being rolled out by Assurity, a subsidiary of the Government Technology Agency (GovTech), together with eight digital signing application providers, including DocuSign, Adobe and Kofax. GovTech said each digital signature is identifiable and cryptographically linked to the signer, while signed documents are platform agnostic and can be viewed with the user’s preferred system. No document data will be transferred during the digital signing process. Assurity will also issue digital certificates for signatures created under the service. Upon Assurity’s accreditation under Singapore’s Electronic Transactions Act, signatures made with the service will be regarded as secure electronic signatures. GovTech said the service will be useful for organisations and their customers amid the growing number of online transactions and will test the service with the Singapore Land Authority (SLA) for the digital signing of property caveats in the coming weeks. Kok Ping Soon, chief executive of GovTech, said the high security document signing service will help businesses save cost and manpower by alleviating the need to manually verify physical paperwork.
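
To illustrate what "cryptographically linked to the signer" means in general, here is a minimal sketch using Ed25519 keys from the Python cryptography package; it is not the Sign with SingPass API, and the document text is invented.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The signer holds the private key; the public key goes into their certificate.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"Property caveat: lot 123, purchaser Jane Tan"
signature = private_key.sign(document)

# Anyone with the public key can check that this signer signed this exact document.
try:
    public_key.verify(signature, document)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# Any change to the document invalidates the signature.
try:
    public_key.verify(signature, document + b" (amended)")
    print("tampered document accepted?!")
except InvalidSignature:
    print("tampering detected")
```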


Is your approach to data protection more expensive than useful?

With the recent increase in cyberattacks and exponential data growth, protecting data has become job one for many IT organizations. And in many cases, their biggest hurdle is managing an aging backup infrastructure with limited resources. Tight budgets should not discourage business leaders from modernizing data protection. Organizations that hang on to older backup technology don't have the tools they need to face today's threats. Rigid, siloed infrastructures aren't agile or scalable enough to keep up with fluctuations in data requirements, and they are based on an equally rigid backup approach. Traditional backup systems behave like insurance policies, locking data away until it's needed. That's like keeping an extra car battery in the garage, waiting for a possible crisis. The backup battery might seem like a reasonable preventive measure, but most of the time, it's a waste of space. And if the crisis never arises, it's an unnecessary upfront investment, making it more expensive than useful. In the age of COVID-19, where cash is king and on-site resources are particularly limited, some IT departments are postponing data protection modernization, looking to simplify overall operations and lower infrastructure costs first. That plan can block a company's progress. 


Taking Control of Confusing Cloud Costs

It’s difficult to compare services across multiple clouds, because each provider uses different terminology. What Azure calls a ‘virtual machine’ is called a ‘virtual machine instance’ on GCP and just an ‘instance’ on AWS. A group of these instances would be called ‘autoscaling groups’ on both Amazon and GCP, but Scale Sets on Azure. It’s hard to even keep up with what it is you’re purchasing, and whether another cloud even offers a comparable service, because the naming conventions differ. As outlined above in regards to the simple web application using Lambda, it would be very time consuming for someone to compare what it would cost to host a web application in one cloud versus another. It would take technical knowledge of each cloud provider to be able to translate how you could comparably host it with one set of services against another before you even got into prices. Cloud pricing uses an on-demand model, which is a far cry from on-prem, where you could deploy things and leave them running 24/7 without affecting the cost (bar energy). In the cloud, everything is based on the amount of time you use it, either on a per-hour, per-minute, per-request, per-amount or per-second basis.
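
A back-of-the-envelope Python sketch of how on-demand pricing shapes such a comparison; the per-hour and per-request rates are placeholders, not real provider prices.

```python
# Hypothetical rates -- real prices vary by provider, region, and instance type.
VM_PER_HOUR = 0.10              # always-on virtual machine
FUNCTION_PER_REQUEST = 0.0000002
FUNCTION_GB_SECOND = 0.0000166667

def monthly_vm_cost(hours=730):
    """Cost of keeping one VM running all month, regardless of traffic."""
    return hours * VM_PER_HOUR

def monthly_function_cost(requests, avg_seconds=0.2, memory_gb=0.5):
    """Cost of a serverless function billed per request and per GB-second."""
    compute = requests * avg_seconds * memory_gb * FUNCTION_GB_SECOND
    return requests * FUNCTION_PER_REQUEST + compute

for monthly_requests in (100_000, 10_000_000, 500_000_000):
    print(monthly_requests,
          round(monthly_vm_cost(), 2),
          round(monthly_function_cost(monthly_requests), 2))
```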


Five ways to avoid digital transformation fatigue

Change fatigue stems from uncertainty and a lack of clarity around the strategic intent and implementation of the program. Too often, digitalisation and new tools are introduced into the company without proper project planning or thinking about how the benefits will be explained to the employees. Take a deep dive into the value proposition narrative before the new digital tool is implemented. Start by finding out if the management and leadership teams are aligned on the transformation's strategic intent and outcomes. If not, then you need to go back to the drawing board. This should ideally map out clear target business outcomes as well as the impact of the transformation on the people, processes, and tools: what’s happening and how it will affect them. Many workers might feel that they should be doing their 'actual job' instead of learning how to navigate something that they are not sure will benefit them. Be ready to present to each role the necessity of the new tool, and avoid explaining it in a way that sounds like the company is the only one that will benefit from it. Incentives for the employees should be clearly stated before the change starts.



Quote for the day:

"Don't just see what others do to you. Also see what you do to others." -- The Golden Mirror

Daily Tech Digest - November 04, 2020

Reworking the Taxonomy for Richer Risk Assessments

With pre-assessment and planning, you need to think about the desired outcome (i.e., identify the risks to the facility) and identify the necessary actions to mitigate or eliminate the risks and associated vulnerabilities. The flow chart above is a detailed view of this phase and includes collecting and digesting documents, identifying the team members and the necessary skill sets, and getting ready for travel. Of course, contacting the "customer" and setting up the necessary on-site logistics are important. ... Don't forget these threats and vulnerabilities can be cyber or physical. They can also be part of the site management and culture. What about training or lack thereof? They can all contribute to the risk profile of the facility. The graphic above offers some elements of the on-site activities. You can see that we have inspections, observations, taking photographs, and looking at the site network and architecture. Even a cyber-vulnerability scan may be part of the site assessment. These activities are intended to be part of the site assessment plan. However, don't let the plan place barriers on your site risk reviews. Feel free to follow leads and evidence of problems, since that is why you are on-site rather than doing a remote risk assessment via Zoom.


How blockchain is set to revolutionize the healthcare sector

Despite its potential, data portability across multiple systems and services is a real issue. There is nothing more valuable and personal to an individual than their personal medical records, so making data shareable across services will inevitably raise concerns around the spectre of data being misused. Currently, data does not flow seamlessly across technology solutions within healthcare. For example, in the UK your hospital records do not form part of your GP records, but the advantages are clear in terms of treatment and preventative care were they to do so. Unfortunately, it is not likely a centralised storage and delivery system will get traction until there is one that can ensure the appropriate encryption and security. The risks are simply too high. Yet, it is an issue that a technology like blockchain can tackle. This is because the purpose of the chain is to store a series of transactions in a way that cannot be altered or changed. What renders it immutable is the combination of two opposing things: the cryptography and its openness. Each transaction is signed with a private key and then distributed amongst a peer-to-peer set of participants. Without a valid signature, new blocks created by data changes are ignored and not added to the chain. 
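
A toy Python sketch of the two properties described, signing each record with a private key and chaining blocks so altered data is rejected; it ignores consensus and networking, and the record contents are invented.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()           # the record author's private key

def make_block(prev_hash: str, record: dict) -> dict:
    """Sign the record and link the new block to the previous block's hash."""
    body = json.dumps(record, sort_keys=True).encode()
    return {
        "prev": prev_hash,                                          # chain link
        "record": record,
        "sig": key.sign(body).hex(),                                # author's signature
        "hash": hashlib.sha256(prev_hash.encode() + body).hexdigest(),
    }

genesis = make_block("0" * 64, {"patient": "anon-1", "event": "GP visit"})
block2 = make_block(genesis["hash"], {"patient": "anon-1", "event": "hospital discharge"})

# Tampering with the first record breaks the chain: block2 no longer points at its hash.
genesis["record"]["event"] = "nothing happened"
body = json.dumps(genesis["record"], sort_keys=True).encode()
recomputed = hashlib.sha256(genesis["prev"].encode() + body).hexdigest()
print(recomputed == block2["prev"])  # False -> peers would reject the altered history
```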


UX Patterns: Stale-While-Revalidate

Stale-while-revalidate (SWR) caching strategies provide faster feedback to the user of web applications, while still allowing eventual consistency. Faster feedback reduces the necessity to show spinners and may result in better-perceived user experience. ... Developers may also use stale-while-revalidate strategies in single-page applications that make use of dynamic APIs. In such applications, oftentimes a large part of the application state comes from remotely stored data (the source of truth). As that remote data may be changed by other actors, fetching it anew on each request guarantees to always return the freshest data available. Stale-while-revalidate strategies replace the requirement of always having the latest data with that of having the latest data eventually. The mechanism works in single-page applications in a similar way as in HTTP requests. The application sends a request to the API server endpoint for the first time, caches and returns the resulting response. The next time the application makes the same request, the cached response will be returned immediately, while the request simultaneously proceeds asynchronously. When the response is received, the cache is updated, with the appropriate changes to the UI taking place.
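
A minimal single-process Python sketch of the stale-while-revalidate flow, with a hypothetical fetch function standing in for the API call.

```python
import threading
import time

_cache = {}          # key -> (value, fetched_at)
_lock = threading.Lock()

def fetch_from_api(key):
    """Stand-in for the real network request (hypothetical)."""
    time.sleep(0.5)
    return f"fresh value for {key} at {time.time():.0f}"

def swr_get(key):
    """Return cached data immediately if present, then refresh it in the background."""
    with _lock:
        cached = _cache.get(key)
    if cached is None:
        # No cached entry yet: the first call has to wait for the network.
        value = fetch_from_api(key)
        with _lock:
            _cache[key] = (value, time.time())
        return value

    def revalidate():
        value = fetch_from_api(key)
        with _lock:
            _cache[key] = (value, time.time())

    threading.Thread(target=revalidate, daemon=True).start()
    return cached[0]            # stale value, returned without waiting

print(swr_get("user/42"))   # slow: cache miss, must wait for the fetch
print(swr_get("user/42"))   # instant: stale value, refresh happens in the background
```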


The Inevitable Rise of Intelligence in the Edge Ecosystem

Edge computing is becoming an integral part of the distributed computing model, says Nishith Pathak, global CTO for analytics and emerging technology with DXC Technology. He says there is ample opportunity to employ edge computing across industry verticals that require near real-time interactions. “Edge computing now mimics the public cloud,” Pathak says, in some ways offering localized versions of cloud capabilities for compute, networking, and storage. Benefits of edge-based computing include avoiding latency issues, he says, and anonymizing data so only relevant information moves to the cloud. This is possible because “a humungous amount of data” can be processed and analyzed by devices at the edge, Pathak says. This includes connected cars, smart cities, drones, wearables, and other internet of things applications that consume on-demand compute. The population of devices and the scope of infrastructure that support the edge are expected to accelerate, says Jeff Loucks, executive director of Deloitte’s center for technology, media and telecommunications. He says implementations of the new communications standard have exceeded initial predictions that there would be 100 private 5G network deployments by the end of 2020. “I think that’s going to be closer to 1,000,” he says.


Take a Dip into Windows Containers with OpenShift 4.6

Windows Operating System in a container? Who would have thought?!? If you asked me that question a few years back, I would have told you with conviction that it would never happen! But if you ask me now, I will answer you with a big, emphatic yes and even show you how to do so! In this article, I will demonstrate how you can run Windows workloads in OpenShift 4.6 by deploying a Windows container on a Windows worker node. In addition, I will highlight some of the issues and challenges that I see from a system administrator's perspective. ... For customers who have heterogeneous environments with a mix of Linux and Windows workloads, the announcement of a supported Windows container feature in OpenShift 4.6 is exciting news. As of this writing, the supported workloads for Windows containers are .NET Core applications, traditional .NET Framework applications, and other Windows applications that run on a Windows server. So when did the work to make Windows containers run on top of OpenShift start? In 2018, Red Hat and Microsoft announced a joint engineering collaboration with the goal of bringing a supported Windows containers feature to OpenShift.


GPS and water don't mix. So scientists have found a new way to navigate under the sea

Underwater devices already exist, for example trackers fitted on whales, but they typically act as sound emitters. The acoustic signals produced are intercepted by a receiver that can in turn figure out the origin of the sound. Such devices require batteries to function, which means they need to be replaced regularly – and when it is a migrating whale wearing the tracker, that is no simple task. The UBL system developed by MIT's team, on the other hand, reflects signals rather than emitting them. The technology builds on so-called piezoelectric materials, which produce a small electrical charge in response to vibrations. This charge can be used by the device to reflect the vibration back in the direction from which it came. In the researchers' system, therefore, a transmitter sends sound waves through water towards a piezoelectric sensor. When they hit the device, the acoustic signals trigger the material to store an electrical charge, which is then used to reflect a wave back to a receiver. Based on how long it takes for the sound wave to reflect off the sensor and return, the receiver can calculate the distance to the UBL. "In contrast to traditional underwater acoustic communication systems, which require each sensor to generate its own signals, backscatter nodes communicate by simply reflecting acoustic signals in the environment," said the researchers.
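As a back-of-the-envelope illustration of that time-of-flight calculation (a sketch, not the MIT team's code), the receiver can halve the round-trip time and multiply by the speed of sound in seawater, roughly 1,500 m/s:

```typescript
// Speed of sound in seawater is roughly 1500 m/s (it varies with temperature,
// salinity and depth, so a real system would calibrate this value).
const SPEED_OF_SOUND_WATER = 1500; // m/s

// Round-trip time of flight -> one-way distance to the backscatter node.
function distanceToNode(roundTripSeconds: number): number {
  return (SPEED_OF_SOUND_WATER * roundTripSeconds) / 2;
}

console.log(distanceToNode(0.2)); // 150 metres for a 200 ms round trip
```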


Temporal Tackles Microservice Reliability Headaches

Temporal consists of a programming framework (or SDK) and a managed service (or backend). The core abstraction in Temporal is a fault-oblivious stateful Workflow with business logic expressed as code. The state of the Workflow code, including local variables and threads it creates, is immune to process and Temporal service failures. Temporal supports the programming languages Java and Go, and has SDKs in the works for Ruby, Python, Node.js, C#/.NET, Swift, Haskell, Rust, C++ and PHP. In the event of a failure while running a Workflow, state is fully restored to the line in the code where the failure occurred and the process continues without developer intervention. One of the restrictions on Workflow code, however, is that it must produce exactly the same result each time it is executed, which rules out external API calls. Those must be handled through what Temporal calls Activities, which the Workflow orchestrates. An Activity is a function or an object method in one of the supported languages; Activity invocations are placed on task queues until an available worker picks them up and runs the implementation function. When the function returns, the worker reports its result to the Temporal service, which then notifies the Workflow of its completion.
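The split between deterministic Workflow code and side-effecting Activities can be sketched conceptually as follows. This is an illustrative TypeScript outline of the pattern described above, not the Temporal SDK itself (which, as noted, shipped for Java and Go at the time of writing); all names here are hypothetical.

```typescript
// Conceptual sketch only, not the Temporal SDK.
// Activities wrap non-deterministic work such as external API calls; their
// results are recorded so the Workflow can be replayed deterministically.
type Activity<TIn, TOut> = (input: TIn) => Promise<TOut>;

const chargeCustomer: Activity<{ orderId: string }, { receiptId: string }> =
  async ({ orderId }) => {
    // The external API call lives here, outside the Workflow itself.
    return { receiptId: `receipt-for-${orderId}` };
  };

// Workflow: plain business logic. Given the same recorded Activity results,
// re-executing this function must produce exactly the same decisions.
async function orderWorkflow(orderId: string): Promise<string> {
  const { receiptId } = await chargeCustomer({ orderId });
  return `order ${orderId} completed with ${receiptId}`;
}
```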


The Cybersecurity Myths We Hear Ourselves Saying

There is a widely held belief — including from 19% of respondents — that the brands you can trust won't take advantage of you and that they will protect your data, as they surely do everyone else's. However, the reality is that almost all mainstream sites are collecting data about you, and if they're not profiting off that data themselves, then there is a very good chance that hackers are. The more sites you go to, even trusted ones, the more cookies are held in your browser. What's more, by surfing to numerous sites, not only are you providing more data about yourself, but you're also creating more pools of data held by the various sites you visit. Applying basic theories of probability, increasing the number of pools increases the probability that any one of them will be breached. The hard truth is that the only way to effectively ensure privacy is to disconnect from the internet. Failing that, another good way to protect data is to encrypt your internet traffic using a VPN. A VPN adds an extra layer of encrypted protection to a secured Wi-Fi network, preventing corporate agents from tracking you while you're online.
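That probability argument is easy to make concrete. Assuming, purely for illustration, that each site holding your data has an independent probability p of being breached in a given period, the chance that at least one of n such pools is breached is 1 - (1 - p)^n, which climbs quickly as n grows:

```typescript
// Probability that at least one of n independent data pools is breached,
// given each has probability p of being breached (illustrative assumption).
function anyBreached(p: number, n: number): number {
  return 1 - Math.pow(1 - p, n);
}

console.log(anyBreached(0.05, 1));  // 0.05 for a single site
console.log(anyBreached(0.05, 20)); // ~0.64 across twenty sites
```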


Running React Applications at the Edge with Cloudflare Workers

Cloudflare Workers are a cool technology introduced by Cloudflare a couple of years ago. Normally, you might have a server living in a data center somewhere in the world. You’ll likely put a CDN in front of that to handle caching and manage the load. But imagine having the power of a server directly inside your CDN’s data center. This is what Cloudflare Workers offers: a way to execute code directly at the edge of the CDN. This is a really powerful way to manage and modify requests going to and from your origin server, but it also opens up a whole new set of possibilities: instead of paying for and managing your own server, you can use Cloudflare Workers as your origin. This means lightning-fast responses directly at the edge without a round trip to another data center. ... These patterns are what inspired Flareact. Cloudflare Workers offers a Workers Sites feature that allows you to host a static site on top of Cloudflare Workers, with assets stored in a KV [Key/Value] store at the edge. This, combined with the underlying Workers dynamic platform, seemed like the perfect use case for Next.js. However, due to technical constraints, it proved too difficult to get Next.js working on Cloudflare Workers. So I set out to build my own framework modeled after Next.js.
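For context, a minimal Worker that answers requests directly at the edge, acting as its own origin, looks roughly like this. It uses the fetch-event style of the Workers runtime as it existed around this time; the FetchEvent type is assumed to come from Cloudflare's workers-types package, and the response body is just a placeholder:

```typescript
// A minimal Cloudflare Worker: the Worker itself acts as the origin,
// answering requests directly from the edge data center.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  // Placeholder HTML; a framework like Flareact would render a React page here.
  return new Response("<h1>Hello from the edge</h1>", {
    headers: { "content-type": "text/html" },
  });
}
```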


The future is female: overcoming the challenges of being a woman in tech

Self-doubt affects everyone, but being in an industry in which you are outnumbered by the opposite gender is particularly tough. According to TrustRadius, three out of four tech professionals have experienced imposter syndrome at work, but women are 22% more likely than men to feel this way. Sheryl Sandberg even said that women in tech “hold ourselves back in ways both big and small, by lacking self-confidence, by not raising our hands, and by pulling back when we should be leaning in.” This is unsurprising, as women are typically taught not to brag from an early age. Self-marketing might feel egotistical and uncomfortable at first, but it definitely feels more natural with practice! Confidence comes with knowledge; with technology constantly evolving as new software and systems are created, women making their way in tech should continue to learn as much as possible. Being on top of new developments will get you noticed and make it easier to advocate for yourself. But, if you don’t feel comfortable selling yourself, let others do this for you. Ask trusted clients, colleagues and contacts to give testimonials – many will be delighted to do so – and sing the praises of those around you, as people will return the favour.



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray