Daily Tech Digest - August 28, 2020

The Merging Of Human And Machine. Two Frontiers Of Emerging Technologies

The field of human and biological technologies has many applications for medical science. These include precision medicine, genome sequencing and gene editing (CRISPR), cellular implants, and wearables that can be implanted in the human body. The medical community is experimenting with delivering nano-scale drugs (including antibiotic “smart bombs”) to target specific strains of bacteria. Soon it will be able to implant devices such as bionic eyes and bionic kidneys, or artificially grown and regenerated human organs. Succinctly, we are on the cusp of significantly upgrading the human ecosystem. It is indeed revolutionary. This revolution will expand exponentially in the next few years. We will see the merging of artificial circuitry with signatures of our biological intelligence, retrieved in the form of electric, magnetic, and mechanical transductions. Retrieving these signatures will be like taking pieces of cells (including our tissue-resident stem cells) in the form of “code” for their healthy, diseased or healing states, or a code for their ability to differentiate into all the mature cells of our body. This process will represent an unprecedented glimpse of human identity.


Five ‘New Normal’ Imperatives for Retail Banking After COVID-19

The current financial crisis highlights an already trending need for responsible, community-minded banking. How financial institutions respond to the COVID-19 crisis — and the actions they take as the economy begins to right itself — will influence their reputations in the long term. Personalized service and community-mindedness have never been more important. The approach to providing them, however, will often be different from the past. Data-powered audience segmentation can help banks and credit unions proactively anticipate the needs of their customers, then offer services and solutions to meet them. Voice-of-consumer and social listening tools can help financial institutions understand and monitor their brand perception. It’s important to develop a process and allocate resources to engage with consumers in the digital space. For example, when complaints or concerns are raised on social media or other channels, they should be triaged quickly and effectively. If this capability is something you have previously put off developing, it’s time to re-prioritize. According to EY’s Future Consumer Index, only 17% of consumers surveyed said they trusted their financial institutions in a time of crisis.


The 7 Benefits of DataOps You’ve Never Heard

The convergence of point-solution products into end-to-end platforms has made modern DataOps possible. Agile software that manages, governs, curates, and provisions data across the entire supply chain enables efficiencies, detailed lineage, collaboration, and data virtualization, to name a few benefits. While many point solutions will continue to exist, success today comes from having a layer of abstraction that connects and optimizes every stage of the data lifecycle, across vendors and clouds, in order to streamline and protect the full ecosystem. As machine-learning and AI applications expand, the successful outcome of these initiatives depends on expert data curation, which involves the preparation of the data, automated controls to reduce the risks inherent in data analysis, and collaborative access to as much information as possible. Data collaboration, like other types of collaboration, fosters better insights and new ideas, and overcomes analytic hurdles. While often considered a downstream discipline, providing collaboration features across data discovery, augmented data management, and provisioning results in better AI/ML outcomes. In our COVID-19 age, collaboration has become even more important, and the best of today’s DataOps platforms offer benefits that break down the barriers of remote work, departmental divisions, and competing business goals.


Generation Data: the future of cloud era leaders

What’s more, with most organisations adopting multiple cloud environments, data is more fragmented than ever before. As such, businesses are looking to data governance specialists (not just data scientists, but data engineers too) to ensure that there is a catalogue of where the data resides across the different landscapes, and that it’s well secured and well governed. It’s important to have people who can spot risks associated with where data is – or in some cases isn’t – stored, whilst deploying artificial intelligence (AI) in new roles to secure it within cloud environments. Cloud specialists can take on several different job titles within the business, and at some organisations a single data leader like the CDO must seamlessly shift between multiple roles in order to achieve success. Meanwhile at others, a team of data leaders, each having a specialised role under a unified data strategy, is the model for success. What’s clear is that as data becomes part of everyone’s working lives, to ensure we’re not short on talent, businesses need to engage with a wide range of individuals, such as citizen integrators and citizen analysts, to upskill within existing roles and to truly democratise data. This means equipping existing and future employees with the skills needed to garner insights from existing data sets.


Standing Out with Invisible Payments: The Banking-as-a-Service Paradox

Industry players, such as regulators, FinTech partners, and businesses in the banking, financial services and insurance industries, are starting to realise that it is not ideal to be a ‘jack-of-all-trades’. In fact, the core of BaaS is built upon strategic collaboration. As such, there should be more acceptance of strengths and weaknesses from financial players so they can better identify what they are good at and what they need help with. Essentially, financial players need to ‘piggyback’ on either big banks or other financial service partners with strong regulatory license networks. Furthermore, if they can identify a market that is underpenetrated, this is a good opportunity to work with existing players to fill the gap. For instance, the recent partnership between InstaReM and SBM Bank India allows users to remit money to more markets and send funds overseas in real time. With SBM Bank India as its licensed banking partner, InstaReM will facilitate international money transfers from India to an expanded list of markets, including new destinations such as Malaysia and Hong Kong. In another example, Nium’s partnership with Geoswift, an innovative payment technology company, will enable overseas customers to remit money into China.


How state and local governments can better combat cyberattacks

Hit by ransomware and other attacks, state and local governments are obviously aware of the need for strong cybersecurity. And they have taken certain measures to beef up security. Many local governments have hired top cybersecurity people and created more effective teams. The recent Cyberspace Solarium Commission, established by Congress, stressed the need for better security coordination among local governments, the federal government, and the private sector. The State and Local Government Cybersecurity Act of 2019, legislation passed last year, is designed to foster greater collaboration among the different parties. But government agencies are not all alike, especially at the local vs. state level. Differences exist in funding and preparedness. Security policies can vary from one agency to another. Plus, the effort to digitize systems and services at such a rapid pace means that security sometimes gets left behind. Looking at open-source data on 108 cyberattacks on state and local municipalities from 2017 to late 2019, BlueVoyant found that the number rose by almost 50%. Over the same period, ransomware demands surged from a low of $30,000 a month to as high as almost $500,000 in July 2019, according to the report.


How AI can enhance your approach to DevOps

Companies can resort to AI data mapping techniques to accelerate data transformation processes. At the same time, machine learning (ML) used in data mapping will also automate data integrations, allowing businesses to extract business intelligence and drive important business decisions quickly. Taking it a step further, organizations can push for AI/ML-powered DevOps for self-healing and self-managing processes, preventing abrupt disruptions and script breaks. Besides that, organizations can use AI to recommend ways to write more efficient and resilient code, based on the analysis of past application builds and performance. The ability of AI and ML to scan through troves of data with higher precision will play an essential role in delivering better security. Through a centralized logging architecture, employees can detect and highlight any suspicious activities on the network. With the help of AI, organizations can track and learn the hacker’s motive in trying to breach a system. This capability will help DevOps teams navigate existing threats and mitigate their impact. Communication is also a vital component of a DevOps strategy, but it’s often one of the biggest challenges when organizations move to the methodology, given how much information flows through the system.
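
To make the anomaly-detection idea above concrete, here is a minimal sketch (not from the article) of unsupervised anomaly detection over aggregated log features, using scikit-learn's IsolationForest; the feature names, numbers, and contamination setting are illustrative assumptions.

```python
# Minimal sketch: flagging anomalous activity in centralized logs.
# Assumes logs have already been aggregated into simple numeric features;
# feature names and values below are illustrative, not from the article.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, failed_logins, bytes_out_mb] for one host
baseline = np.array([
    [120, 1, 5.2],
    [130, 0, 4.8],
    [115, 2, 5.5],
    [125, 1, 5.0],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# New observations arriving from the centralized logging pipeline
new_activity = np.array([
    [128, 1, 5.1],    # looks like normal traffic
    [900, 45, 80.0],  # burst of failed logins and data egress
])

flags = model.predict(new_activity)  # 1 = normal, -1 = anomaly
for row, flag in zip(new_activity, flags):
    if flag == -1:
        print("Suspicious activity:", row)
```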


Shifting the mindset from cloud-first to cloud-best using hybrid IT

Those who are approaching the cloud for the first time face the classic question around which type of service to choose, public or private. Both have different use cases and can be critical for businesses in achieving their objectives. For instance, the public cloud is agile, scalable and simple to use, great for teams looking to get up and running quickly. However, the private cloud offers its own benefits, chiefly a greater degree of control over data and performance. As organisations hosting their data in a private cloud are in full control of that data, there’s typically a more consistent security posture and a greater degree of flexibility and control over how that data is used and managed. Moreover, the private cloud can typically deliver faster, higher-throughput environments for those mission-critical applications that cannot run in the public cloud without business-impacting performance issues. However, companies risk getting caught in the cloud divide, feeling as though public cloud is not appropriate for their enterprise applications, or that on-prem enterprise infrastructure isn’t as user-friendly, simple or scalable as the public cloud. Ultimately organisations should be able to make infrastructure choices based on what’s best for their business, not constrained by what the technology can do or where it lives.


5 Critical IT Roles for Rapid Digital Transformation

Information security leaders are the individuals who protect the information and activity of an organization. These professionals lead the charge in establishing the appropriate security standards and implementing the best policies and procedures needed to prevent security breaches and data theft. As more information and activity happens within the cloud infrastructure during a transformation, security has to be a top priority. Given the current situation, the rise of online activity has led to an increase in cyber-attacks. With digital transformation, businesses need assurance that their technologies are adequately protected. An InfoSec leader will help quarterback the security game plan as well as monitor for abnormal activity and handle the recovery should any issues arise. Since data analysts are there to retrieve, gather and analyze data, they hold an important role in the digital transformation journey. Technology opens the doors to a world of data that must be uncovered and understood to deliver any real value. The insights data analysts can provide allow organizations to take a data-driven approach to the decision-making process. Since there is a lot of uncertainty in the current business climate, data analysts are a huge asset.


Facing gender bias in facial recognition technology

Our team performed two separate tests – the first using Amazon Rekognition and the second using Dlib. Unfortunately, with Amazon Rekognition we were unable to unpack just how their ML modeling and algorithm work due to transparency issues (although we assume it’s similar to Dlib). Dlib is a different story, and uses local resources to identify faces provided to it. It comes pretrained to identify the location of a face, offering two face-location finders: HOG, a slower CPU-based algorithm, and CNN, a faster algorithm that makes use of the specialized processors found in graphics cards. Both services provide match results with additional information. Besides the match found, a similarity score is given that shows how closely a face matches the known face. If the face on file doesn’t exist, a similarity threshold set too low may incorrectly match a face. However, a face can have a low similarity score and still match when the image doesn’t show the face clearly. For the data set, we used a database of faces called Labeled Faces in the Wild, and we only investigated faces that matched another face in the database. This allowed us to test matching faces and similarity scores at the same time. Amazon Rekognition correctly identified all pictures we provided.
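
For readers who want to see what the dlib-style matching looks like in practice, below is a minimal sketch using the face_recognition library (a Python wrapper around dlib); the image file names and the 0.6 tolerance are assumptions for illustration, not values from the team's tests.

```python
# Minimal sketch of dlib-based face matching via the face_recognition wrapper.
# File names and the tolerance (similarity threshold) are illustrative only.
import face_recognition

known_image = face_recognition.load_image_file("person_on_file.jpg")
candidate_image = face_recognition.load_image_file("new_photo.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
candidate_encoding = face_recognition.face_encodings(candidate_image)[0]

# Lower distance = more similar; the tolerance sets how strict a "match" is.
distance = face_recognition.face_distance([known_encoding], candidate_encoding)[0]
is_match = face_recognition.compare_faces(
    [known_encoding], candidate_encoding, tolerance=0.6
)[0]

print(f"distance={distance:.3f}, match={is_match}")
# A loose tolerance risks false matches when the person isn't in the database;
# a strict one risks missing true matches when the photo is unclear.
```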



Quote for the day:

"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher

Daily Tech Digest - August 27, 2020

Different Ways In Which Enterprises Can Utilize Business Intelligence

Embedded BI is simply the integration of self-service BI into commonly used business applications. BI tools deliver an improved user experience with visualization, real-time analytics and interactive reporting. A dashboard might be provided within the application to show important information, or different diagrams, charts and reports might be generated for immediate review. Some forms of embedded BI extend functionality to mobile devices, so that a distributed workforce can access the same business intelligence for collaborative efforts in real time. At a more advanced level, embedded BI can become a piece of workflow automation, so that specific actions are triggered automatically based on parameters set by the end user or other decision makers. Despite the name, embedded BI is normally deployed alongside the enterprise application rather than being hosted within it. Both web-based and cloud-based BI are available for use with a wide assortment of business applications. Self-service analytics permits end users to easily analyze their data by creating their own reports and modifying existing ones without the need for training.


Conway's Law, DDD, and Microservices

In Domain-Driven Design, the idea of a bounded context is used to provide a level of encapsulation to a system. Within that context, a certain set of assumptions, ubiquitous language, and a particular model all apply. Outside of it, other assumptions may be in place. For obvious reasons, it's recommended that there be a correlation between teams and bounded contexts, since otherwise it's very easy to break the encapsulation and apply the wrong assumptions, language, or model to a given context. Microservices are focused, independently deployable units of functionality within an organization or system. They map very well to bounded contexts, which is one reason why DDD is frequently applied to them. In order to be truly independent from other parts of the system, a microservice should have its own build pipeline, its own data storage infrastructure, etc. In many organizations, a given microservice has a dedicated team responsible for it (and frequently for others as well). It would be unusual, and probably inefficient, to have a microservice that any number of different teams all share responsibility for maintaining and deploying.
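
As a rough sketch of how a bounded context keeps its own model and language, the toy example below shows two contexts that describe "the same" customer differently; the class and field names are invented for illustration and are not from the article.

```python
# Illustrative sketch: two bounded contexts model "the same" customer
# differently. Each context would live in its own independently deployable
# microservice with its own storage and its own owning team.
from dataclasses import dataclass

# --- Billing context: cares about payment details ---
@dataclass
class BillingCustomer:
    customer_id: str
    payment_method: str
    outstanding_balance: float

# --- Shipping context: cares about delivery details ---
@dataclass
class ShippingCustomer:
    customer_id: str
    delivery_address: str
    preferred_carrier: str

# The contexts share only an identifier; each team owns its own model,
# assumptions, and language, preserving the encapsulation described above.
def belong_to_same_customer(billing: BillingCustomer, shipping: ShippingCustomer) -> bool:
    return billing.customer_id == shipping.customer_id
```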


Cybersecurity at a crossroads: Moving toward trust in our technologies

Many of the foundational protocols and applications simply assumed trust; tools we take for granted like email were designed for smaller networks in which participants literally knew each other personally. To address attacks on these tools, measures like encryption, complex passwords, and other security-focused technologies were applied, but that didn't address the fundamental issue of trust. All the complex passwords, training, and encryption technologies in the universe won't prevent a harried executive from clicking on a link in an email that looks legitimate enough, unless we train that executive to no longer trust anything in their inbox, which compromises the utility of email as a business tool. If we're going to continue to use these core technologies in our personal and business lives, we as technology leaders need to shift our focus from a security arms race, which is easily defeated by fallible humans, to incorporating trust into our technology. Incorporating trust makes good business sense at a basic level; I'd happily pay a bit extra for a home security device that I trust not to be mining bitcoin or sending images to hackers in a distant land, just as businesses who've seen the very real costs of ransomware would happily pay for an ability to quickly identify untrusted actors.


Deep Fake: Setting the Stage for Next-Gen Social Engineering

In order to safeguard against BEC, we often advise our clients to validate the suspicious request by obtaining second-level validations, such as picking up the phone and calling the solicitor directly. Other means of digital communications—cellular text or instant messaging—can be utilized to ensure the validity of the transaction and are highly recommended. These additional validation measures would normally be enough to thwart scams. As organizations start to elevate security awareness amongst their user community, these types of tricks are becoming less effective. But threat actors are also evolving their strategy and are finding new and novel ways of improving their chances for success. This scenario might seem far-fetched or highly fictionalized, but an attack of this sophistication was executed successfully last year. Could deep fake be utilized to enhance a BEC scam? What if threat actors could gain the ability to synthesize the voice of the company's CEO? The scam was initially executed utilizing the synthesized voice of a company's executive, demanding that the person on the other end of the line pay an overdue invoice.


Did Your Last DevOps Strategy Fail? Try Again

Don’t perform a shotgun wedding between ops and dev. Administrators and developers are drawn to their technology foci for personal reasons and interests. One of the most cited reasons for unsuccessful DevOps plans is a directive to homogenize the team, followed by shock this didn’t work. Developers are attracted to and rewarded for innovation and building new things, while admins take pride in finding ways to migrate the mission-critical apps everyone forgets about onto new hosting platforms. They’re complementary, integrable engineers, but they’re not interchangeable cogs. Contrary to popular opinion, telling developers they’re going to carry a pager for escalation doesn’t magically improve code quality and can slow innovation. They may even quit, even in this chaotic economy. And telling ops they need to learn code patterns, git merge and dev toolchains will be an unwelcome distraction not related to keeping their business running or meeting their personal review goals. They also may quit. It might be helpful to share with your team the simple idea you embrace: Unfounded stories of friction between dev and ops aren’t about the teams.


What is IPv6, and why aren’t we there yet?

Adoption of IPv6 has been delayed in part due to network address translation (NAT), which takes private IP addresses and turns them into public IP addresses. That way a corporate machine with a private IP address can send to and receive packets from machines located outside the private network that have public IP addresses. Without NAT, large corporations with thousands or tens of thousands of computers would devour enormous quantities of public IPv4 addresses if they wanted to communicate with the outside world. But those IPv4 addresses are limited and nearing exhaustion to the point of having to be rationed. NAT helps alleviate the problem. With NAT, thousands of privately addressed computers can be presented to the public internet by a NAT machine such as a firewall or router. The way NAT works is when a corporate computer with a private IP address sends a packet to a public IP address outside the corporate network, it first goes to the NAT device. The NAT notes the packet’s source and destination addresses in a translation table. The NAT changes the source address of the packet to the public-facing address of the NAT device and sends it along to the external destination.
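
The translation-table behaviour described above can be illustrated with a small toy sketch; the addresses and ports are made up, and real NAT devices also track protocols, port pairs, and timeouts.

```python
# Toy illustration of the NAT behaviour described above: the NAT device
# rewrites the private source address and records the mapping so that
# replies can be routed back to the right internal host.
nat_public_ip = "203.0.113.10"
translation_table = {}   # public_port -> (private_ip, private_port)
next_public_port = 40000

def outbound(private_ip, private_port, dest_ip, dest_port):
    """Translate an outgoing packet and remember the mapping."""
    global next_public_port
    public_port = next_public_port
    next_public_port += 1
    translation_table[public_port] = (private_ip, private_port)
    # The packet now appears to come from the NAT's public address
    return (nat_public_ip, public_port, dest_ip, dest_port)

def inbound(public_port):
    """Use the table to deliver a reply to the right private host."""
    return translation_table.get(public_port)

pkt = outbound("10.0.0.5", 51515, "198.51.100.7", 443)
print("rewritten packet:", pkt)
print("reply delivered to:", inbound(pkt[1]))
```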


Five DevOps lessons: Kubernetes to scale secure access control

Failure is a very real factor when trying to transform from a virtual and bare metal server farm to a distributed cluster, so determine how your services can scale and communicate if you’re geographically separating your data and customers. Clusters operate differently at scale than your traditional server farms, and containers have a completely different security paradigm than your average virtualised application stack. Be prepared to tweak your cluster layouts and namespaces as you begin your designs and trials. Become agile with Infrastructure as Code (IaC), and be willing to build multiple proofs-of-concept when deploying. Tests can take hours, and teardown and standup can be painful when making micro-tweaks along the way. If you do this, you will head off larger scaling problems and have a good base for faster and larger scale. My advice is to keep your core components close and design for relay points or services when attempting to port into containers, or into multi-cluster designs. ... Sidecar design patterns, although wonderful conceptually, can either go incredibly right or horribly wrong. Kubernetes sidecars provide non-intrusive capabilities, such as reacting to Kubernetes API calls, setting up config files, or filtering data from the main containers.


A new IT landscape empowers the CIO to mix and match

Platforms like Zapier or Integromat that deliver off-the-shelf integrations for hundreds of popular IT applications, as well as integration platforms-as-a-service (iPaaS) like Jitterbit, Outsystems, or TIBCO Cloud Integration that make it easy for IT -- or even citizen developers -- to quickly remix apps and data into new solutions, have dramatically changed the art of the possible in IT. So, at least technically, creating new high-value digital experiences out of existing IT is now not just possible, but can be made commonplace. The rest has become a vendor management, product skillset, and management/governance issue. The major industry achievements of ease-of-integration and ready IT mix-and-match must go up against the giants in the industry who have very entrenched relationships with IT departments today. That's not to say that CIOs aren't avidly interested in avoiding vendor lock-in, accelerating customer delivery, bringing more choice to their stakeholders, satisfying needs more precisely and exactly than ever before, or becoming more relevant again in general as IT is increasingly competing directly with external SaaS offerings, outside service providers, and enterprise app stores, to name just three capable IT sourcing alternatives to lines of business.


Reaping Benefits Of Data Democratization Through Data Governance

We characterize the integration of data democratization with data governance as an all-encompassing approach to managing information that spans the governance groups and all information stakeholders, as well as the strategies and rules they make and the metrics they measure accomplishment by. Governed data democratization permits you to clearly understand your data set and to connect all the policies and controls that apply to it. Governed data democratization is how you set up the necessary privacy strategies to guarantee that you maintain consumer loyalty while ensuring that your organization is strictly in compliance with both external regulatory mandates and internal security protocols. Furthermore, it’s on this foundation of data governance that you deliver the correct information to the correct customers with the right quality and the right level of trust. An intelligent, integrated, and efficient data governance strategy scales your company’s capacity to quickly and cost-effectively expand data management by combining the data governance workflow with a data democratization system that incorporates self-service capabilities.


How chatbots are making banking extra conversational

AI isn’t a new idea, of course, but its uptake within the banking industry has been accelerated by awareness of the need to improve digital experiences and by the availability of open-source tools from the likes of Google, Amazon, and other new entrants, which — when combined with plenty of customer and business data — have made the technology simple, fast and powerful. Like any other enterprise, banks are under pressure to move quickly with technology or lose out to hungrier, more ambitious rivals and aggressive new kids on the block. With Gartner predicting that customers will manage 85% of their relationships with an enterprise without interacting with a human, and TechEmergence believing chatbots will become the primary consumer application within the next five years, conversational AI is now a serious focus. And while digitization has been taking place in banking for decades, keeping pace with customers’ expectations for fast, convenient, secure services that can be accessed from anywhere on any device is no mean feat, especially as society barrels closer to a cashless future by the day.



Quote for the day:

"You never change things by fighting the existing reality. build a new model that makes the existing model obsolete." -- Buckminster Fuller

Daily Tech Digest - August 26, 2020

New Zealand stock exchange hit by cyber attack for second day

The incident follows a number of alleged cyber attacks by foreign actors, such as the targeting of a range of government and private-sector organisations in Australia. In a statement earlier on Wednesday, the NZX blamed Tuesday’s attack on overseas hackers, saying that it had “experienced a volumetric DDoS attack from offshore via its network service provider, which impacted NZX network connectivity”. It said the attack had affected NZX websites and the markets announcement platform, causing it to call a trading halt at 3.57pm. It said the attack had been “mitigated” and that normal market operations would resume on Wednesday, but this subsequent attack has raised questions about security. A DDoS attack aims to overload traffic to internet sites by infecting large numbers of computers with malware that bombards the targeted site with requests for access. Prof Dave Parry, of the computer science department at Auckland University of Technology, said it was a “very serious attack” on New Zealand’s critical infrastructure. He warned that it showed a “rare” level of sophistication and determination, and also flagged security issues possibly caused by so many people working from home.


Disruption Happens With What You Don't Know You Don't Know

"There are things we know we don't know, and there are things we don't know we don't know." And what I'm trying to explain is the practitioners point of view. Now, when I come to you and I tell you, you know what, "You don't know this, Peter. You don't know this. And if you do this, you would be making a lot of money." You will say, "Who are you to tell me that?" So I need to build confidence first. So the first part of the discussion starts from telling you what you already know. So when you do use the data, the idea is — and to create a report, and that's what reports are for. Look at how organizations make decisions — what they do is they get a report and they take a decision on that report. But 95% of the time, I know that people who are making that decision or are reading that report know the answer in the report. That's why they're comfortable with the report, right? So let's look at a board meeting where the board has a hunch that this quarter they're going to make 25% increase in their sales. They have the hunch. Now, that is where they're going to get a report which will save 24% or 29%, it will be in the ballpark range. So there's no unknown. But if I'm only telling you what you already know, 


How the Coronavirus Pandemic Is Breaking Artificial Intelligence and How to Fix It

When trained on huge data sets, machine learning algorithms often ferret out subtle correlations between data points that would have gone unnoticed to human analysts. These patterns enable them to make forecasts and predictions that are useful most of the time for their designated purpose, even if they’re not always logical. For instance, a machine-learning algorithm that predicts customer behavior might discover that people who eat out at restaurants more often are more likely to shop at a particular kind of grocery store, or maybe customers who shop online a lot are more likely to buy certain brands. “All of those correlations between different variables of the economy are ripe for use by machine learning models, which can leverage them to make better predictions. But those correlations can be ephemeral, and highly context-dependent,” David Cox, IBM director at the MIT-IBM Watson AI Lab, told Gizmodo. “What happens when the ground conditions change, as they just did globally when covid-19 hit? Customer behavior has radically changed, and many of those old correlations no longer hold. How often you eat out no longer predicts where you’ll buy groceries, because dramatically fewer people eat out.”
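
One simple, hedged way to notice that such correlations have broken is to test whether a feature's current distribution still matches the data the model was trained on; the sketch below uses a two-sample Kolmogorov-Smirnov test on invented numbers, purely for illustration.

```python
# Illustrative sketch: checking whether a feature's distribution has shifted
# from the training data, one symptom of the broken correlations described
# above. The numbers are invented for the example.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
pre_covid_dining_out = rng.normal(loc=8, scale=2, size=1000)  # visits/month
current_dining_out = rng.normal(loc=1, scale=1, size=1000)

stat, p_value = ks_2samp(pre_covid_dining_out, current_dining_out)
if p_value < 0.01:
    print("Feature distribution has shifted; retrain or re-validate the model.")
```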


How the edge and the cloud tackle latency, security and bandwidth issues

With the rise of IoT, edge computing is rapidly gaining popularity as it solves the issues the IoT has when interacting with the cloud. If you picture all your smart devices in a circle, the cloud is centralised in the middle of them; edge computing happens on the edge of that cloud. Literally referring to geographic location, edge computing happens much nearer a device or business, whatever ‘thing’ is transmitting the data. These computing resources are decentralised from data centres; they are on the ‘edge’, and it is here that the data gets processed. With edge computing, data is scrutinised and analysed at the site of production, with only relevant data being sent to the cloud for storage. This means much less data is being sent to the cloud, reducing bandwidth use; privacy and security breaches would have to occur at the site of the device itself, making ‘hacking’ a device much harder; and the speed of interaction with data increases dramatically. While edge and cloud computing are often seen as mutually exclusive approaches, larger IoT projects frequently require a combination of both. Take driverless cars as an example.
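
A minimal sketch of that edge pattern might look like the following; the temperature threshold and the send_to_cloud stub are assumptions standing in for a real device and cloud API.

```python
# Illustrative sketch of the edge pattern described above: analyse readings
# at the device and forward only the relevant ones to the cloud.
def send_to_cloud(reading):
    print("uploading:", reading)   # stand-in for a real cloud API call

TEMPERATURE_ALERT_C = 80.0  # assumed threshold for "relevant" data

def process_at_edge(sensor_readings):
    for reading in sensor_readings:
        # Most readings are handled (and discarded) locally...
        if reading["temperature_c"] >= TEMPERATURE_ALERT_C:
            # ...only the relevant ones consume bandwidth to the cloud.
            send_to_cloud(reading)

process_at_edge([
    {"sensor": "motor-1", "temperature_c": 41.5},
    {"sensor": "motor-1", "temperature_c": 86.2},
])
```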


Basing Enterprise Architecture on Business Strategy: 4 Lessons for Architects

Analogous ideas regarding the primacy of the business strategy are also expressed by other authors, who argue that EA and IT planning efforts in organizations should stem directly from the business strategy. Bernard states that “the idea of Enterprise Architecture is that of integrating strategy, business, and technology”. Parker and Brooks argue that the business strategy and EA are interrelated so closely that they represent “the chicken or the egg” dilemma. These views are supported by Gartner, whose analysts explicitly define EA as “the process of translating business vision and strategy into effective enterprise change”. Moreover, Gartner analysts argue that “the strategy analysis is the foundation of the EA effort” and propose six best practices to align EA with the business strategy. Unsurprisingly, similar views are also shared by academic researchers, who analyze the integration between the business strategy and EA, and the modeling of the business strategy in the EA context. To summarize, in the existing EA literature the business strategy is widely considered the necessary basis for EA, and for many authors the very concepts of business strategy and EA are inextricably coupled.


Building Effective Microservices with gRPC, Ballerina, and Go

In modern microservice architecture, we can categorize microservices into two main groups based on their interaction and communication. The first group of microservices acts as external-facing microservices, which are directly exposed to consumers. They are mainly HTTP-based APIs that use conventional text-based messaging payloads (JSON, XML, etc.) that are optimized for external developers, and use Representational State Transfer (REST) as the de facto communication technology.  REST’s ubiquity and rich ecosystem play a vital role in the success of these external-facing microservices. OpenAPI provides well-defined specifications for describing, producing, consuming, and visualizing these REST APIs. API management systems work well with these APIs and provide security, rate limiting, caching, and monetizing along with business requirements. GraphQL can be an alternative for the HTTP-based REST APIs but it is out of scope for this article. The other group of microservices are internal and don’t communicate with external systems or external developers. These microservices interact with each other to complete a given set of tasks. Internal microservices use either synchronous or asynchronous communication.


Lessons learned after migrating 25+ projects to .NET Core

One thing that you need to be aware of when jumping from .NET Framework to .NET Core is a faster roll-out of new versions. That includes shorter support intervals too. With .NET Framework, 10 years of support wasn't unheard of, whereas with .NET Core a 3-year interval seems to be the norm. Also, when picking which version of .NET Core you want to target, you need to look into the support level of each version. Microsoft marks certain versions with long-term support (LTS), which is around 3 years, while others are in-between versions. Stable, but still versions with a shorter support period. Overall, these changes require you to update the .NET Core version more often than you have been used to, or accept running on an unsupported framework version. ... The upgrade path isn't exactly straightforward. There might be some tools to help with this, but I ended up migrating everything by hand. For each website, I took a copy of the entire repo. Then I deleted all files in the working folder and created a new ASP.NET Core MVC project. I then ported each thing one by one, starting with copying in controllers, views, and models and running some global search-and-replace patterns to make it compile.


Changing How We Think About Change

Many of the most impressive and successful corporate pivots of the past decade have taken the form of changes of activity — continuing with the same strategic path but fundamentally changing the activities used to pursue it. Think Netflix transitioning from a DVD-by-mail business to a streaming service; Adobe and Microsoft moving from software sales models to monthly subscription businesses; Walmart evolving from physical retail to omnichannel retail; and Amazon expanding into physical retailing with its Whole Foods acquisition and launch of Amazon Go. Further confusing the situation for decision makers is the ill-defined relationship between innovation and change. Most media commentary focuses on one specific form of innovation: disruptive innovation, in which the functioning of an entire industry is changed through the use of next-generation technologies or a new combination of existing technologies. (For example, the integration of GPS, smartphones, and electronic payment systems — all established technologies — made the sharing economy possible.) In reality, the most common form of innovation announced by public companies is digital transformation initiatives designed to enhance execution of the existing strategy by replacing manual and analog processes with digital ones.


More than regulation – how PSD2 will be a key driving force for an Open Banking future

A crucial factor standing in the way of the acceleration towards Open Banking has been the delay to API development. These APIs are the technology that TPPs rely on to migrate their services and customer base to remain PSD2 compliant. One of the contributing factors was that the RTS, which apply to PSD2, left room for too many different interpretations. This ambiguity caused banks to slip behind and delay the creation of their APIs. This delay hindered European TPPs in migrating their services without losing their customer base, particularly outside the UK, where there has been no regulatory extension and where the API framework is the least advanced. Levels of awareness of the new regulations and changes to how customers access bank accounts and make online payments are very low among consumers and merchants. This leads to confusion and distrust of the authentication process in advance of the SCA roll-out. Moreover, because the majority of customers don’t know about Open Banking yet, they aren’t aware of the benefits. Without customer awareness and demand it may be very hard for TPPs to generate interest and uptake for their products.


Election Security's Sticky Problem: Attackers Who Don't Attack Votes

One of the lessons drawn by both sides was how inexpensive it was for the red team to have an impact on the election process. There was no need to "spend" a zero-day or invest in novel exploits. Manipulating social media is a known tactic today, while robocalls are cheap-to-free. Countering the red team's tactics relied on coordination between the various government authorities and ensuring communication redundancy between agencies. Anticipating disinformation plans that might lead to unrest also worked well for the blue team, as red team efforts to bring violence to polling places were put down before they bore fruit. The red team also tried to interfere with voting by mail; they hacked a major online retailer to send more packages through the USPS than normal, and used label printers to put bar codes with instructions for resetting sorting machines on a small percentage of those packages. While there was some slowdown, there was no significant disruption of the mail around the election.



Quote for the day:

"It's hard to get the big picture when you have a small frame of reference." -- Joshing Stern

Daily Tech Digest - August 25, 2020

When Your Heartbeat Becomes Data: Benefits and Risk of Biometrics

We haven’t even discussed the abilities to detect your walking patterns (already being used by some police agencies), monitor scents, track microbial cells or identify you from your body shape. More and more organizations are looking for contactless methods to authenticate, especially relevant today. What all these biometrics technologies have in common is that they use some combination of physiological and behavioral methods to make sure you are you. There are certain things people just can’t fake. You can’t fake a heartbeat, which is as unique as a retinal scan or fingerprint. You can’t easily fake how you walk. Even your typing and writing styles give off a distinct and unique signature. ... Some of the best innovators are threat actors. They may not be able to replicate your heartbeat today, but what about tomorrow? The not-too-distant future could include a “Mission: Impossible“ scenario with 3D printers that generate a ‘body suit’ (think wetsuit) that can have a simulated heartbeat uploaded into it. This all may sound like science fiction right now, but not too long ago, wouldn’t it have seemed silly to think that your heartbeat could be identified through clothes using a laser from over 200 yards away?


What skills should modern IT professionals prioritise?

Though technical skills, like those accompanying cyber security and emerging tech are a focus, IT professionals are coming to realise that non-technical skills are a critical element of their career development and IT management. When asked which of these were most important, IT pros listed project management (69%), interpersonal communication (57%), and people management (53%). According to the LinkedIn 2020 Emerging Jobs Report, the demand for soft skills like communication, collaboration, and creativity will continue to rise across the SaaS industry. Despite the budget and skills issues IT professionals report, 53% of those surveyed said they’re comfortable communicating with business leadership when requesting technology purchases, investing time/budget into team trainings, and the like. Though developing tech skills is often informed by current areas of expertise, the 2020 IT Trends Report reveals strong IT performance is about more than IT skills. Interpersonal skills are commonly referred to as “soft skills”, which is misleading. They rank highly in overall importance, meaning soft skills aren’t optional. They’re human skills — everyone needs to relate to other people and speak in a way they can understand. My advice in this area would be to find a mentor, someone on your team who can help you learn. Practice your communication skills and try your hand at new specialties like project management.


Predictive analytics vs. AI: Why the difference matters

Fast forward to today. Within the information governance space, there are two terms that have been used quite frequently in recent years: analytics and AI. Often they are used interchangeably and are practically synonymous. Organizations—as well as the software vendors that supply their needs—have largely tapped analytics to provide deeper information beyond basic indexed searching, which typically involves applying Boolean logic to keywords, date ranges, and data types. Search concepts have expanded to filter out application-specific metadata (e.g., parsing mail distribution lists, application login time, login/logout/idle times in chat and collaborative rooms, etc.). Today's search also includes advanced capabilities such as stemming and lemmatization—methods for matching queries with different forms of words—and proximity search, allowing searchers to find the elusive needle in the haystack. The latest whiz-bang features that are all the buzz within the information governance space are analytics (or predictive analytics) and AI (or artificial intelligence/machine learning). These are here to stay, and we are just beginning to scratch the surface of their many uses.
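
As a small illustration of stemming and lemmatization, the sketch below uses NLTK (one assumed choice of library); the words chosen are arbitrary examples, and the WordNet download is a one-time setup step.

```python
# Minimal sketch of the search concepts mentioned above: stemming and
# lemmatization reduce different forms of a word to a common form, so a
# query for "running" can also match "runs" or "ran".
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet", quiet=True)  # required once for the lemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["running", "ran", "runs", "better"]:
    print(word,
          "-> stem:", stemmer.stem(word),
          "| lemma:", lemmatizer.lemmatize(word, pos="v"))
```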


Too many AI researchers think real-world problems are not relevant

New machine-learning models are measured against large, curated data sets that lack noise and have well-defined, explicitly labeled categories (cat, dog, bird). Deep learning does well for these problems because it assumes a largely stable world. But in the real world, these categories are constantly changing over time or according to geographic and cultural context. Unfortunately, the response has not been to develop new methods that address the difficulties of real-world data; rather, there’s been a push for applications researchers to create their own benchmark data sets. The goal of these efforts is essentially to squeeze real-world problems into the paradigm that other machine-learning researchers use to measure performance. But the domain-specific data sets are likely to be no better than existing versions at representing real-world scenarios. The results could do more harm than good. People who might have been helped by these researchers’ work will become disillusioned by technologies that perform poorly when it matters most. Because of the field’s misguided priorities, people who are trying to solve the world’s biggest challenges are not benefiting as much as they could from AI’s very real promise.


Foundations of Deep Learning!!!

One may ask what is the difference between ANN and DL. The name Artificial Neural Network is inspired by a rough comparison of its architecture with the human brain. Although some of the central concepts in ANNs were developed in part by drawing inspiration from our understanding of the brain, ANN models are not models of the brain. In reality, there is no great similarity between an ANN's method of operation and the human brain, its neurons, its synapses and its modus operandi. However, since an ANN is a consolidation of one or more layers of neurons that help in solving perceptual problems - which are rooted in human intuition - the name fits well. An ANN essentially is a structure consisting of multiple layers of processing units (i.e. neurons) that take input data and process it through successive layers to derive meaningful representations. The word deep in Deep Learning stands for this idea of successive layers of representation. How many layers contribute to a model of the data is called the depth of the model. The diagram below illustrates the structure: a simple ANN with only one hidden layer and a deep neural network (DNN) with multiple hidden layers.
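
A minimal Keras sketch (the layer sizes and input dimension are arbitrary assumptions) makes the same contrast between a shallow ANN and a deeper model in code.

```python
# Minimal sketch contrasting a shallow ANN (one hidden layer) with a deeper
# network. Layer sizes and the input dimension are illustrative only.
from tensorflow import keras
from tensorflow.keras import layers

# "Shallow" ANN: a single hidden layer of processing units (neurons)
shallow = keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(20,)),
    layers.Dense(1, activation="sigmoid"),
])

# Deep network: depth = number of successive layers of representation
deep = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

shallow.summary()
deep.summary()
```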


COVID-19 Data Compromised in 'BlueLeaks' Incident

The Department of Homeland Security on June 29 issued an alert about the "BlueLeaks" hacking of Netsential, saying a criminal hacker group called Distributed Denial of Secrets - also known as "DDS" and "DDoSecrets" - on June 19 "conducted a hack-and-leak operation targeting federal, state, and local law enforcement databases, probably in support of or in response to nationwide protests stemming from the death of George Floyd." The hacking group leaked 10 years of data from 200 police departments, fusion centers and other law enforcement training and support resources around the globe, the DHS alert noted. The 269 GB data dump was posted on June 19 to DDoSecrets' site, the hacking group said in a tweet that has since been removed. The data came from a wide variety of law enforcement sources and included personally identifiable information and data concerning ongoing cases, DDoSecrets claimed in a tweet. Several days after DDoSecrets revealed the law enforcement information through its Twitter account in June, the social media platform permanently removed the DDoSecrets account, citing Twitter rules concerning posting stolen data.


Top exploits used by ransomware gangs are VPN bugs, but RDP still reigns supreme

At the top of this list, we have the Remote Desktop Protocol (RDP). Reports from Coveware, Emsisoft, and Recorded Future clearly put RDP as the most popular intrusion vector and the source of most ransomware incidents in 2020. "Today, RDP is regarded as the single biggest attack vector for ransomware," cyber-security firm Emsisoft said last month, as part of a guide on securing RDP endpoints against ransomware gangs. Statistics from Coveware, a company that provides ransomware incident response and ransom negotiation services, also support this assessment, with the company firmly ranking RDP as the most popular entry point for the ransomware incidents it investigated this year. ... RDP has been the top intrusion vector for ransomware gangs since last year, when ransomware gangs stopped targeting home consumers and moved en masse towards targeting companies instead. RDP is today's top technology for connecting to remote systems, and there are millions of computers with RDP ports exposed online, which makes RDP a huge attack vector for all sorts of cyber-criminals, not just ransomware gangs.
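
For defenders, a first step is simply auditing which of their own hosts answer on the RDP port; the sketch below uses placeholder addresses and should only ever be pointed at systems you own.

```python
# Illustrative sketch: checking your own hosts for an exposed RDP port (3389).
# Host addresses are placeholders; scan only systems you are authorized to test.
import socket

HOSTS_TO_AUDIT = ["203.0.113.15", "203.0.113.16"]  # example addresses
RDP_PORT = 3389

for host in HOSTS_TO_AUDIT:
    try:
        with socket.create_connection((host, RDP_PORT), timeout=2):
            print(f"{host}: RDP port reachable from this network -- review exposure")
    except OSError:
        print(f"{host}: RDP port not reachable")
```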


Shoring Up the 2020 Election: Secure Vote Tallies Aren’t the Problem

“When looking at the ecosystem of election security, political campaigns can be soft targets for cyberattacks due to the inability to dedicate resources to sophisticated cybersecurity protections,” Woolbright said. “Campaigns are typically short-term, cash strapped operations that do not have an IT staff or budget necessary to promote long-term security strategies.” For state and local governments, constituents are accessing online information about voting processes and polling stations in noticeably larger numbers of late – Cloudflare said that it has seen increases in traffic ranging from two to three times the normal volume of requests since April. So perhaps it’s no coincidence that the firm found that government election-related sites are experiencing more attempts to exploit security vulnerabilities, with 122,475 such threats coming in per day (including an average of 199 SQL injection attempts per day bent on harvesting information from site visitors). “We believe there are a wide range of factors for traffic spikes including, but not limited to, states expanding vote-by-mail initiatives and voter registration deadlines due to emergency orders by 53 states and territories throughout the United States,” Woolbright said.


iRobot launches robot intelligence platform, new app, aims for quarterly updates

"We were focused on the idea that autonomous was the same as intelligence," said Angle. "We were told that wasn't intelligent and customers wanted collaboration." The COVID-19 pandemic pushed the collaboration theme with customers and robots because there was no choice. People are home more than ever so more cleaning coordination is needed. Meanwhile, iRobot found customers were home more yet had less time to clean. More time at home also meant more messes. Indeed, iRobot has seen strong demand during the COVID-19 pandemic. The company saw premium robot sales jump 43% in the second quarter with strong performance across its international business. Roomba i7 Series, s9 Series, and Braava jet m6 also performed well. For the second quarter, iRobot delivered revenue of $279.9 million, up 8% from a year ago. First-half revenue for 2020 was $472.4 million. iRobot reported second quarter earnings of $2.07 a share. Julie Zeiler, CFO of iRobot, said that Roomba was 90% of the product mix in the second quarter and the company's e-commerce business performed well.


Google Engineers 'Mutate' AI to Make It Evolve Systems Faster Than We Can Code Them

Using a simple three-step process - setup, predict and learn - it can be thought of as machine learning from scratch. The system starts off with a selection of 100 algorithms made by randomly combining simple mathematical operations. A sophisticated trial-and-error process then identifies the best performers, which are retained - with some tweaks - for another round of trials. In other words, the neural network is mutating as it goes. When new code is produced, it's tested on AI tasks - like spotting the difference between a picture of a truck and a picture of a dog - and the best-performing algorithms are then kept for future iteration. Like survival of the fittest. And it's fast too: the researchers reckon up to 10,000 possible algorithms can be searched through per second per processor (the more computer processors available for the task, the quicker it can work). Eventually, this should see artificial intelligence systems become more widely used, and easier to access for programmers with no AI expertise. It might even help us eradicate human bias from AI, because humans are barely involved. Work to improve AutoML-Zero continues, with the hope that it'll eventually be able to spit out algorithms that mere human programmers would never have thought of.
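
The loop described above - generate random programs from simple operations, score them on a task, then keep and mutate the best - can be caricatured in a few lines; this toy sketch is a heavy simplification for illustration and is not the AutoML-Zero code itself.

```python
# Toy sketch of an evolutionary search over random programs built from
# simple math operations: evaluate, keep the best performers, mutate, repeat.
import random

OPS = [lambda x: x + 1, lambda x: x - 1, lambda x: x * 2, lambda x: x / 2]

def random_program(length=4):
    return [random.choice(OPS) for _ in range(length)]

def run(program, x):
    for op in program:
        x = op(x)
    return x

def fitness(program):
    # Toy task: approximate f(x) = 4x + 2 on a few sample points
    samples = [1.0, 2.0, 3.0]
    return -sum(abs(run(program, x) - (4 * x + 2)) for x in samples)

population = [random_program() for _ in range(100)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]                      # keep the best performers
    children = []
    for parent in survivors:
        child = parent[:]
        child[random.randrange(len(child))] = random.choice(OPS)  # mutate one op
        children.append(child)
    population = survivors + children + [random_program() for _ in range(60)]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```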



Quote for the day:

"Luck is what happens when preparation meets opportunity." -- Darrell Royal

Daily Tech Digest - August 24, 2020

What’s New In Gartner’s Hype Cycle For Emerging Technologies, 2020

Gartner believes that Composite AI will be an enabling technology for organizations that don’t have access to large historical data sets or AI expertise in-house to complete complex analyses. Second, Gartner believes that Composite AI will help expand the scope and quality of AI applications. Early leaders in this area include ACTICO, Beyond Limits, BlackSwan Technologies, Cognite, Exponential AI, FICO, IBM, Indico, Petuum and ReactiveCore. ... The goal of Responsible AI is to streamline how organizations put responsible practices in place to ensure positive AI development and use. One of the most urgent use cases of Responsible AI is identifying and stopping the production of “deep fakes” globally. Gartner defines the category with use cases that involve improving business and societal value, reducing risk, increasing trust and transparency, and bias mitigation with AI. Of the new AI-based additions to the Hype Cycle this year, this is one that leads all others on its potential to use AI for good. Gartner believes responsible AI also needs to increase the explainability, accountability, safety, privacy and regulatory compliance of organizations as well.


How to ensure CIO and CMO alignment when making technology investment decisions

Often, total cost of ownership (TCO) for handling the complexity, maintenance and technical debt of new platforms can turn out to be a real burden for organisations. In fact, according to Gartner, more than three-quarters of organisations found the technology buying process complex or difficult. But is this really surprising? Implementing the right technology solution for the business is often challenging due to the different priorities that CMOs and CIOs have. While for the CMO the priority is to adopt the latest innovations as soon as possible in order to stay ahead of the competition, this need has to fit the CIO’s focus on TCO for the long term. The weight of these options is what drives a wedge between those key decision makers, creating a need to find common ground sooner. Being aligned is essential so that they can choose the right options which will allow marketing to execute on strategy and hit company targets on the one hand, and meet operational requirements for maintenance, governance and risk avoidance on the other, which are top of mind for the CIO. To ensure that the best options are selected for the business, the CMO’s priorities need to meet those of the CIO and vice versa.


Save-to-transform as a catalyst for embracing digital disruption

In this approach, businesses evolve through infrastructure investments in digital technologies. In turn, these technologies can deliver dramatic improvements in competitiveness, performance and operating efficiency. In response to the pandemic, the survey shows that organizations are evolving into a “Save-to-Thrive” mindset, in which they are accelerating strategic transformation actions specifically in response to challenges posed by COVID-19 to make shifts to their operating models, products and services and customer engagement capabilities. “The Save-to-Thrive framework will be essential to success in the next normal as companies rely on technology and digital enablement — with a renewed emphasis on talent — to improve their plans for strategic cost transformation and overall enterprise performance improvement,” said Omar Aguilar, principal and global strategic cost transformation leader, Deloitte Consulting. “Companies that react quickly and invest in technology and digital capabilities as they pursue the strategic levers of cost, growth, liquidity and talent will be best-positioned to succeed.”


How big data is solving future health challenges

Unlike many other data warehousing projects, Stringer said the focus is not just on collecting and using data if it has a specific quality level. Instead, when data is added to LifeCourse, its quality level is noted so researchers can decide for themselves if the data should or should not be used in their research. The GenV initiative relies on different technologies, but the two core pieces are the Informatica big data management platform and Zetaris. Informatica is used where traditional extract, transform and load (ETL) processes are needed because of its strong focus on usability. Stringer said this criterion was heavily weighted in the product selection process. Usability, he said, is a strong analogue for productivity. But with a dependence on external data sources and a need to integrate more data sources over the coming decades, Stringer said there needed to be a way to use new datasets wherever they resided. That was why Zetaris was chosen. Rather than rely on ETL processes, Stringer said the Zetaris platform lets GenV integrate data from sources where ETL is not viable.  


5 Key Capabilities of a Next-Gen Enterprise Architecture

Many enterprise architects look to rationalize and centralize emerging technologies, processes, and best practices, making them available to all business units in a self-service mode to accelerate digital transformation and modernization initiatives across the enterprise. By defining enterprise-wide technology standards and tools, enterprise architects strive to plan for reusability, reduce costs, and future-proof the architecture as technology changes, while enforcing data governance and privacy policies that democratize data so that trusted data travels securely throughout the enterprise in a frictionless, self-serve fashion. Traditional data management solutions for next-gen architectures are expensive, manual, and rely on time-consuming processes, while newer niche vendor solutions are fragmented. As such, they require extensive integration to stitch together end-to-end workstreams, leaving data consumers waiting months to get useful data. Therefore, a next-gen enterprise architecture must support the entire data pipeline, including the ability to ingest, stream, integrate, and cleanse data.
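
To make the pipeline stages concrete, here is a minimal, illustrative sketch (the records, stage names and rules are invented for the example) of chaining ingest, cleanse and integrate steps:

```python
from typing import Callable, Iterable, List

# Minimal sketch of a data pipeline covering some of the stages named above
# (ingest, cleanse, integrate); the records and rules are illustrative only.

Record = dict
Stage = Callable[[Iterable[Record]], Iterable[Record]]

def ingest() -> Iterable[Record]:
    # In practice this would read from files, APIs, or a stream.
    yield {"customer_id": " 42 ", "email": "A@EXAMPLE.COM"}
    yield {"customer_id": "43", "email": None}

def cleanse(records: Iterable[Record]) -> Iterable[Record]:
    for r in records:
        if r["email"] is None:
            continue  # drop records failing a basic completeness rule
        yield {"customer_id": r["customer_id"].strip(),
               "email": r["email"].lower()}

def integrate(records: Iterable[Record]) -> Iterable[Record]:
    # Join with a reference dataset (here a hard-coded lookup for illustration).
    segments = {"42": "retail"}
    for r in records:
        yield {**r, "segment": segments.get(r["customer_id"], "unknown")}

def run(stages: List[Stage], source: Iterable[Record]) -> List[Record]:
    data = source
    for stage in stages:
        data = stage(data)
    return list(data)

print(run([cleanse, integrate], ingest()))
```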


India's National Digital Health Mission: A New Model To Enhance Health Outcomes

The digital health platform that NDHM represents is guided by an architectural blueprint, the National Digital Health Blueprint (NDHB), developed a few months earlier. The NDHB gives structure to the thinking and approach: it establishes the vision and principles, architecture requirements and specifications, applicable standards and regulations, high-priority services, and institutional mechanisms needed to realize the mission of digital health. The NDHB is crafted to unlock enormous benefits for citizens, create new opportunities and financial, productivity, and transparency gains, and make a positive contribution to growth, innovation, and knowledge sharing. A digital platform with a national footprint evokes immediate pushback, as it is generally seen to steer the narrative towards centralization. The architecture deliberately and explicitly addresses this concern to ensure that India’s overall federated structure of governance is reflected in the architecture as well. In a large country like India, where there are multiple layers of government – national (central), state, local (urban), and local (rural) – responsibilities are distributed, and this is guaranteed by the constitution.


Data Governance Should Not Threaten Work Culture

The discipline of data governance must focus on knowing who these people are, helping them to make more actionable decisions, and empowering them to become better stewards. People who define data must know what it means to define data well, which includes providing meaningful business definitions and managing how often data is replicated across the organization. People who produce data must know what quality data looks like, and they must be evaluated on the quality of the data they produce. And then there is the obvious one: people in the organization who use the data must understand how to use it and follow the rules associated with using it appropriately. That means data consumers must follow the protection and privacy rules, the business rules, and use the data in the ethical manner spelled out by the organization. While people already define, produce, and use data, data governance requires that these people consistently follow the rules and standards for the actions they take with that data. The rules and the standards are important metadata, data about the data, that must be recorded and made available to people across the organization to support the discipline of data governance.
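
One way to picture the "rules as metadata" point is a small, hypothetical catalogue entry that records a business definition together with quality and usage rules for a data element; the field names and rules below are invented for illustration only:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch: governance rules recorded as metadata alongside a data
# element, so definers, producers, and consumers all see the same expectations.

@dataclass
class DataElement:
    name: str
    business_definition: str
    quality_rules: List[str] = field(default_factory=list)
    usage_rules: List[str] = field(default_factory=list)

catalog = [
    DataElement(
        name="customer_email",
        business_definition="Primary email address the customer has verified.",
        quality_rules=["must match email format", "must be re-verified within 90 days"],
        usage_rules=["no marketing use without consent flag", "mask in non-production systems"],
    )
]

# Making this metadata queryable is what lets people follow the rules consistently.
for element in catalog:
    print(element.name, "->", element.usage_rules)
```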


Defining a Data Governor

Without oversight, employees will misinterpret data, sensitive data may be shared inappropriately, employees will lack access to necessary data, and analyses will often be incorrect. A Data Governor will maintain and improve the quality of data and ensure your company is compliant with any regulations; it is a vital role for any informed company. With the exploding volume of data within companies, it has become extremely difficult for a small technical team to govern an entire organization’s data. As this trend continues, Data Scientists and Analysts should transition from their traditional reporting responsibilities to those of Data Governors. In a traditional reporting role, their day was filled with answering questions from various business groups about the metrics they needed. The shift to Data Governor finds them instead creating cleaned, documented data products for those business groups to explore themselves. This is called Democratized Data Governance, where the technical team (traditionally the data gatekeepers) handles the technical aspects of governance and shares the responsibilities of analytics with the end business groups.


Blockchain for Applications ~ A Multi-Industry Solution

The workings of blockchain are fairly common knowledge now: a decentralized network of interconnected nodes that shares all data among its peers, keeping a chronological log of each transaction. Simply put, “everything that happens in the blockchain network is shared by all members of the network, and everyone has a record of it on their individual device.” In this way, the blocks form a binding chain with each other, and this decentralized model of information storage frees the network from the risks and inefficiencies of having all data stored in one place only. ... DApps, or decentralized applications, function without any central server mediating between the parties. Blockchain users operate on mini-servers that work simultaneously to verify and exchange data. There are two kinds of blockchains, segregated on the basis of access and permissions: “permissionless” and “permissioned”. A permissionless network grants full transparency and allows each member to verify transaction details and interact with others while staying completely anonymous. Bitcoin works on a permissionless blockchain.
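
A minimal sketch of the linking idea described above, ignoring networking, consensus and permissions entirely, might look like this; the transactions and structure are illustrative only:

```python
import hashlib
import json
import time

# Illustrative sketch of how blocks chain together by hash. This is not a real
# network or consensus model, just the "binding link" idea from the text.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(prev: dict, transactions: list) -> dict:
    return {
        "index": prev["index"] + 1,
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": block_hash(prev),  # tampering with any earlier block breaks this link
    }

genesis = {"index": 0, "timestamp": 0, "transactions": [], "prev_hash": "0" * 64}
chain = [genesis]
chain.append(new_block(chain[-1], [{"from": "alice", "to": "bob", "amount": 5}]))
chain.append(new_block(chain[-1], [{"from": "bob", "to": "carol", "amount": 2}]))

# Every participant holding a copy of the log can re-verify the links independently.
valid = all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid:", valid)
```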


How to manage your edge infrastructure and devices

Another aspect to consider when managing edge infrastructure and devices is to invest in discovery processes. “Edge by nature creates a distributed approach – accelerated by the current global pandemic – that needs a more flexible style of management,” said David Shepherd, area vice-president, pre-sales EMEA at Ivanti. “But ultimately, if we don’t know what we are managing, then it becomes difficult to even start managing in a comprehensive manner. Effective discovery processes allow an organisation to apply the right management policies at the right time. As more devices start to appear at the edge, the context of the device plays a crucial role. This includes the type of device and the interaction it has with the infrastructure, plus its location (often remote). Understanding what a device is and how it interacts is again crucial to applying a comprehensive management approach. ... Zero-touch provisioning, for example, enables easier onboarding of IoT devices onto an IoT cloud platform, e.g. AWS, as it enables automatic provisioning and configuration. This prevents developer error during the provisioning and configuration process, as well as providing a more secure interaction between the device and platform, because the security framework has already been established on both ends during the pre-production stage.”
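
As a rough sketch of the discovery idea, the example below (device types, locations and policy names are invented, not any vendor's API) keeps a simple inventory and chooses a management policy from each device's context:

```python
from dataclasses import dataclass

# Hypothetical sketch: an inventory of edge devices, with a management policy
# selected from each device's context (type, location, last contact).

@dataclass
class EdgeDevice:
    device_id: str
    device_type: str   # e.g. "camera", "sensor", "gateway"
    location: str      # e.g. "remote-site-7"
    last_seen_days: int

def policy_for(device: EdgeDevice) -> str:
    if device.last_seen_days > 30:
        return "quarantine-and-investigate"
    if device.device_type == "gateway":
        return "full-patch-and-config-management"
    if device.location.startswith("remote"):
        return "lightweight-agentless-monitoring"
    return "standard-agent-management"

inventory = [
    EdgeDevice("cam-001", "camera", "remote-site-7", 2),
    EdgeDevice("gw-014", "gateway", "branch-office-3", 1),
    EdgeDevice("sens-203", "sensor", "warehouse-1", 45),
]

for d in inventory:
    print(d.device_id, "->", policy_for(d))
```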



Quote for the day:

"The hard part isn't making the decision. It's living with it." -- Jonas Cantrell

Daily Tech Digest - August 23, 2020

What we've lost in the push to agile software development, and how to get it back

Team members and business partners should not have to ask questions such as "what does that arrow mean?" "Is that a Java application?" or "is that a monolithic application or a set of microservices," he says. Rather, discussions should focus on the functions and services being delivered to the business. "The thing nobody talks about is you have to do design to get version 1," Brown says. "You have to put some foundations in place to give you a sufficient starting point to iterate, and evolve on top of. And that's what we're missing." Many software design teams keep upfront design to a minimum, assuming details will be fleshed out in an agile process as things move along. Brown says this is misplaced thinking, and design teams should incorporate more information into their upfront designs, including the type of technology and languages that are being proposed. "During my travels, I have been given every excuse you can possibly imagine for why teams should not do upfront design," he says. Some of his favorite excuses even include the question, "are we allowed to do upfront design?" Other responses include "we don't do upfront design because we do XP [extreme programming]," and "we're agile. It's not expected in agile."


Those who innovate, lead: the new normal for digital transformation

Even in normal times, IT departments struggled to meet their digital transformation goals as quickly as required. According to research, 59% of IT directors reported that they were unable to deliver all of their projects last year. Much of this is due to IT complexity and the challenges inherent in trying to integrate various data sources, applications and systems in an agile way that supports the goals of transformation. All too often, organizations rely on linking capabilities together with point-to-point integrations, which are inflexible and unsuited to the dynamism of modern IT environments. As a result, they find it hard to quickly launch innovative, customer-centric products and services, as they can’t bring together the capabilities that drive them in a cost- and time-effective manner. At the same time, it’s often the case that digital transformation is left largely to the IT department. IT teams – already stretched by their day-to-day maintenance responsibilities – are increasingly tasked with driving the entire organization forward, with limited support from other teams in the business. Understandably, this has led to a widening ‘delivery gap’ between what the business expects, and what IT is able to achieve.


Fileless worm builds cryptomining, backdoor-planting P2P botnet

A fileless worm dubbed FritzFrog has been found roping Linux-based devices – corporate servers, routers and IoT devices – with SSH servers into a P2P botnet whose apparent goal is to mine cryptocurrency. Simultaneously, though, the malware creates a backdoor on the infected machines, allowing attackers to access them at a later date even if the SSH password has been changed in the meantime. “When looking at the amount of code dedicated to the miner, compared with the P2P and the worm (‘cracker’) modules – we can confidently say that the attackers are much more interested in obtaining access to breached servers than in making a profit through Monero,” Guardicore Labs lead researcher Ophir Harpaz told Help Net Security. “This access and control over SSH servers can be worth much more money than spreading a cryptominer. Additionally, it is possible that FritzFrog is a P2P-infrastructure-as-a-service; since it is robust enough to run any executable file or script on victim machines, this botnet can potentially be sold in the darknet and be the genie of its operators, fulfilling any of their malicious wishes.”


Post-Pandemic Digitalization: Building a Human-Centric Cybersecurity Strategy

As leaders of a global business task force responsible for advising and providing recommendations on the future of digitalization to G20 Leaders, we are doubling down on our efforts to build cyber resilience, and we urge leaders to recognize the importance of cybersecurity resilience as a vital building block of our global economy. And we must be thoughtful in our future cyber approach. A human-centric, education-first strategy will protect organizations where they are most vulnerable and get us closer to the point where cybersecurity is ingrained in our daily life rather than an afterthought. Action through collaboration, one of our guiding principles as the voice of the private sector to the G20, is the only viable option. A public-private partnership built on cooperation among large corporations, MSMEs, academic institutions, and international governments is the cornerstone of a modern and resilient cybersecurity system. A few simple but powerful actions ingrained in a global cybersecurity strategy will bring our users into the new age of digital transformation and embed a security mindset into our day-to-day, making breach attempts significantly less successful.



Event Stream Processing: How Banks Can Overcome SQL and NoSQL Related Obstacles with Apache Kafka

Traditional relational databases that support SQL, along with NoSQL databases, present obstacles to the real-time data flows needed in financial services, but ultimately they remain useful to banks. Jackson says that databases are good at recording current state and allow banks to join and query that data. “However, they’re not really designed for storing the events that got you there. This is where Kafka comes in. If you want to move, create, join, process and reprocess events you really need event streaming technology. This is becoming critical in the financial services sector where context is everything – to customers, this can be anything from sharing alerts to let you know you’ve been paid or instantly sorting transactions into categories.” He goes on to say that Nationwide are starting to build applications around events, but in the meantime, technologies such as CDC and Kafka Connect, a tool that reliably streams data between Apache Kafka and other data systems, are helping to bridge older database technologies into the realm of events. Data caching technology can also play an important role in providing real-time data access for performance-critical, distributed applications in financial services, as it is a well-known and tested approach to dealing with spiky, unpredictable loads in a cost-effective and resilient way.
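
A minimal, illustrative producer sketch, assuming the kafka-python client and a locally reachable broker (the topic name and event shape are invented for the example), shows how an event such as "payment received" could be published for downstream alerting or transaction categorisation:

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Illustrative only: broker address, topic name and event fields are assumptions
# made for this sketch, not a specific bank's implementation.

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "type": "payment_received",
    "account_id": "12345678",
    "amount": 1850.00,
    "currency": "GBP",
}

# Keying by account keeps all of one account's events in order on a single partition,
# which downstream consumers (alerts, categorisation) typically rely on.
producer.send("account-events", key=b"12345678", value=event)
producer.flush()
```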


What is semantic interoperability in IoT and why is it important?

Semantic interoperability can today be enabled by declarative models and logic statements (semantic models) encoded in a formal vocabulary of some sort. The fundamental idea is that by providing these structured semantic models about a subsystem, other subsystems can, with the same mechanisms, get an unambiguous understanding of that subsystem. This unambiguous understanding is the cornerstone for other subsystems to confidently interact with (in other words, understand information from, as well as send commands to) the given subsystem to achieve some desired effect. It’s important to note that interoperability goes beyond data exchange formats or even explicit translation of information models between a producer and a consumer. It’s about the mechanisms that enable this to happen automatically, without specific programming. There should be no need for an integrator to review thick manuals in order to understand what is really meant by a particular piece of data. It should be fully machine processable. Today, industry standards exist that greatly improve interoperability with significantly reduced effort. They do so by standardizing vocabularies and concepts.
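
To make "machine processable" concrete, the sketch below builds a JSON-LD-style description of a sensor using terms from a published vocabulary (the W3C SOSA ontology is used as an example; the identifiers and values are hypothetical), which a consuming subsystem could interpret without bespoke integration code:

```python
import json

# Illustrative sketch: a machine-processable description of a sensor using shared
# vocabulary terms, so another subsystem can check the type and unit programmatically
# instead of reading a manual. Identifiers and readings are invented for the example.

sensor_description = {
    "@context": {
        "sosa": "http://www.w3.org/ns/sosa/",
        "qudt": "http://qudt.org/schema/qudt/",
    },
    "@id": "urn:example:sensor:temp-42",
    "@type": "sosa:Sensor",
    "sosa:observes": {"@id": "urn:example:property:room-temperature"},
    "sosa:madeObservation": {
        "@type": "sosa:Observation",
        "sosa:hasSimpleResult": 21.5,
        "qudt:unit": "http://qudt.org/vocab/unit/DEG_C",
    },
}

# A consumer can act on the shared terms directly, with no integrator-specific code.
doc = json.loads(json.dumps(sensor_description))
assert doc["@type"] == "sosa:Sensor"
print(json.dumps(doc, indent=2))
```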


GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about

At first glance, GPT-3 seems to have an impressive ability to produce human-like text. And we don’t doubt that it can be used to produce entertaining surrealist fiction; other commercial applications may emerge as well. But accuracy is not its strong point. If you dig deeper, you discover that something’s amiss: although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says. Below are some illustrations of its lack of comprehension—all, as we will see later, prefigured in an earlier critique that one of us wrote about GPT-3’s predecessor. Before proceeding, it’s also worth noting that OpenAI has thus far not allowed us research access to GPT-3, despite both the company’s name and the nonprofit status of its oversight organization. Instead, OpenAI put us off indefinitely despite repeated requests—even as it made access widely available to the media. Fortunately, our colleague Douglas Summers-Stay, who had access, generously offered to run the experiments for us. OpenAI’s striking lack of openness seems to us to be a serious breach of scientific ethics, and a distortion of the goals of the associated nonprofit.


A Google Drive 'Feature' Could Let Attackers Trick You Into Installing Malware

An unpatched security weakness in Google Drive could be exploited by malware attackers to distribute malicious files disguised as legitimate documents or images, enabling bad actors to perform spear-phishing attacks with a comparatively high success rate. The latest security issue—of which Google is aware but, unfortunately, has left unpatched—resides in the "manage versions" functionality offered by Google Drive that allows users to upload and manage different versions of a file, as well as in the way its interface provides the new version of a file to users. ... According to A. Nikoci, a system administrator by profession who reported the flaw to Google and later disclosed it to The Hacker News, the affected functionality allows users to upload a new version with any file extension for any existing file on the cloud storage, even a malicious executable. As shown in the demo videos—which Nikoci shared exclusively with The Hacker News—in doing so, a legitimate version of the file that's already been shared among a group of users can be replaced by a malicious file, which when previewed online doesn't indicate newly made changes or raise any alarm, but when downloaded can be employed to infect targeted systems.


What is Microsoft's MeTAOS?

MeTAOS/Taos is not an OS in the way we currently think of Windows or Linux. It's more of a layer that Microsoft wants to evolve to harness the user data in the substrate to make user experiences and user-facing apps smarter and more proactive.  A job description for a Principal Engineering Manager for Taos mentions the foundational layer: "We aspire to create a platform on top of that foundation - one oriented around people and the work they want to do rather than our devices, apps, and technologies. This vision has the potential to define the future of Microsoft 365 and make a dramatic impact on the entire industry." A related SharePoint/MeTA job description adds some additional context: "We are excited about transforming our customers into 'AI natives,' where technology augments their ability to achieve more with the files, web pages, news, and other content that people need to get their task done efficiently by providing them timely and actionable notifications that understands their intents, context and adapts to their work habits." In short, MeTAOS/Taos could be the next step along the Office 365 substrate path. Microsoft officials haven't said a lot publicly about the substrate, but it's basically a set of storage and other services at the heart of Office 365. 


What Organizations Need to Know About IoT Supply Chain Risk

When it comes to IoT, IT, and OT devices, there is no software bill of materials (SBOM), though there have been some industry calls for one. That means the manufacturer has no obligation to disclose what components make up a device. When a typical device or software vulnerability is disclosed, an organization can fairly easily use tools such as device visibility and asset management to find and patch vulnerable devices on its network. However, without a standard requirement to disclose what components are under the hood, it can be extremely difficult to even identify which manufacturers or devices may be affected by a supply chain vulnerability like Ripple20 unless the vendor confirms it. For organizations, this challenge means pressing manufacturers for information on components when making purchasing decisions. While it is not realistic to base every purchasing decision solely on security, the nature of these supply chain challenges demands at least gathering the information needed to make the best risk calculus. What makes supply chain risk unique is that one vulnerability can affect many types of devices.
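
As a rough sketch of why a software bill of materials would help, the example below (device names, components and versions are entirely made up) matches each device's declared components against a disclosed vulnerable component, turning "which devices are affected?" into a simple lookup:

```python
# Hypothetical sketch: with machine-readable SBOMs, finding devices that embed a
# disclosed vulnerable component becomes a lookup rather than guesswork.
# All names, components and version numbers below are invented for illustration.

device_sboms = {
    "building-controller-A": ["vendor-rtos 4.2", "embedded-tcp-ip-stack 6.0.1", "tls-lib 2.1"],
    "ip-camera-B": ["vendor-rtos 3.9", "open-source-httpd 1.4"],
    "infusion-pump-C": ["embedded-tcp-ip-stack 5.8.0", "vendor-ui 1.0"],
}

# A disclosure like Ripple20 names the affected component and version range.
vulnerable_component = "embedded-tcp-ip-stack"
vulnerable_before = (6, 1)

def is_affected(entry: str) -> bool:
    name, _, version = entry.partition(" ")
    if name != vulnerable_component:
        return False
    major, minor, *_ = (int(p) for p in version.split("."))
    return (major, minor) < vulnerable_before

affected = [dev for dev, components in device_sboms.items()
            if any(is_affected(c) for c in components)]
print("devices needing follow-up with the manufacturer:", affected)
```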



Quote for the day:

"Learning is a lifetime process, but there comes a time when we must stop adding and start updating." -- Robert Braul