Daily Tech Digest - December 06, 2021

Why Qualcomm believes its new always-on camera for phones isn’t a security risk

Judd Heap, VP of Product Management at Qualcomm’s Camera, Computer Vision and Video departments, told TechRadar: “The always-on aspect is frankly going to scare some people, so we wanted to do this responsibly. The low-power aspect, where the camera is always looking for a face, happens without ever leaving the Sensing Hub. All of the AI and the image processing is done in that block, and that data is not even exportable to DRAM. We took great care to make sure that no one can grab that data, so someone can’t watch you through your phone.” This means the data from the always-on camera won’t be usable by other apps on your phone or sent to the cloud. It should stay in this one area of the phone’s chipset - the block Heap refers to as the Sensing Hub - used only for detecting your face. Heap continued, “We added this specific hardware to the Sensing Hub as we believe it’s the next step in the always-on body of functions that need to be on the chip. We’re already listening, so we thought the camera would be the next logical step.”


The HaloDoc Chaos Engineering Journey

The platform is composed of several microservices hosted across hybrid infrastructure elements, mainly on a managed Kubernetes cloud, with an intricately designed communication framework. We also leverage AWS cloud services such as RDS, Lambda and S3, and consume a significant suite of open source tooling, especially from the Cloud Native Computing Foundation landscape, to support the core services. As the architect and manager of site reliability engineering (SRE) at HaloDoc, ensuring smooth functioning of these services is my core responsibility. In this post, I’d like to provide a quick snapshot of why and how we use chaos engineering as one of the means to maintain resilience. While operating a platform of such scale and churn (newer services are onboarded quite frequently), one is bound to encounter some jittery situations. We had a few incidents with newly added services going down that, despite being immediately mitigated, caused concern for our team. In a system with the kind of dependencies we had, it was necessary to test and measure service availability across a host of failure scenarios.
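
Dedicated tooling from the CNCF landscape (LitmusChaos, Chaos Mesh and the like) is the usual way to run such experiments, but the core loop - inject a failure, watch whether the steady state holds - fits in a short sketch. Here is a minimal, hypothetical pod-kill experiment using the official Kubernetes Python client; the namespace and label selector are illustrative, not HaloDoc's actual services:

```python
# Minimal pod-kill experiment: delete one random pod behind a service,
# then let existing monitoring confirm the steady state holds.
# The "orders" namespace and label selector are illustrative.
import random
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("orders", label_selector="app=orders").items
victim = random.choice(pods)
print(f"Deleting pod {victim.metadata.name} to test failover")
v1.delete_namespaced_pod(victim.metadata.name, "orders")
# A fuller experiment would now probe the service's SLOs (error rate,
# latency) and abort or alert if the hypothesis of resilience fails.
```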


Zero trust, cloud security pushing CISA to rethink its approach to cyber services

“When agencies hear the IG say something about how things are going with FISMA, they really pay attention. If we’re in a position to help influence that in a positive way, it’s absolutely critical that we do so,” he said. “We’ve got to pare down what we’re spending on IT and really focus on those things that matter. We have to adjust to a risk management approach in terms of how we apply architecture and capabilities across the enterprise to support the varying degrees of risk that we can absorb or manage within a given agency network. That’s like a huge part of what we need to continue to advocate for. But, to me, that is a significant element of the culture shift that needs to happen.” One way CISA is going to drive some of the culture and technology changes to help agencies achieve a zero trust environment is through the continuous diagnostics and mitigation program. CISA released a request for information for endpoint detection and response capabilities in October that vendors under the CDM program will implement for agencies.


DeFi’s Decentralization Is an Illusion: BIS Quarterly Review

“The decentralised nature of DeFi raises the question of how to implement any policy provisions,” the report said. “We argue that full decentralisation in DeFi is an illusion.” One element that could break this illusion is DeFi’s governance tokens, which are cryptocurrencies that represent voting power in decentralized systems, according to the report. Governance-token holders can influence a DeFi project by voting on proposals or changes to the governance system. These governing bodies are called decentralised autonomous organizations (DAOs), and each one can oversee multiple DeFi projects. “This element of centralisation can serve as the basis for recognising DeFi platforms as legal entities similar to corporations,” the report said. It gave an example of how DAOs can register as limited liability companies in the state of Wyoming. “These groups, and the governance protocols on which their interactions are based, are the natural entry points for policymakers,” the report said. During Monday’s briefing, Shin explained that there are three areas regulators could address through these centralized organizational bodies.


This New Ultra-Compact Camera Is The Size of a Grain of Salt And Takes Stunning Photos

Using a technology known as a metasurface, which is covered with 1.6 million cylindrical posts, the camera is able to capture full-color photos that are as good as images snapped by conventional lenses some half a million times bigger than this particular camera. And the super-small contraption has the potential to be helpful in a whole range of scenarios, from helping miniature soft robots explore the world, to giving experts a better idea of what's going on deep inside the human body. "It's been a challenge to design and configure these little microstructures to do what you want," says computer scientist Ethan Tseng from Princeton University in New Jersey. ... One of the camera's special tricks is the way it combines hardware with computational processing to improve the captured image: Signal processing algorithms use machine learning techniques to reduce blur and other distortions that otherwise occur with cameras this size. The camera effectively uses software to improve its vision.
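
The reconstruction step in the Princeton work is a learned neural network, but the underlying idea - the optics produce a known blur (a point spread function) that software then inverts - can be sketched with a classical Wiener deconvolution. In this illustrative snippet the Gaussian PSF is a stand-in, not the real metasurface response:

```python
# Classical baseline for computational deblurring: a Wiener filter
# inverting a known point spread function (PSF). The Gaussian PSF is
# an illustrative stand-in for a real metasurface response.
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, restoration

image = data.camera() / 255.0          # test image scaled to [0, 1]
g = np.exp(-np.linspace(-2, 2, 9) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()                       # normalize the blur kernel

blurred = fftconvolve(image, psf, mode="same")
deblurred = restoration.wiener(blurred, psf, balance=0.01)
# The balance term trades noise amplification against sharpness; the
# Princeton system replaces this hand-tuned step with a network
# learned end-to-end together with the metasurface design.
```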


Top Internet of Things (IoT) Trends for 2022: The Future of IoT

Hyperconnectivity and ultra-low latency are necessary to power successful IoT solutions. 5G is the connectivity that will make more widespread IoT access possible. Currently, cellular companies and other enterprises are working to make 5G technology available in their areas to support further IoT development. Bjorn Andersson, senior director of global IoT marketing at Hitachi Vantara, an IT service management and top-performing IoT company, explained why the next wave of wider 5G access will make all the difference for new IoT use cases and efficiencies. “With commercial 5G networks already live worldwide, the next wave of 5G expansion will allow organizations to digitalize with more mobility, flexibility, reliability, and security,” Andersson said. “Manufacturing plants today must often hardwire all their machines, as Wi-Fi lacks the necessary reliability, bandwidth, or security. 5G delivers the best of two worlds: the flexibility of wireless with the reliability, performance, and security of wires. 5G is creating a tipping point.”


Zero Trust: Time to Get Rid of Your VPN

OAuth and OpenID Connect (OIDC) are standards that enable a token-based architecture, a pattern that fits exceptionally well with a ZTA. In fact, you could argue that zero trust architecture is a token-based architecture. So, how does a token-based architecture work? First, it determines who the user is or what system or service is requesting access. Then, it issues an access token. The token itself will contain different claims, depending on the resource that is being requested as well as contextual information. The claims given in the token can, for example, be determined by a policy engine such as Open Policy Agent (OPA). A policy describes the allowed access and which claims are needed to access certain resources. In the context of the access request, the token service can issue a token with appropriate claims based on that defined policy. Resources that are being accessed need to verify the identity. In modern architectures, this is typically some type of API. When the request to the API is received, the API validates the access token sent with the request. 
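
As a concrete sketch of that last step, here is what access-token validation at an API could look like with the PyJWT library; the issuer, audience, and scope names are assumptions made for the example, not part of any specific product:

```python
# Sketch of access-token validation at an API, following the
# token-based pattern described above. Issuer, audience, and the
# required scope are invented for the example.
import jwt  # pip install pyjwt

def authorize(token: str, public_key: str) -> dict:
    # decode() verifies signature, expiry, audience and issuer;
    # any failure raises a subclass of jwt.InvalidTokenError.
    claims = jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],
        audience="orders-api",
        issuer="https://login.example.com",
    )
    # Enforce the policy-derived claim before serving the resource.
    if "orders:read" not in claims.get("scope", "").split():
        raise PermissionError("token lacks required scope")
    return claims
```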


Breaking Up a Monolithic Database with Kong

The RESTful API architectural style provides an easy way for client applications to gain access to the resources (data) they need to meet business needs. In fact, it did not take long for JavaScript-based frameworks like Angular, React, and Vue to rely on RESTful APIs and lead the market for web-based applications. This pattern of RESTful service APIs and frontend JavaScript frameworks sparked a desire in many organizations to fund projects migrating away from monolithic or outdated applications. The RESTful API pattern also provided a much-needed boost in the technology economy, which was still recovering from the impact of the Great Recession. ... My recommended approach is to isolate a given microservice with a dedicated database. This allows the count and size of the related components to match user demand while avoiding additional costs for elements that do not have the same levels of demand. Database administrators are quick to defend the single-database design by noting the benefits that constraints and relationships can provide when all of the elements of the application reside in a single database.


Securing identities for the digital supply chain

As the world becomes more connected, governing and securing digital certificates is a business essential. As certificates’ lifespans continue to shrink, enterprises need to deploy ever more of them into their digital infrastructure. With greater numbers of certificates entering an organisation’s cyber space, there is more room for dangerous expirations to go unnoticed. From business-ending outages to crippling cyber attacks, the potential downside to bad management of this vital utility is huge. Unfortunately, digital certificates are still woefully mismanaged by businesses and governments worldwide. The volume of certificates being used to secure digital identities is growing exponentially, and businesses are faced with new management challenges that can’t be solved with legacy certificate automation models or outdated on-premises solutions. ... Today’s digital-first enterprise requires a modern approach to managing the exponential growth of certificates, regardless of the issuing certificate authority (CA), and one built to work within today’s complex zero trust IT infrastructure.
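
Catching dangerous expirations before they bite does not require much machinery to get started. A minimal sketch using only the Python standard library, with illustrative hostnames standing in for a real certificate inventory:

```python
# Minimal expiry check for the certificate a host actually serves.
# Hostnames are illustrative; a real program would walk an inventory.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port), timeout=5),
                         server_hostname=host) as tls:
        cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - datetime.now(timezone.utc).timestamp()) // 86400)

for host in ("example.com", "www.example.org"):
    print(host, days_until_expiry(host), "days remaining")
```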


Lightweight External Business Rules

Traditional rule engines that enable domain experts to author rule sets and behaviors outside the codebase are highly useful in a complex, large business landscape. But for smaller, less complex systems they often turn out to be overkill and remain underutilised, given the recurring cost of the on-premises or cloud infrastructure they run on, licence costs, and so on. For a small team, adding any component that requires an additional skill set is a waste of its bandwidth, and some of the commercial rule engines have steep learning curves. In this article, we attempt to illustrate how we succeeded in maintaining rules outside the source code of a medium-scale system running on a Java tech stack (Spring Boot), making it easier for other users to customize those rules. This approach suits a team that cannot afford a dedicated rule engine with its infrastructure, maintenance and recurring costs, and whose domain experts have a software foundation, or whose members wear multiple hats.
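
The article's implementation targets a Java/Spring Boot stack; the pattern itself - rules kept as data, loaded at runtime, editable without a redeploy - is language-agnostic. A minimal sketch of the idea in Python, with invented rule fields for illustration:

```python
# Rules-as-data sketch: conditions live in a JSON file outside the
# codebase, so domain experts can change them without a release.
# The rule fields ("field", "op", "value", "discount") are invented.
import json
import operator

OPS = {">=": operator.ge, "<": operator.lt, "==": operator.eq}

def load_rules(path="discount_rules.json"):
    # Example file content:
    # [{"field": "total", "op": ">=", "value": 100, "discount": 0.05}]
    with open(path) as f:
        return json.load(f)

def best_discount(order, rules):
    discount = 0.0
    for rule in rules:
        if OPS[rule["op"]](order[rule["field"]], rule["value"]):
            discount = max(discount, rule["discount"])
    return discount
```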



Quote for the day:

"Coaching is unlocking a person's potential to maximize their own performance. It is helping them to learn rather than teaching them." -- John Whitmore

Daily Tech Digest - December 05, 2021

How Data Scientists Can Improve Their Coding Skills

Learning is incremental by nature and builds upon what we already know. Learning should not be drastically distant from our existing knowledge graph, which makes self-reflection increasingly important. ... After reflecting on what we have learned, the next step is to teach others with no prior exposure to the content. If we truly understand it, we can break the concept into multiple digestible modules and make it easier to understand. Teaching takes place in different forms. It could be a tutorial, a technical blog, a LinkedIn post, a YouTube video, etc. I’ve been writing long-form technical blogs on Medium and shorter-form Data Science primers on LinkedIn for a while. In addition, I’m experimenting with YouTube videos, which provide a great supplementary channel for learning. Without these two ingredients, my Data Science journey would have been bumpier and more challenging. Honestly, all of my aha moments come after extensive reflection and teaching, which is my biggest motivation to be active on multiple platforms.


5 Dashboard Design Best Practices

From a design perspective, anything that doesn’t convey useful information should be removed. Things that don’t add value, like chart grids or decorations, are prime examples. This can also include things that look cool but don’t really add anything to the dashboard, like a gauge chart where a simple number would give the user the same information while taking up less space. If you are conflicted, you should probably err on the side of caution and remove anything that doesn’t add functional value. Space is a prized dashboard commodity, so you don’t want to waste any of it on things that are just there to look pretty. Using proportion and relative sizing to display differences in data is another way to make data easier for viewers to quickly understand. Things like bubble charts, area charts or Sankey diagrams can be used to visually show differences that can be understood at a glance. The purpose of a dashboard is to convey information efficiently so users can make better decisions. This means you shouldn’t try to mislead people or steer them toward a certain decision.


From The Great Resignation To The Great Migration

Much has been written about The Great Resignation, the trend for over 3.4% of the US workforce to leave their jobs every month. Yes, the trend is real: companies like Amazon are losing more than a third of their workers each year, forcing employers to ramp up hiring like we have never seen before. But while we often blame the massive quit rate on the pandemic, let me suggest that something else is going on. This is a massive and possibly irreversible trend: that of giving workers a new sense of mobility they’ve never had before. Consider a few simple facts. More than 45% of employees now work remotely (25% full time), which means changing jobs is as simple as getting a new email address. Only 11% of companies offer formal career programs for employees, so in many cases the only opportunity to grow is by leaving. And wages, benefits, and working conditions are all a “work in process.” Today US companies spend 32% of their entire payroll on benefits, and most are totally redesigning them to improve healthcare, flexibility, and education.


How Much Has Quantum Computing Actually Advanced?

Everyone's working hard to build a quantum computer. And it's great that there are all these systems people are working on. There's real progress. But if you go back to one of the points of the quantum supremacy experiment—and something I've been talking about for a few years now—one of the key requirements is gate errors. I think gate errors are way more important than the number of qubits at this time. It's nice to show that you can make a lot of qubits, but if you don't make them well enough, it's less clear what the advance is. In the long run, if you want to do a complex quantum computation, say with error correction, you need way below 1% gate errors. So it's great that people are building larger systems, but it would be even more important to see data on how well the qubits are working. In this regard, I am impressed with the group in China who reproduced the quantum supremacy results, where they show that they can operate their system well with low errors.
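
A back-of-the-envelope calculation shows why “way below 1%” matters: with independent errors at rate p per gate, a circuit of N gates runs error-free with probability roughly (1 - p)^N. A quick sketch:

```python
# Back-of-the-envelope: probability that an N-gate circuit runs with
# no error at all, assuming independent gate errors at rate p.
for p in (1e-2, 1e-3, 1e-4):
    for n_gates in (100, 1_000, 10_000):
        ok = (1 - p) ** n_gates
        print(f"p={p:.0e}, gates={n_gates:>6}: P(no error) = {ok:.3g}")
# At p = 1%, a 1,000-gate circuit succeeds only ~0.004% of the time,
# which is why error correction assumes far lower physical error rates.
```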


How Banks Can Bridge The Data Sharing Privacy Gap

Consent management rules regarding online advertising data collection may be tightening in numerous European Union markets. The Belgian Data Protection Authority recently alleged that online advertising trade organization IAB Europe’s Transparency and Consent Framework (TCF) breaches the EU’s General Data Protection Regulation (GDPR). Statements from the Irish Council for Civil Liberties (ICCL), one of the legal coordinators on the case, also alleged IAB Europe was aware its consent popups violated GDPR. The case highlights why EU entities must pay careful attention to how consent management standards are changing to ensure they remain compliant. Experts also predict that GDPR regulatory oversight surrounding consent management will increase in 2022, meaning organizations must carefully look at how they structure consent boxes and other forms provided to customers. It is also becoming increasingly important for consumers to understand what data they share and which entities may access their information.


ECB Paper Marks Success Factors for CBDCs, Digital Euro

The first one is ‘merchant acceptance’, which has to be wide, meaning users should be able to pay digitally anywhere. Unlike paper cash, a digital currency is likely to come with fees for each transaction and require dedicated devices to process the payments. There are other differences as well, despite both forms of money having legal tender status. The ECB elaborates: ... The second success factor has been defined as ‘efficient distribution.’ The ECB officials quote a Eurosystem report, according to which a digital euro should be distributed by supervised intermediaries such as banks and regulated payment providers. To encourage the distribution of the central bank digital currency, incentives may be paid to supervised intermediaries. The document divides intermediary services into two categories: onboarding and funding services — which would include operations required to open, manage, and close a CBDC account — and payment services.


Let there be light: Ensuring visibility across the entire API lifecycle

When approaching API visibility, the first thing we have to recognize is that today's enterprises actively avoid managing all their APIs through one system. According to IBM's Tony Curcio, Director of Integration Engineering, many of his enterprise customers already work with hybrid architectures that leverage classic on-premise infrastructure while adopting SaaS and IaaS across various cloud vendors. These architectures aim to increase resilience and flexibility, but at the cost of complicating centralization efforts. In these organizations, it is imperative to have a centralized API location with deployment into each of these locations, to ensure greater visibility and better management of API-related business activities. The challenge for security teams is that there isn't one central place where all APIs are managed by the development team - and as time passes, that complexity is likely to only get worse.


DevOps for Quantum Computing

Like any other Azure environment, quantum workspaces and the classical environments can be automatically provisioned by deploying Azure Resource Manager templates. These JavaScript Object Notation (JSON) files contain definitions for the two target environments. The quantum environment contains all resources required for executing quantum jobs and storing input and output data: an Azure Quantum workspace connecting hardware providers, and its associated Azure Storage account for storing job results after they are complete. This environment should be kept in its own resource group, which allows the lifecycle of these resources to be separated from that of the classical resources. The classical environment contains all other Azure resources you need for executing the classical software components. The types of resources here depend heavily on the selected compute model and the integration model, and you would often recreate this environment with each deployment. You can store and version both templates in a code repository (for example, Azure Repos or GitHub repositories).
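
As a sketch of how a pipeline step might apply the two templates, the following shells out to the Azure CLI; the resource group and template file names are assumptions for illustration:

```python
# Pipeline-step sketch: apply the two ARM templates into separate
# resource groups via the Azure CLI. Group and file names are
# illustrative placeholders.
import subprocess

DEPLOYMENTS = [
    ("rg-quantum", "quantum-environment.json"),      # long-lived
    ("rg-classical", "classical-environment.json"),  # recreated per release
]

for group, template in DEPLOYMENTS:
    subprocess.run(
        ["az", "deployment", "group", "create",
         "--resource-group", group,
         "--template-file", template],
        check=True,  # fail the pipeline step if a deployment fails
    )
```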


Is the UK government’s new IoT cybersecurity bill fit for purpose?

The bill outlines three key areas of minimum security standards. The first is a ban on universal default passwords — such as “password” or “admin” — which are often preset in a device’s factory settings and are easily guessable. The second will require manufacturers to provide a public point of contact to make it simpler for anyone to report a security vulnerability. And, the third is that IoT manufacturers will also have to keep customers updated about the minimum amount of time a product will receive vital security updates. This new cybersecurity regime will be overseen by an as-yet-undesignated regulator, that will have the power to levy GDPR-style penalties; companies that fail to comply with PSTI could be fined £10 million or 4% of their annual revenue, as well as up to £20,000 a day in the case of an ongoing contravention. On the face of it, the PSTI bill sounds like a step in the right direction, and the ban on default passwords especially has been widely commended by the cybersecurity industry as a “common sense” measure.


Werner Vogels’ 6 Rules for Good API Design

Once an API is created, it should never be deleted or changed. “Once you put an API out there, businesses will build on top of it,” Vogels said, adding that changing the API will basically break their businesses. Backward compatibility is a must. This is not to say you can’t modify or improve the API, but whatever changes you make shouldn’t alter it in a way that breaks calls coming in from previous versions. As an example, AWS has enhanced its Simple Storage Service (S3) in multiple ways since its launch in 2006, but the first-generation APIs are still supported. The way to design APIs is not to start with what the engineers think would make for a good API. Instead, figure out what your users need from the API first, and then “work backwards from their use cases. And then come up with a minimal and simplest form of API that you can actually offer,” Vogels said. As an example, Vogels described an advertisement system that can be used for multiple campaigns.
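
What backward-compatible evolution looks like in code: new capabilities arrive as optional, defaulted inputs, and response fields are only ever added, so version-one callers keep working. A hypothetical sketch with invented names:

```python
# Additive, backward-compatible evolution of a handler. Version-one
# callers pass only campaign_id; a later release adds an optional
# region filter without breaking them. All names are invented.
def get_campaign_stats(campaign_id, region=None):
    stats = {"campaign_id": campaign_id, "impressions": 0}
    if region is not None:       # new, optional behaviour
        stats["region"] = region
    # Response fields are only ever added, never renamed or removed,
    # so old clients keep parsing responses successfully.
    return stats

# A v1 call still works unchanged:
print(get_campaign_stats("summer-sale"))
# Newer callers can opt in to the new capability:
print(get_campaign_stats("summer-sale", region="eu-west"))
```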



Quote for the day:

"Leaders are visionaries with a poorly developed sense of fear and no concept of the odds against them." -- Robert Jarvik

Daily Tech Digest - December 04, 2021

Universal Stablecoins, the End of Cash and CBDCs: 5 Predictions for the Future of Money

Many of the features that decentralized finance, or DeFi, brings to the table will be copied by regular finance in the future. For instance, there’s no reason that regular finance can’t copy the automaticity and programmability that DeFi offers, without bothering with the blockchain part. Even as regular finance copies the useful bits from DeFi, DeFi will emulate regular finance by pulling itself into the same regulatory framework. That is, DeFi tools will become compliant with anti-money laundering/know your customer (AML/KYC) rules, registered with the Securities and Exchange Commission, or licensed with the Office of the Comptroller of the Currency (OCC). And not necessarily because they are forced to do so. (It’s hard to force a truly decentralized protocol to do anything.) Tools will comply voluntarily. Most of the world’s capital is licit capital. Licit capital wants to be on regulated venues, not illegal ones. To capture this capital, DeFi has no choice but to get compliant. The upshot is that over time DeFi and traditional finance (TradFi) will blur together.


10 Rules for Better Cloud Security

Security in the cloud is following a pattern known as the shared responsibility model, which states that the provider is only responsible for security ‘of’ the cloud, while customers are responsible for security ‘in’ the cloud. This essentially means that to operate in the cloud, you still need to take your share of work for secure configuration and management. The scope of your commitment can vary widely because it depends on the services you are using: if you’ve subscribed to an Infrastructure as a Service (IaaS) product, you are responsible for OS patches and updates. If you only require object storage, your responsibility scope will be limited to data loss prevention. Despite this great diversity, there are some guidelines that apply no matter what your situation is. And the reason for this is simply because all the cloud vulnerabilities are essentially reduced to one thing: misconfigurations. Cloud providers have put at your disposal powerful security tools, yet we know that they will fail at some point. People make mistakes, and misconfigurations are easy. 
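
If cloud vulnerabilities essentially reduce to misconfigurations, then auditing configuration is the customer's share of the work. As one small illustration, a sketch with boto3 that flags S3 buckets whose public access is not fully blocked - a common starting point for such audits:

```python
# Flag S3 buckets whose public access is not fully blocked - one
# small example of auditing your side of the shared responsibility
# model with boto3.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        blocked = all(cfg["PublicAccessBlockConfiguration"].values())
    except ClientError:
        # No public-access-block configuration exists at all.
        blocked = False
    if not blocked:
        print(f"review bucket: {name} (public access not fully blocked)")
```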


Unit testing vs integration testing

Tests need to run to be effective. One of the great advantages of automated tests is that they can run unattended. Automating tests in CI/CD pipelines is considered a best practice, if not mandatory according to most DevOps principles. There are multiple stages when the system can and should trigger tests. First, tests should run when someone pushes code to one of the main branches. This situation may be part of a pull request. In any case, you need to protect the actual merging of code into main branches to make sure that all tests pass before code is merged. Set up CD tooling so code changes deploy only when all tests have passed. This setup can apply to any environment or just to the production environment. This failsafe is crucial to avoid shipping quick fixes for issues without properly checking for side effects. While the additional check may slow you down a bit, it is usually worth the extra time. You may also want to run tests periodically against resources in production, or some other environment. This practice lets you know that everything is still up and running. Service monitoring is even more important to guard your production environment against unwanted disruptions.
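
One common way to wire this into the stages above is to tag slow tests and select them per pipeline stage. A sketch using pytest markers; the “integration” marker is a project convention that has to be registered, not something built into pytest:

```python
# Marker-based split: unit tests run on every push, tests tagged
# "integration" run only at the later pipeline gate. Register the
# marker in pytest.ini:  markers = integration
import pytest

def test_discount_math():
    # Unit test: pure logic, no external systems.
    assert round(100 * 0.05, 2) == 5.0

@pytest.mark.integration
def test_order_roundtrip(tmp_path):
    # Integration test: touches real I/O (a temp file stands in here).
    f = tmp_path / "order.txt"
    f.write_text("order-42")
    assert f.read_text() == "order-42"

# Pipeline usage:
#   on push / pull request:  pytest -m "not integration"
#   pre-deploy gate:         pytest -m integration
```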


Vulnerability Management | A Complete Guide and Best Practices

Managing vulnerabilities helps organizations avoid unauthorized access, illicit credential usage, and data breaches. This ongoing process starts with a vulnerability assessment. A vulnerability assessment identifies, classifies, and prioritizes flaws in an organization's digital assets, network infrastructure, and technology systems. Assessments are typically recurring and rely on scanners to identify vulnerabilities. Vulnerability scanners look for security weaknesses in an organization's network and systems. Vulnerability scanning can also identify issues such as system misconfigurations, improper file sharing, and outdated software. Most organizations first use vulnerability scanners to capture known flaws. Then, for more comprehensive vulnerability discovery, they use ethical hackers to find new, often high-risk or critical vulnerabilities. Organizations have access to several vulnerability management tools to help look for security gaps in their systems and networks.


How Web 3.0 is Going to Impact the Digital World?

The concept of a trustless network is not new. The exclusion of any so-called “trusted” third parties from virtual transactions or interactions has long been an in-demand ideology. Considering how data theft is a prominent concern among internet users worldwide, trusting third parties with our data doesn’t seem right. Trustless networks ensure that no intermediaries interfere in any online transactions or interactions. A close example of trustlessness is the uber-popular blockchain technology. Blockchain is mostly used in transactions involving cryptocurrencies. It defines a protocol under which only the individuals participating in a transaction are connected, in a peer-to-peer manner. No intermediary is involved. Social media enjoys immense popularity today. And understandably so, for it allows us to connect and interact with known ones and strangers sans any geographical limits. But the firms that own social media platforms are few, and these few firms hold the information of millions of people. Sounds scary, right?


Is TypeScript the New JavaScript?

As a static language, TypeScript performs type checks upon compilation, flagging type errors and helping developers spot mistakes early on in development. Reducing errors when working with large codebases can save hours of development time. Clear and readable code is easy to maintain, even for newly onboarded developers. Because TypeScript calls for assigning types, the code instantly becomes easier to work with and understand. In essence, TypeScript code is self-documenting, allowing distributed teams to work much more efficiently. Teams don’t have to spend inordinate amounts of time familiarizing themselves with a project. TypeScript’s integration with editors also makes it much easier to validate the code thanks to context-aware suggestions. TypeScript can determine what methods and properties can be assigned to specific objects, and these suggestions tend to increase developer productivity. TypeScript is widely used to automate the deployment of infrastructure and CI/CD pipelines for backend and web applications. Moreover, the client part and the backend can be written in the same language—TypeScript.


4 signs you’re experiencing burnout, according to a cognitive scientist

One key sign of burnout is that you don’t have motivation to get any work done. You might not even have the motivation to want to come to work at all. Instead, you dread the thought of the work you have to do. You find yourself hating both the specific tasks you have to do at work, as well as the mission of the organization you’re working for. You just can’t generate enthusiasm about work at all. A second symptom is a lack of resilience. Resilience is your ability to get over a setback and get yourself back on course. It’s natural for a failure, bad news, or criticism to make you feel down temporarily. But, if you find yourself sad or angry for a few days because of something that happened at work, your level of resilience is low. When you’re feeling burned out, you also tend to have bad interactions with your colleagues and coworkers. You find it hard to resist saying something negative or mean. You can’t hide your negative feelings about things or people that can upset others. In this way, your negative feelings about work become self-fulfilling, because they actually create more unpleasant situations.


Spotting a Modern Business Crisis — Before It Strikes

Modern technologies such as more-efficient supply chain operations, the internet, and social media have not only increased the pace of change in business but have also drawn more attention to its impact on society. Fifty years ago, oversight of companies was largely the domain of regulatory agencies and specialized consumer groups. What the public knew was largely defined by what businesses were required to disclose. Today, however, public perception of businesses is affected by a diverse range of stakeholders — consumers, activists, local or national governments, nongovernmental organizations, international agencies, and religious, cultural, or scientific groups, among others. ... There are a few ways businesses can identify risks. One, externalize expertise through insurance and consulting companies that identify sociopolitical or climate risks. Two, hire the right talent for risk assessment. Three, rely on government agencies, media, industry-specific institutions, or business leaders’ own experience of risk perception. A fail-safe approach is to use all three mechanisms in tandem, if possible.


Today’s Most Vital Question: What is the Value of Your Data?

Data has latent value; that is, data has potential value that has not yet been realized. The possession of data in and of itself provides zero economic value; in fact, possessing data carries storage, management, security, and backup costs and potential regulatory and compliance liabilities. ... Data must be “activated,” or put into use, in order to convert that latent (potential) value into kinetic (realized) value. The key is getting the key business stakeholders to envision where and how to apply data (and analytics) to create new sources of customer, product, service, and operational value. The good news is that most organizations are very clear as to where and how they create value. ... The value of the organization’s data is tied directly to its ability to support quantifiable business outcomes or Use Cases. ... Many data management and data governance projects stall out because organizations lack a business-centric methodology for determining which of their data sources are the most valuable.


Federal watchdog warns security of US infrastructure 'in jeopardy' without action

The report was released in conjunction with a hearing on securing the nation’s infrastructure held by the House Transportation and Infrastructure Committee on Thursday. Nick Marinos, the director of Information Technology and Cybersecurity at GAO, raised concerns in his testimony that the U.S. is “constantly operating behind the eight ball” on addressing cyber threats. “The reality is that it just takes one successful cyberattack to take down an organization, and each federal agency, as well as owners and operators of critical infrastructure, have to protect themselves against countless numbers of attacks, and so in order to do that, we need our federal government to be operating in the most strategic way possible,” Marinos testified to the committee. According to the report, GAO has made over 3,700 recommendations related to cybersecurity at the federal level since 2010, and around 900 of those recommendations have not been addressed. Marinos noted that 50 of the unaddressed concerns are related to critical infrastructure cybersecurity.



Quote for the day:

"Self-control is a critical leadership skill. Leaders generally are able to plan and work at a task over a longer time span than those they lead." -- Gerald Faust

Daily Tech Digest - December 03, 2021

IT threat evolution Q3 2021

Earlier this year, while investigating the rise of attacks against Exchange servers, we noticed a recurring cluster of activity that appeared in several distinct compromised networks. We attribute the activity to a previously unknown threat actor that we have called GhostEmperor. This cluster stood out because it used a formerly unknown Windows kernel mode rootkit that we dubbed Demodex; and a sophisticated multi-stage malware framework aimed at providing remote control over the attacked servers. The rootkit is used to hide the user mode malware’s artefacts from investigators and security solutions, while demonstrating an interesting loading scheme involving the kernel mode component of an open-source project named Cheat Engine to bypass the Windows Driver Signature Enforcement mechanism. ... The majority of GhostEmperor infections were deployed on public-facing servers, as many of the malicious artefacts were installed by the httpd.exe Apache server process, the w3wp.exe IIS Windows server process, or the oc4j.jar Oracle server process.


USB Devices the Common Denominator in All Attacks on Air-Gapped Systems

There have been numerous instances over the past several years where threat actors managed to bridge the air gap and access mission-critical systems and infrastructure. The Stuxnet attack on Iran — believed to have been led by US and Israeli cybersecurity teams — remains one of the most notable examples. In that campaign, operatives managed to insert a USB device containing the Stuxnet worm into a target Windows system, where it exploited a vulnerability (CVE-2010-2568) that triggered a chain of events that eventually resulted in numerous centrifuges at Iran's Natanz uranium enrichment facility being destroyed. Other frameworks that have been developed and used in attacks on air-gapped systems over the years include South Korean hacking group DarkHotel's Ramsay, China-based Mustang Panda's PlugX, the likely NSA-affiliated Equation Group's Fanny, and China-based Goblin Panda's USBCulprit. ESET analyzed these malware frameworks, and others that have not been specifically attributed to any group, such as ProjectSauron and agent.btz.


How to do data science without big data

When you have visibility on the organizational strategy and the business problems to be solved, the next step is to finalize your analytics approach. Find out whether you need descriptive, diagnostic, or predictive analytics and how the insights will be used. This will clarify the data you should collect. If sourcing data is a challenge, phase out the collection process to allow for iterative progress with the analytics solution. For example, executives at a large computer manufacturer we worked with wanted to understand what drove customer satisfaction, so they set up a customer experience analytics program that started with direct feedback from the customer through voice-of-customer surveys. Descriptive insights presented as data stories helped improve the net promoter scores during the next survey. Over the next few quarters, they expanded their analytics to include social media feedback and competitor performance using sources such as Twitter, discussion forums, and double-blind market surveys. To analyze this data, they used advanced machine learning techniques.


Applying Social Leadership to Enhance Collaboration and Nurture Communities

Social leadership seems to differ as it is not a form of leadership that is granted, as is often the case in formal hierarchical environments. Organisations that have more “traditional management” structures and approaches tend to grant managers authority, accountabilities and power. Also, as I imagine you have seen, there has been much commentary over the years on the fact that management and leadership are not the same things. Some years ago when I was undertaking the Chartered Manager program with the Chartered Management Institute (CMI), I came across the definition that Management is “doing things right,” whereas leadership is “doing the right thing”. I find this succinct explanation of the difference refreshing and have continued to use this within my own coaching and mentoring work since. It feels to me that “doing the right thing” is the modus operandi of the social leader. Also, we talk a lot about the problems with accidental managers: those who have been promoted into managerial roles, often by having in the past been successful in their technical domains.


Report: APTs Adopting New Phishing Methods to Drop Payload

"When an RTF Remote Template Injection file is opened using Microsoft Word, the application will retrieve the resource from the specified URL before proceeding to display the lure content of the file. This technique is successful despite the inserted URL not being a valid document template file," Raggi says. Researchers demonstrated a process in which the RTF file was weaponized to retrieve the documentation page for RTF version from a URL at the time the file is opened. "The technique is also valid in the .rtf file extension format, however a message is displayed when opened in Word which indicates that the content of the specified URL is being downloaded and in some instances an error message is displayed in which the application specifies that an invalid document template was utilized prior to then displaying the lure content within the file," Raggi says. The weaponization part of the RTF file is made possible by creating or altering an existing RTF file’s document property bytes using a hex editor, which is a computer program that allows for manipulation of the fundamental binary data.


A blockchain connected motorbike: what Web 3.0 means for mobility and why you should care

We’ve been hearing about the potential of Web 3.0 for years – a decentralized web where information is distributed across nodes, making it more resistant to shutdowns and censorship. Specifically, its foundation lies in edge computing, artificial intelligence, and decentralized data networks. But what we haven’t talked enough about is the massive impact Web 3.0 will have on mobility. Web 3.0 aims to build a new scalable economy where transactions are powered by blockchain technology, eschewing the need for a central intermediary or platform. And in the mobility space, there are lots of things happening. ... Pave Bikes connect to a private blockchain network. When you get your bike, you receive a non-fungible token (NFT). This is effectively a private key, or token, based on ERC721. It is used to unlock the ebike via the Pave+ App. To be exact, the Pave mobile app is technically a dApp, a decentralized application connected to the blockchain. It enables riders to securely authenticate their proof of purchase and access their bike using Bluetooth, even without an internet connection.


Open banking will continue its exponential rise in the UK in 2022

Over the next year and beyond, it will be interesting to see how Variable Recurring Payments (VRPs) will continue to develop to allow businesses to connect to authorised payment providers to make payments on the customer’s behalf. Direct debits - the main mechanism in use today - are expensive, slow and saddled with a painful, mainly paper-based process. This is long overdue for digital transformation. I anticipate 2022 will be the year we begin to see VRPs in full effect. This will provide countless opportunities for consumers to find new ways to manage their finances. As VRPs progress, we will discover that they will do far more than simply paying bills and will unlock aspects of smart saving, one-click payments, and control over subscriptions. It will also be important to address issues that work against the great benefits of open banking in the near future. The 90-day reauthorisation rule, which requires open banking providers to re-confirm consent with the customer every 90 days, must be addressed. This rule currently undermines the principles of convenience and ease that open banking has been working on showcasing.


Major trends in online identity verification for 2022

As both consumer and investor demand for fintech startups continues to heat up, we expect to see even more neobanks and cryptocurrency investment platforms launching in the coming year. Unfortunately, bad actors are ready and they often target these nascent platforms, with the expectation that fraud prevention may be an afterthought at launch. But we expect that, as these startups go to market, these companies will shift their initial focus from purely optimizing for new user sign-ups to preventing fraud on their platforms, shifting from the required risk and compliance checks to more comprehensive anti-fraud solutions. Fortunately, there are ID verification solutions that can help with both, preventing fraud while still optimizing for sign-up conversions. Likewise, the tight hiring market for software developers will lead these new fintech firms to look for no-code or low-code ID verification and compliance solutions, rather than attempting to build them in-house.


AI-Based Software Testing: The Future of Test Automation

The success of digital technologies, and by extension, businesses, is underpinned by the optimal performance of the software systems that form the core of operations in these enterprises. Many times, such enterprises make a trade-off between delivering a superior user experience and a faster time to market. As a consequence, the quality of the software systems often suffers from inadequacies, and enterprises cannot make much of their early ingress into the market. This results in the loss of revenue and brand value for such enterprises. The alternative is to go for comprehensive and rigorous software testing to find and fix bugs before the actual deployment. In fact, methodologies such as Agile and DevOps have given enterprises the means to achieve both: a superior user experience and a faster time to market. This is where AI-based automation comes into play and makes testing accurate, comprehensive, predictive, cost-effective, and quick. Artificial Intelligence, or AI, has become the buzzword for anything state-of-the-art or futuristic and is poised to make our lives more convenient.


Will Automation Fill Gaps Left by the ‘Great Resignation’?

From Lane’s perspective, the main areas DevOps teams should be looking to automate are continuous integration and continuous delivery (CI/CD), IaC and AIOps-enabled incident management platforms. “By taking the manual nature of day-to-day work off of DevOps engineers’ plates, they are freed to focus on digital transformation,” he said. “The number-one stumbling block is not starting with process.” Lane noted unless you understand all the steps in a procedure that you’re trying to automate, it is very difficult to maximize the power of automation tools. “Much of the process that is still adhered to today is outdated for the digital age,” he said. “Spend the time up front to map out what you hope to achieve with an automation project, what all the touchpoints are and how one can measure the quality of automation when it’s implemented.” Michaels added that while the internet is flooded by companies shouting they have the “best” tools, that proclamation of “best” is going to be determined by budget and known languages.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - December 02, 2021

Web 3.0: The New Internet Is About to Arrive

Some experts believe this decentralized Web, which is also referred to as Web 3.0, will bring more transparency and democratization to the digital world. Web 3.0 may establish a decentralized digital ecosystem where users will be able to own and control every aspect of their digital presence. Some hope that it will put an end to the existing centralized systems that encourage data exploitation and privacy violation. ... As a user, you will have a unique identity on Web 3.0 that will enable you to access and control all your assets, data, and services without logging in on a platform or seeking permission from a particular service provider. You will be able to access the internet from anywhere for free, and you will be the only owner of your digital assets. Apart from experiencing the internet on a screen in 2D, users will also get to participate in a larger variety of 3D environments. From anywhere, you could visit the 3D VR version of any historical place you search, play games while being in the game as a 3D player, or try clothing on your virtual self before you buy.


Report: Aberebot-2.0 Hits Banking Apps and Crypto Wallets

Based on the Aberebot-2 creator's claim and Cyble's findings, the banking malware's new variant appears to have multiple capabilities. It can steal information such as SMS, contact lists and device IPs, and it also can perform keylogging and detection evasion by disabling Play Protect - Google's safety check that is designed to detect spurious apps, according to the researchers. Cyble says the "new and improved" version of the banking Trojan can steal messages from messaging apps and Gmail, inject values into financial applications, collect files on the victim's device and inject URLs to steal cookies. Medhe says that Aberebot-2.0 has 18 different permissions, including internet permission, and 11 of the permissions are dangerous. One key difference between the earlier and the latest version of the Aberebot malware, he says, is the use of the Telegram API. "In the newer version, the malware author has included features such as the ability to inject or modify values in application forms, such as receiver details or the amount during financial transactions."


New Ransomware Variant Could Become Next Big Threat

Symantec's investigation of Yanluowang activity showed the former Thieflock affiliate is using a variety of legitimate and open source tools in its campaign to distribute the ransomware. This has included the use of PowerShell to download a backdoor called BazarLoader for assisting with initial reconnaissance and the subsequent delivery of a legitimate remote access tool called ConnectWise. To move laterally and identify high-value targets, such as an organization's Active Directory server, the threat actor has used tools such as SoftPerfect Network Scanner and Adfind, a free tool for querying AD. "The tool is frequently abused by threat actors to find critical servers within organizations," Neville says. "The tool can be used to extract information pertaining to machines on the network, user account information, and more." Other tools the attacker is using in Yanluowang attacks include several for credential theft, such as GrabFF for dumping passwords from Firefox, a similar tool for Chrome called GrabChrome, and one for Internet Explorer and other browsers called BrowserPassView.


Cloud computing is evolving: Here's where it's going next

"The era of multi-cloud is here, driven by digital transformation, cost concerns and organizations wanting to avoid vendor lock-in. Incredibly, more than half of the respondents of our survey have already experienced business value from a multi-cloud strategy," said Armon Dadgar, co-founder and CTO of HashiCorp, in a statement. "However, not all organizations have been able to operationalize multi-cloud, as a result of skills shortages, inconsistent workflows across cloud environments, and teams working in silos." ... The focus is now on overcoming the various barriers to successful multi-cloud deployment, which include skills shortages and workflow differences between cloud environments. Cloud spend management is a continuing issue, while infrastructure automation tools are becoming increasingly important, particularly when it comes to provisioning and application deployment. In five years' time, we won't be talking about the pros and cons of hybrid/multi-cloud architecture. Instead, the discussion will be all about enterprises as efficient developers of industry-specific cloud-native apps, and automatic, optimised and AI-driven workload deployment.


Recovering from ransomware: One organisation’s inside story

As far as the ransom demand itself was concerned, the service provider warned that it was important Manutan not respond, even more so that it not pay. In the case of this particular gang, as soon as the victim shows up to negotiate, the criminals activate a three-week timer at the end of which – if there is no resolution – they make good on a series of threats, disclosing the victim’s sensitive information and irreparably destroying the data. Therefore, to pretend that Manutan had not yet realised it had been attacked – in effect, to play dead – would serve to buy it valuable time. In terms of actually paying, this could make the gang ask for more and would not provide any guarantee that the data would be recovered. “We spent time determining what data they had recovered and the risk it posed. We concluded that it was not critical – for example, they did not access our contracts with suppliers. Then we evaluated our ability to put a functioning IT system back together, which we could do, and we decided that we would not pay,” says Marchandiau.


How Decryption of Network Traffic Can Improve Security

Today, it’s nearly impossible to tell the good from the bad without the ability to decrypt traffic securely. The ability to remain invisible has given cyberattackers the upper hand. Encrypted traffic has been exploited in some of the biggest cyberattacks and exploit techniques of the past year, from Sunburst and Kaseya to PrintNightmare and ProxyLogon. Attack techniques such as living-off-the-land and Active Directory Golden Ticket are only successful because attackers can exploit organizations’ encrypted traffic. Ransomware is also top of mind for enterprises right now, yet many are crippled by the fact that they cannot see what is happening laterally within the east-west traffic corridor. Organizations have been wary to embrace decryption due to concerns around compliance, privacy and security, as well as performance impacts and high compute costs. But there are ways to decrypt traffic without compromising compliance, security, privacy or performance. Let’s debunk some of the common myths and misconceptions.


5 (more) Common Misconceptions about Scrum

Many people think that Scrum Team members shouldn’t be assigned to a team part-time. However, there is nothing in the Scrum Guide prohibiting it. There are, of course, trade-offs for part-time Scrum Team members. If too many individuals are part-time, the team may not accomplish as much meaningful work during a Sprint. Additionally, with part-time members it can be more difficult for the team to learn how much work they can achieve during a Sprint, particularly if a member’s part-time status fluctuates. Moreover, if the part-time members support multiple Scrum Teams, they can feel exhausted attending numerous Daily Scrum meetings and splitting their focus. The Scrum Team should consider these trade-offs when self-organizing into teams that include part-time members. ... Timeboxes are an essential part of all Scrum events because they help limit waste and support empiricism, making decisions based on what is known. For example, the result of the Sprint Planning event should be enough of a plan for the team to get started. 



What Will AI Bring to the Cybersecurity Space in 2022

When you deploy AI to monitor your company network, for example, it creates an activity profile for every user in that network. What files they access, what apps they use, when, and where. If that behavior suddenly changes, the user is flagged for a deep scan. This is a vast improvement in threat detection. Currently, a lot of time is lost before an attack is even noticed. According to IBM’s 2020 Data Breach Report, businesses take 280 days on average to detect and contain a breach. That’s plenty of time for hackers to cause massive damage. AI cuts that time short. It instantly spotlights irregularities, allowing businesses to contain breaches fast. One of the major issues with this, however, is the fact that there's always a strong risk that some clean behaviors may appear as though they are problematic when they're not. Current generation ML-based threat detection algorithms rely almost exclusively on the adaptation of neural networks that more or less replicate the perceived functioning of human thought patterns. These systems use validation subroutines that crosscheck behavior patterns against previous behaviors.


So far, only 9 countries have commercialized 5G mmWave. However, this is not surprising given that the main restriction of mmWave transmissions is their low propagation range. Telecom companies would not employ the mmWave frequency band for national coverage. Looking at telecom operators’ deployment strategies, we can see that low-frequency bands (for example, 700 MHz) are used for national coverage, whereas sub-6 GHz bands are utilized for city coverage, and mmWave is used for megacity hotspots. ... One crucial part of deploying a large-scale 5G network employing massive MIMO gear is that the radio must be lightweight and have a compact footprint, as these characteristics will help operators save significant money on overall deployment. This is where silicon comes in. Si’s performance will have a huge influence on a radio’s essential aspects, such as connectivity, capacity, power consumption, product size, and weight, and, ultimately, cost. In the 5G system sector, all of these are critical.


7 ways to balance agility and planning

By building learning and development (L&D) into planning, your organization can enhance employee engagement and investment in strategic goals. A Quantum Workplace trend report found employee engagement was at its peak in 2020 (up 3 percent from 2019), with 77 percent of employees reporting high engagement. Spring and fall of 2020 indicated the greatest engagement levels at 80 percent, with a 7 percent drop by the summer of 2021. Leadership communication also tapered off since the emergence of COVID, creating a downward trend in employees’ transparency, communication, and leadership trust perceptions. Consequently, many employees felt their career paths were stunted or unclear. These findings underscore the importance of L&D in keeping employees engaged and motivated and in fostering more consistent communication between managers and their teams. From the organization’s perspective, employees are encouraged to flex their adaptability muscles as they learn, galvanizing them to become more agile and enabling the organization to pivot efficiently.



Quote for the day:

"It is, after all, the responsibility of the expert to operate the familiar and that of the leader to transcend it." -- Henry A. Kissinger

Daily Tech Digest - December 01, 2021

Does Your Organization Need a Data Diet?

The scenario is all-too-familiar: There’s a security breach, and afterward, the affected organization asks what it must do to better protect its data. But what if that organization never collected and stored that sensitive information in the first place? Often, the best defense against an embarrassing and costly breach is to collect only data that is essential to an entity’s mission. Some who work in computer privacy circles refer to this as “going on a data diet.” They know the temptation is great for organizations to bulk up on data of all kinds. After all, storage costs are low. With the move to the cloud, an organization doesn’t even need to invest in hardware and upkeep to store the information it collects. So why not grab whatever data a customer or client is willing to provide? Because we live in a time when the only sensible approach to computer security is to wonder when your entity might be breached—not if. That’s why it pays to not only go on a data diet, but to adopt a regimen for keeping your organization’s databases lean and, as a result, your customer relationships healthy.


DeFi Opens New Possibilities for Banks Willing to Embrace Change

Banks have already shown that they are aware of the growing urgency to pivot fully into the digital age. FinTechs have been key drivers in the development of banking alternatives, offering customers new ways to pay and manage their money, and urgency is building for banks to adapt. Authentication plays a role in the customer experience, and DeFi can help improve trust between financial institutions (FIs) and the customers who trust them with their money, the panelists said. Vicandi told PYMNTS that the growth in DeFi is shaking the very core of financial services. ... “They’ve produced their own products, sell those products through their networks, to their own end customers.” But in the current environment and with the emergence of blockchains, Vicandi said that DeFi exists not only as a threat to the regional banks, which tend to lack the scale of their larger national and international brethren, but as an existential and cultural change to finance as we know it.


Solutions against overfitting for machine learning on tabular data

The simplest way to detect overfitting is to compare performance on your train and test data. If your model performs very well on the training data and poorly on the test data, it is probably overfitting. ... Cross-validation is an often-used alternative to the train/test split. The idea is to avoid a single train/test split by fitting the model multiple times, each time training on one part of the data and testing on the remaining part. At the end, every part of the data has been used for both training and testing. The cross-validation score is the average score across all the test evaluations performed throughout the process. Because it is based only on test data, it is a great way to obtain one metric instead of separate train and test metrics; in that sense, the cross-validation score plays the role of the model’s test score. On its own, however, the cross-validation score will not help you detect overfitting, because it reveals nothing about training performance. It can tell you that your test performance is poor, but not why.
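
To make the comparison concrete, here is a minimal sketch that puts a cross-validation score next to a training score to expose an overfitting gap. The dataset and model are illustrative choices, not taken from the article.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=42)

# Cross-validation: each of the 5 folds serves as test data exactly once.
cv_scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validation (test) accuracy: {cv_scores.mean():.3f}")

# Training accuracy for comparison: a large gap suggests overfitting.
model.fit(X, y)
print(f"Training accuracy: {model.score(X, y):.3f}")

A training accuracy near 1.0 with a noticeably lower cross-validation score is exactly the train/test gap described above.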


Working Together as Embedded Engineering Teams

Part of the value of these long-term pairings is that you build relationships and context. It is expensive to move people because it breaks these connections, yet most embedded organizations tend to move people more than they should. It can be frustrating to have embeds on your team if you can’t rely on them to stay, but it is tempting for the embed’s manager to move people around: they want to react to changing needs, and sometimes there aren’t enough people to go around. This can be a source of friction. A new embedded person faces a more complex situation than an ordinary new hire does: they have another manager, and commitments outside your team competing for their attention. Kick off the relationship with explicit conversations, and spend twice the care you would onboarding a generalist employee. Gus Shaffer offers this advice: “One thing that I found helpful when embedding staff engineers was to conduct a standard Kick-off process with a Statement of Work as output. Early on I learned the hard way that leaving success criteria loose leads to lingering engagements/disappointment/confusion.”


Microsoft under fire in Europe for OneDrive bundling; legal fight brewing

Led by Nextcloud, a coalition of European Union (EU) software and cloud organizations and companies has formed the “Coalition for a Level Playing Field.” “Microsoft’s combination of the dominant Windows (operating system) with the OneDrive (cloud) offering makes it nearly impossible to compete with their SaaS services,” Nextcloud said in a blog post. “It illustrates anti-competitive practices such as ‘self-preferencing’ on the basis of the market dominance of Windows.” Frank Karlitschek, CEO and founder of Nextcloud, also pointed out in his blog that over the last several years, Microsoft, Google, and Amazon have grown their market share to 66% of the total European market, while local European software and service providers have declined from 26% to 16%. “Behavior as described above is at the core of this dramatic level of growth of the global tech giants in Europe,” Karlitschek said. “This should be addressed without any further delay.” “There are deliberate, abusive practices and those practices are no accidents. Other Big Tech firms are showing similar conduct.”


RansomOps: Detecting Complex Ransomware Operations

It’s possible for organizations to defend themselves at each stage of a ransomware attack. In the delivery stage, for instance, they can block suspicious emails that carry malicious links or documents with malicious macros attached. The installation stage gives security teams the opportunity to detect files that attempt to create new registry values and to spot suspicious activity on endpoint devices. When the ransomware attempts to establish command and control, security teams can block outbound connection attempts to known malicious infrastructure. They can then use threat indicators to tie account compromise and credential-access attempts to familiar attack campaigns, and investigate network mapping and discovery attempts launched from unexpected accounts and devices. Defenders can flag resources that attempt to gain access to other network resources with which they don’t normally interact, and discover attempts to exfiltrate data as well as encrypt files.
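
As a concrete illustration of the command-and-control defense mentioned above, the sketch below checks outbound connection destinations against a threat-intelligence blocklist. The blocklist contents (documentation-range IPs) and the helper name are assumptions for illustration, not a real feed or product.

import ipaddress

# Assumed blocklist of known malicious infrastructure (documentation IPs).
KNOWN_MALICIOUS = {"203.0.113.7", "198.51.100.23"}

def should_block(dest_ip: str) -> bool:
    """Return True if an outbound connection should be blocked."""
    ipaddress.ip_address(dest_ip)  # raises ValueError on a malformed address
    return dest_ip in KNOWN_MALICIOUS

# Simulated outbound connection attempts from an endpoint.
for ip in ["192.0.2.10", "203.0.113.7"]:
    print(ip, "-> BLOCK" if should_block(ip) else "-> allow")
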


A unique quantum-mechanical interaction between electrons and topological defects in layered materials

"Once we first identified the anomaly in electronic conductivity, we remained very puzzled," says Edoardo Martino, the study's first author. "The material was behaving like a pretty standard metal whose electrons move along the plane, but when forced to move between planes its behavior became that of neither a metal nor an insulator, and was unclear what else to expect. It was thanks to a discussion with our fellow colleagues and theoretical physicists that we were pushed in the right direction: just apply a magnetic field and see what happens." After applying the magnetic field, the EPFL scientists realized that the more powerful the magnet, the more exotic the material's behavior becomes. They started experimenting with 14 Tesla superconducting magnets available at EPFL, but soon they realized they needed more. Working with the Laboratoire National des Champs Magnétiques Intenses in Grenoble and Toulouse, they accessed some of the world's most powerful magnets. 


The purpose of “purpose”

It starts with a goal that feels directly connected to the business, rather than a lofty statement that could be used by dozens or hundreds of other organizations. “It has to be real and tangible and live,” said author Margaret Heffernan when I interviewed her about her most recent book, Uncharted: How to Navigate the Future. “It has to be something people feel that they can do.” These are the difficult questions that every leader must wrestle with, even though they may seem philosophical and not directly relevant to the bottom line: Why do you matter? How do you make a difference? What would be lost if your organization went out of business? Healthcare businesses can make a credible case that they are improving or saving people’s lives. A nonprofit is often founded with a clear idea of the impact it wants to have. But the job can seem trickier if you are in a kind of commodity business. Imagine for a second that you run, say, a company that processes beets for sugar. How do you build a purpose around that? Paul Kenward took up that challenge. As managing director of British Sugar, which is based in the east of England, he faced the task of defining a sense of purpose for the company.


Report: No Patch for Microsoft Privilege Escalation Zero-Day

The flaw, found under the "Access work or school" settings, can only be triggered by clicking "export your management log files" and confirming by pressing "export," he says. "At that point, the Device Management Enrollment Service is triggered, running as Local System. This service first copies some log files to the MDM Diagnostics folder, and then packages them into a CAB file, whereby they're temporarily copied to the Temp folder. The resulting CAB file is then stored in the MDM Diagnostics folder, where the user can freely access it," Kolsek notes. He highlights that the copying of the CAB file to the Temp folder is the vulnerable step: a local attacker could create a soft link with the predictable file name used in the routine export process, pointing to a file or folder the attacker wants copied into a location they can access. "Since the Device Management Enrollment Service runs as Local System, it can read any system file that the attacker can't," Kolsek notes.
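
The symlink-redirect primitive Kolsek describes can be sketched in a few lines. Everything below is a hypothetical stand-in, including the file paths and the "predictable" name; it only illustrates why a privileged copy through a predictable temp path is dangerous, and is not the actual Windows service behavior.

import os
import tempfile

# Hypothetical file the unprivileged attacker cannot read directly.
protected_file = "/path/to/protected-file"

# Predictable temp path the privileged service is assumed to reuse.
predictable = os.path.join(tempfile.gettempdir(), "diag-export-0001.log")

# Attacker: pre-plant a soft link at the predictable path.
if os.path.lexists(predictable):
    os.remove(predictable)
os.symlink(protected_file, predictable)

# Privileged service: resolves the temp path while packaging log files
# and unknowingly follows the link into the protected file.
print("temp path resolves to:", os.path.realpath(predictable))
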


Design Patterns for Serverless Systems

In agile programming, as well as in a microservice-friendly environment, the general approach to designing and coding has changed from the monolith era. Instead of stuffing all the logic into a single functional unit, agile and microservice developers prefer more granular services or tasks obeying the single responsibility principle (SRP). Keeping this in mind, developers can decompose complex functionality into a series of separately manageable tasks. Each task gets some input from the client, executes its specific responsibility by consuming that input, and generates some output, which is then transferred to the next task. Following this principle, multiple tasks constitute a chain: each task transforms its input data into the required output, which becomes the input for the next task. These transformers are traditionally known as filters, and the connector that passes data from one filter to the next is known as a pipe. A very common usage of the pipes-and-filters pattern is the following: when a client request arrives at the server, the request payload must go through a filtering and authentication process.
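
To make the pattern concrete, here is a minimal pipes-and-filters sketch. The filter names (parse_request, authenticate, handle) and the composition helper are illustrative assumptions, not code from the article.

from functools import reduce
from typing import Any, Callable

Filter = Callable[[Any], Any]

def pipeline(*filters: Filter) -> Filter:
    """Compose filters left to right; the 'pipe' feeds each output onward."""
    return lambda data: reduce(lambda acc, f: f(acc), filters, data)

def parse_request(raw: str) -> dict:
    return {"payload": raw.strip()}

def authenticate(req: dict) -> dict:
    # Stand-in check; a real service would validate tokens or signatures.
    if not req["payload"]:
        raise ValueError("empty request rejected")
    return {**req, "authenticated": True}

def handle(req: dict) -> str:
    return f"processed: {req['payload']}"

handler = pipeline(parse_request, authenticate, handle)
print(handler("  hello serverless  "))  # -> processed: hello serverless

Because each filter obeys the SRP, individual filters can be reordered, replaced, or tested in isolation without touching the rest of the chain.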



Quote for the day:

"Strategy is not really a solo sport _ even if you_re the CEO." -- Max McKeown