Daily Tech Digest - March 31, 2022

OmniML releases platform for building lightweight ML models for the edge

“Today’s AI is too big, as modern deep learning requires a massive amount of computational resources, carbon footprint, and engineering efforts. This makes AI on edge devices extremely difficult because of the limited hardware resources, the power budget, and deployment challenges,” said Di Wu, cofounder and CEO of OmniML. “The fundamental cause of the problem is the mismatch between AI models and hardware, and OmniML is solving it from the root by adapting the algorithms for edge hardware,” Wu said. “This is done by improving the efficiency of a neural network using a combination of model compression, neural architecture rebalances, and new design primitives.” This approach, which grew out of the research of Song Han, an assistant professor of electrical engineering and computer science at MIT, uses a “deep compression” technique that reduces the size of the neural network without losing accuracy, so the solution can better optimize ML models for different chips and devices at the network’s edge.
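For readers unfamiliar with what “deep compression” involves in practice, the sketch below shows two generic ingredients, pruning and quantization, using standard PyTorch utilities. It is a minimal illustration of the general technique under stated assumptions, not OmniML’s product or Song Han’s actual pipeline.

```python
# Illustrative only: a generic compression pass (pruning + dynamic quantization)
# using PyTorch utilities. Not OmniML's tooling; it just shows the kind of
# techniques ("deep compression") the article refers to.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Prune 50% of the smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# Quantize the remaining weights to int8 for a smaller, faster edge deployment.
compressed = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(compressed)
```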


Kestra: A Scalable Open-Source Orchestration and Scheduling Platform

It is built upon well-known tools like Apache Kafka and ElasticSearch. The Kafka architecture provides scalability: every worker in a Kestra cluster is implemented as a Kafka consumer, and the state of a workflow execution is managed by an executor implemented with Kafka Streams. ElasticSearch is used as a database that allows displaying, searching and aggregating all the data. The concept of a workflow, called a Flow in Kestra, is at the heart of the platform. It is a list of tasks defined in a descriptive language based on YAML. It can describe simple workflows, but it also supports more complex scenarios such as dynamic tasks and flow dependencies. Flows can be triggered by events such as the results of other flows, the detection of files in Google Cloud Storage or the results of a SQL query. Flows can also be scheduled at regular intervals based on a cron expression. Furthermore, Kestra exposes an API to trigger a workflow from any application or simply start it directly from the Web UI.
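That last point lends itself to a quick illustration. The snippet below sketches how an application might kick off a Flow over HTTP; the endpoint path, namespace and flow id are assumptions for illustration only, so check the Kestra API documentation for the actual routes.

```python
# Sketch of triggering a Kestra flow over HTTP. The endpoint path, namespace
# and flow id below are illustrative assumptions, not Kestra's documented API.
import requests

KESTRA_URL = "http://localhost:8080"   # assumed local Kestra instance
NAMESPACE = "example.namespace"        # hypothetical namespace
FLOW_ID = "daily-report"               # hypothetical flow id

response = requests.post(
    f"{KESTRA_URL}/api/v1/executions/trigger/{NAMESPACE}/{FLOW_ID}",
    timeout=10,
)
response.raise_for_status()
print("Execution started:", response.json())
```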


Chaos Engineering Was Missing from Harness’ CI/CD Before ChaosNative Purchase

Chaos engineering has emerged as an increasingly essential process to maintain reliability for applications in cloud native environments. Unlike pre-production testing, chaos engineering involves determining when and how software might break in production by testing it in a non-production scenario. Think of chaos engineering as an overlap between reliability testing and experimenting with code and applications across a continuous integration/continuous delivery (CI/CD) pipeline, by obtaining metrics and data about how an application might fail when certain errors are induced. Specific to ChaosNative’s offerings that Harness has purchased, ChaosNative Litmus Enterprise has helped DevOps and site reliability engineers (SREs) to adopt chaos engineering tools that are self-managed, while the cloud service, ChaosNative Litmus Cloud, offers a hosted LitmusChaos control plane. Indeed, chaos engineering has become increasingly critical for DevOps teams, especially those seeking to increase agility by being able to apply chaos engineering to the very beginning of the production cycle.


CIO interview: Craig York, CTO, Milton Keynes University Hospital

“Going into multiple systems is a pain for our clinicians,” he says. “It’s not very efficient, and you need to keep track of the different patients that you’re looking at across the same timeframe. We have embedded the EDRM system within Cerner Millennium, so our EPR is the system of record for our clinicians. “You click a button and it logs you into the medical records and you can scan through those as you wish. You’re in the right record, there’s more efficiency, and there’s better patient safety as well.” The hospital’s internal IT team undertook the project working in close collaboration with software developers at CCube Solutions. York says it was a complex project, but by working together to achieve set goals, the integration of EDRM and EPR systems is now delivering big benefits for the hospital. “Sometimes in healthcare, we underplay how complicated things are,” he says. “The work with CCube is an example of where we’ve asked an organisation to step up and deliver on our requirements, and they’ve done it and they’ve proved their capabilities. We are now reaping rewards from that effort, so I’m thankful to them for that.”


Continuous Machine Learning

Now imagine a team of machine learning (ML) engineers and data scientists trying to achieve the same, but with an ML model. There are a few complexities involved: developing an ML model isn’t the same as developing software. Much of the code is essentially a black box, which makes it difficult to pinpoint issues. Verifying ML code is an art unto itself; the static code checks and code quality checks used in conventional software aren’t sufficient, so we also need data checks, sanity checks and bias verification. ... CML injects CI/CD-style automation into the workflow. Most of the configuration is defined in a cml.yaml config file kept in the repository. In the example below, this file specifies which actions should be performed when a feature branch is ready to be merged into the main branch. When a pull request is raised, GitHub Actions uses this workflow and performs the activities specified in the config file, such as running the train.py script or generating an accuracy report. CML works with a set of predefined functions, called CML Functions, that support the workflow, such as publishing these reports as comments or even launching a cloud runner to execute the rest of the workflow.
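The referenced config example isn’t reproduced in this excerpt, but to make the training side concrete, here is a hedged sketch of what such a train.py might look like. The dataset, model and report filename are illustrative assumptions, and the exact CML function used to post the report depends on how the workflow is configured.

```python
# Hypothetical train.py for the workflow described above: train a model, then
# write an accuracy report that a CML step in the pipeline could publish as a
# pull-request comment. Dataset, model and file names are assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))

# A CML function in the workflow can pick this file up and attach it to the PR.
with open("report.md", "w") as f:
    f.write(f"## Model report\n\n- Test accuracy: {accuracy:.3f}\n")
```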


Cybercriminals’ phishing kits make credential theft easier than ever

Phishing kits make it easier for cybercriminals without technical knowledge to launch phishing campaigns. Yet another reason lies in the fact that phishing pages are frequently detected within a few hours of going live and are quickly shut down by providers. The hosting providers are often alerted by internet users who receive phishing emails and pull the phishing page down as soon as possible. Phishing kits make it possible to host multiple copies of phishing pages faster, enabling the fraud to stay up for longer. Finally, some phishing kits provide anti-detection systems. They might be configured to refuse connections from known bots belonging to security or anti-phishing companies, or to search engines. Once indexed by a search engine, a phishing page is generally taken down or blocked faster. Some kits also use geolocation as a countermeasure: a phishing page targeting speakers of one language will not be shown to visitors using another language. And some phishing kits use slight or heavy obfuscation to avoid detection by automated anti-phishing solutions.


What is the Spanning Tree Protocol?

Spanning Tree is a forwarding protocol for data packets. It’s one part traffic cop and one part civil engineer for the network highways that data travels through. It sits at Layer 2 (data link layer), so it is simply concerned with moving packets to their appropriate destination, not what kind of packets are being sent, or the data that they contain. Spanning Tree has become so ubiquitous that its use is defined in the IEEE 802.1D networking standard. As defined in the standard, only one active path can exist between any two endpoints or stations in order for them to function properly. Spanning Tree is designed to eliminate the possibility that data passing between network segments will get stuck in a loop. In general, loops confuse the forwarding algorithm installed in network devices, making it so that the device no longer knows where to send packets. This can result in the duplication of frames or the forwarding of duplicate packets to multiple destinations. Messages can get repeated. Communications can bounce back to a sender. It can even crash a network if too many loops start occurring, eating up bandwidth without any appreciable gains while blocking other non-looped traffic from getting through.
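A toy computation shows why a single tree removes loops. The sketch below is not the real 802.1D behavior (switches elect a root bridge and exchange BPDUs to converge on the tree); it simply builds one spanning tree over a small switch topology and shows which redundant link would be blocked.

```python
# Toy illustration (not the real 802.1D BPDU exchange): given switches and
# redundant links, keep only the links on a spanning tree rooted at one switch,
# so every pair of switches has exactly one active path and no loops remain.
from collections import deque

links = {("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")}  # A-B-C form a loop

def active_links(root, links):
    neighbors = {}
    for u, v in links:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    visited, tree, queue = {root}, set(), deque([root])
    while queue:  # breadth-first search picks one loop-free path to each switch
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in visited:
                visited.add(v)
                tree.add(tuple(sorted((u, v))))
                queue.append(v)
    return tree

tree = active_links("A", links)
print("Active links:", tree)                      # e.g. {('A','B'), ('A','C'), ('C','D')}
blocked = {tuple(sorted(l)) for l in links} - tree
print("Blocked (redundant) links:", blocked)      # {('B','C')} stays idle as a backup
```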


5 elements of employee experience that impact customer experience and revenue growth

Companies are leaving money on the table - Breaking silos between employee experience and customer experience can lead to a massive opportunity for revenue growth of up to 50% or more. Companies think they have to choose between prioritizing employee or customer experiences - And customer experience is winning. Approximately nine in 10 C-suite members (88%) say employees are encouraged to focus on customers' needs above all else, even though the C-suite knows that a powerful customer experience starts with an employee-first approach. Five core elements of employee experience impact customer experience and growth - Trust, C-Suite Accountability, Alignment, Recognition, and Seamless Technology. There is a disconnect between C-suite perception and employee experience - 71% of C-suite leaders report their employees are engaged with their work when in reality, only 51% of employees say they are; 70% of leaders report their employees are happy, while only 44% of employees report they are.


Don’t take data for granted

Thanks to the emergence of cloud-native architecture, where we containerize, deploy microservices, and separate the data and compute tiers, we now bring all that together and lose the complexity. Dedicate some nodes as Kafka sinks, generate change data capture feeds on other nodes, and persist data on still others, and it’s all under a single umbrella on the same physical or virtual cluster. And so as data goes global, we have to worry about governing it. Increasingly, there are mandates for keeping data inside the country of origin, and depending on the jurisdiction, varying rights of privacy and requirements for data retention. Indirectly, restrictions on data movement across national boundaries are prompting the question of hybrid cloud. There are other rationales for data gravity, especially with established back office systems managing financial and customer records, where the interdependencies between legacy applications may render it impractical to move data into a public cloud. Those well-entrenched ERP systems and the like represent the final frontier for cloud adoption.


What makes a digital transformation project ethical?

One of the best ways to approach ethical digital transformation is to look to your community. This is your core user base and might be made up of customers, peers, and your own people from within your organisation. Though it can be a time-consuming process, engaging with the community on your digital transformation plans has a number of benefits when driving forward an ethical initiative. Crucially, consulting both internal and external stakeholders can help to identify any unanticipated policy concerns or technical issues. This is inherently valuable from a technical standpoint, as building out channels of communication and feedback allows you to fix mistakes while remaining agile and constructive. Raising community concerns is especially important when ethics are a part of your organisation’s mission statement. Not only does gathering feedback highlight any potential hidden concerns around the digital products you will be building, but engagement also goes hand in hand with both perceived and actual transparency, as gathering valuable feedback requires a degree of openness about the project.



Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford

Daily Tech Digest - March 30, 2022

The Promise of Analog AI

In neural networks, the most common operator is multiply-accumulate. You multiply sets of numbers and then sum them up, as in the matrix multiplication that is the backbone of deep learning. If you store the inputs as arrays, you can actually do this in one swoop by utilizing physical engineering laws (Ohm’s Law to multiply, Kirchhoff’s Law to sum) on a full matrix in parallel. This is the crux of analog AI. If it were that easy, analog AI would already be used. Why aren’t we using analog AI yet? ... Right now, analog AI works successfully for multiply-accumulate operations. For other operations, it is still ideal to provide their own circuitry, as programming nonvolatile memory devices takes longer and results in faster wear and tear than on traditional devices. Inference does not typically require reprogramming these devices, since weights rarely change. For training, however, they would require constant reconfiguration. In addition, analog’s variability results in a mismatch between error in forward propagation (inference) and backpropagation (calculating error during training). This can cause issues during training.
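The multiply-accumulate idea can be stated in a few lines of NumPy. The sketch below is an idealized model of what a resistive crossbar computes, with a crude noise term standing in for the device variability discussed above; it illustrates the math, not any particular analog chip.

```python
# Idealized sketch of the multiply-accumulate a resistive crossbar performs.
# Weights are stored as conductances G; inputs are applied as voltages V.
# Ohm's law gives per-device currents G_ij * V_j, and Kirchhoff's current law
# sums them along each output line, i.e. a matrix-vector product in one step.
# Real analog hardware adds the noise and variability discussed above.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductances encoding a 4x8 weight matrix
V = rng.uniform(-1.0, 1.0, size=8)       # input activations applied as voltages

I_ideal = G @ V                          # what the crossbar computes "for free"

# Crude noise model: device-to-device variability perturbs each conductance.
G_noisy = G * (1.0 + rng.normal(0.0, 0.05, size=G.shape))
I_noisy = G_noisy @ V

print("ideal :", np.round(I_ideal, 3))
print("noisy :", np.round(I_noisy, 3))
```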


Computing’s new logic: The distributed data cloud

A common pattern in analytic ecosystems today sees data produced in different areas of the business pushed to a central location. The data flows into data lakes and is cordoned in data warehouses, managed by IT personnel. The original producers of the data, often subject-matter experts within the business domain, effectively lose control or become layers removed from data meaningful to their work. This separation diminishes the data’s value over time, with data diverted away from its business consumers. Imagine a new model that flips this ecosystem on its head by breaking down barriers and applying common standards everywhere. Consider an analytics stack that could be deployed within a business domain; it remains there, owned by team members in that business domain, but centrally operated and supported by IT. What if all data products generated there were completely managed within that domain? What if other business teams could simply subscribe to those data products, or get API access to them? An organizational pattern — data mesh — that promotes this decentralization of data product ownership has received a great deal of attention recently.


New program bolsters innovation in next-generation artificial intelligence hardware

Based on use-inspired research involving materials, devices, circuits, algorithms, and software, the MIT AI Hardware Program convenes researchers from MIT and industry to facilitate the transition of fundamental knowledge to real-world technological solutions. The program spans materials and devices, as well as architecture and algorithms enabling energy-efficient and sustainable high-performance computing. “As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.” The inaugural members of the program are companies from a wide range of industries including chip-making, semiconductor manufacturing equipment, AI and computing services, and information systems R&D organizations.


The Data Center of the Future

The data center of the future will have to be vendor agnostic. No matter the hardware or underlying virtual machine or container technology, operating and administration capabilities should be seamless. This flexibility enables companies to streamline their deployment and maintenance processes and prevents vendor lock-in. And because no cloud provider is present everywhere in the world, the ideal data center should have the ability to run in any environment in order to achieve the distribution requirements discussed above. For that reason, new data centers will largely be made of open source components in order to achieve such a level of interoperability. Distribution and flexibility should not come at the expense of ease of use. Data centers must allow for seamless cloud native capabilities, such as the ability to scale computing and storage resources on demand, as well as API access for integrations. While this is the norm for containers and virtual machines on servers, the same capabilities should apply across environments, even for remote devices such as IoT and edge servers.


Exchange Servers Speared in IcedID Phishing Campaign

The new campaign starts with a phishing email that includes a message about an important document and a password-protected ZIP archive file attached, the password for which is included in the email body. The email seems extra convincing to users because it uses what’s called “thread hijacking,” in which attackers use a portion of a previous thread from a legitimate email found in the inbox of the stolen account. “By using this approach, the email appears more legitimate and is transported through the normal channels which can also include security products,” researchers wrote. The majority of the originating Exchange servers that researchers observed in the campaign appear to be unpatched and publicly exposed, “making the ProxyShell vector a good theory,” they wrote. ProxyShell is a remote-code execution (RCE) bug discovered in Exchange Servers last year that has since been patched but continues to be targeted by attackers. Once unzipped, the attached file includes a single “ISO” file with the same file name as the ZIP archive, created shortly before the email was sent.


Web3 and the future of data portability: Rethinking user experiences and incentives on the internet

Web3 offers many advantages. Namely, data flows freely and is publicly verifiable. Companies no longer need to build user authentication using things like passwords into their applications. Instead, users can have a single account for the internet in their Web3 wallet: think of this as a “bring-your-own-account” architecture where the user verifies their account as they browse different websites, without the need to create a unique username and password for every site. Because authentication is based on public-key cryptography, certain security gaps with the Web2 approach to authentication (e.g., weak passwords and password reuse) are nonexistent. Users don’t have to remember passwords or fill out multiple screens when they sign up for an application. As with everything in tech, there are disadvantages, too. Web3 eliminates the password, but it introduces other weaknesses. Anybody who has tried to set up a Web3 wallet like MetaMask knows that the user experience (UX) can be foreign and unfriendly. 
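As a concrete picture of password-free, signature-based login, the sketch below walks through a generic challenge-response flow with ECDSA keys using the Python cryptography package. Real wallets such as MetaMask sign Ethereum-formatted messages with secp256k1 accounts and follow their own standards, so treat this only as an illustration of the idea that the site verifies a signature rather than storing a password.

```python
# Simplified challenge-response sketch of "bring-your-own-account" login.
# This generic ECDSA example only illustrates the idea; real Web3 wallets use
# Ethereum-specific message signing on top of secp256k1 keys.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# User side: the "wallet" holds a private key; the site only sees the public key.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

# Site side: issue a one-time challenge (nonce) for this login attempt.
challenge = os.urandom(32)

# User side: prove control of the account by signing the challenge.
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Site side: verify the signature against the registered public key.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Login accepted: user controls the key for this account")
except InvalidSignature:
    print("Login rejected")
```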


Building a Culture of Full-Service Ownership

At its core, service ownership is about connecting humans to technologies and services and understanding how they map to critical business outcomes. Achieving this level of ownership requires an understanding of what and who delivers critical business services. A clear understanding of what the boundaries and dependencies are of a given service along with what value it delivers is the starting point. And once it’s in production, a clear definition of who is responsible for it at any given time and what its impact is if it isn’t running optimally or, worst case, fails altogether. Empowering developers with this information brings DevOps teams much closer to their customers, the business and the value they create which, in turn, leads to better application design and development. Building a culture around a full-service ownership model keeps developers closely tied to the applications they build and, therefore, closer to the value they deliver. Within the organization, this type of ownership breaks down long-established centralized and siloed teams into cross-functional full-service teams.


Strategies for Assessing and Prioritizing Security Risks Such as Log4j

After gaining full visibility, it’s not uncommon for organizations to see tens of thousands of vulnerabilities across large production infrastructures. However, a list of theoretical vulnerabilities is of little practical use. Of all the vulnerabilities an enterprise could spend time fixing, it's important to identify which are the most critical to the security of the application and therefore must be fixed first. To be able to determine this, it's important to understand the difference between a vulnerability, which is a weakness in deployed software that could be exploited by attackers for a particular result, and exploitability, which indicates the presence of an attack path that can be leveraged by an attacker to achieve a tangible gain. Vulnerabilities that require high-privilege, local access in order to exploit are generally of lesser concern because an attack path would be difficult to achieve for a remote attacker. Of higher concern are vulnerabilities that can be triggered by, for example, remote network traffic that would generally not be filtered by firewall devices, and which are present on hosts that routinely receive traffic directly from untrusted, internet sources.
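A toy scoring heuristic makes the distinction concrete. The fields and weights below are illustrative assumptions, not CVSS, EPSS or any vendor's scoring model; the point is simply that exploitability factors, not raw vulnerability counts, drive the ordering.

```python
# Toy prioritization heuristic based on the distinction drawn above: a finding
# is urgent when an attack path exists, not merely when a vulnerable package is
# present. Fields and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    remotely_triggerable: bool     # reachable via network traffic?
    host_internet_exposed: bool    # does the host receive untrusted traffic?
    requires_local_privilege: bool

def priority(f: Finding) -> int:
    score = 0
    score += 3 if f.remotely_triggerable else 0
    score += 2 if f.host_internet_exposed else 0
    score -= 2 if f.requires_local_privilege else 0
    return score

findings = [
    Finding("Log4j on public API gateway", True, True, False),
    Finding("local-only kernel bug on build server", False, False, True),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):+d}  {f.name}")
```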


Digital Forensics Basics: A Practical Guide for Kubernetes DFIR

Containerization has gone mainstream, and Kubernetes won out as the orchestration leader. Building and operating applications this way provides massive elasticity, scalability, and efficiency in an ever-accelerating technology world. Although DevOps teams have made great strides in harnessing the new tools, the benefits don’t come without challenges and tradeoffs. Among them is the question of how to perform DFIR in Kubernetes, extract all relevant data, and clean up your systems when a security incident occurs in one of these modern environments. ... Digital Forensics and Incident Response (DFIR) is the cybersecurity field that covers the techniques and best practices to adopt when an incident occurs, focused on the identification, inspection, and response to cyberattacks. Maybe you are familiar with DFIR on physical machines or on information system hardware. Its guidelines are based on carefully analyzing and storing the digital evidence of a security breach, but also responding to attacks in a methodical and timely manner. All of this minimizes the impact of an incident, reduces the attack surface, and prevents future episodes.
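As a starting point for the evidence-gathering side, the sketch below captures a suspicious pod's spec, logs and namespace events with kubectl before any cleanup. The namespace and pod name are hypothetical, and a real DFIR workflow would also hash and sign the artifacts, capture node-level disk and memory, and isolate the workload.

```python
# Minimal evidence-capture sketch for a suspicious pod, assuming kubectl is
# installed and configured. Namespace and pod name are hypothetical examples.
import subprocess
from pathlib import Path

NAMESPACE = "production"            # assumed namespace of the affected workload
POD = "payments-7d9f88c5b-x2k4m"    # hypothetical pod name

evidence_dir = Path(f"evidence-{POD}")
evidence_dir.mkdir(exist_ok=True)

commands = {
    "pod.yaml": ["kubectl", "get", "pod", POD, "-n", NAMESPACE, "-o", "yaml"],
    "logs.txt": ["kubectl", "logs", POD, "-n", NAMESPACE, "--all-containers"],
    "events.txt": ["kubectl", "get", "events", "-n", NAMESPACE,
                   "--sort-by=.metadata.creationTimestamp"],
}

for filename, cmd in commands.items():
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    (evidence_dir / filename).write_text(result.stdout)
    print(f"captured {filename}")
```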


Security tool guarantees privacy in surveillance footage

Privid allows analysts to use their own deep neural networks that are commonplace for video analytics today. This gives analysts the flexibility to ask questions that the designers of Privid did not anticipate. Across a variety of videos and queries, Privid was accurate within 79 to 99 percent of a non-private system. “We’re at a stage right now where cameras are practically ubiquitous. If there's a camera on every street corner, every place you go, and if someone could actually process all of those videos in aggregate, you can imagine that entity building a very precise timeline of when and where a person has gone,” says MIT CSAIL ... Privid introduces a new notion of “duration-based privacy,” which decouples the definition of privacy from its enforcement — with obfuscation, if your privacy goal is to protect all people, the enforcement mechanism needs to do some work to find the people to protect, which it may or may not do perfectly. With this mechanism, you don’t need to fully specify everything, and you're not hiding more information than you need to.
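Privid's published design builds on differential privacy, and the generic sketch below shows the flavor of releasing only noisy aggregates from a video query. It is not Privid's actual algorithm, and in particular it omits the duration-based bounding of each person's contribution that the system introduces.

```python
# Generic differentially-private aggregation sketch, not Privid's algorithm:
# the analyst receives a noisy aggregate (e.g. "people crossing per hour")
# rather than exact per-frame results, so no single individual's presence can
# be pinned down from the released answer.
import numpy as np

rng = np.random.default_rng(7)

true_hourly_counts = np.array([12, 30, 45, 28])  # hypothetical query output

epsilon = 1.0        # privacy budget: smaller = more noise, stronger privacy
sensitivity = 1.0    # assume one person changes any count by at most 1

noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                    size=true_hourly_counts.shape)
released = true_hourly_counts + noise

print("released (noisy) counts:", np.round(released, 1))
```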



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead" -- John Paul Warren

Daily Tech Digest - March 29, 2022

How Platform Ops Teams Should Think About API Strategy

Rules and policies that control how APIs can connect with third parties and internally are a critical foundation of modern apps. At a high level, connectivity policies dictate the terms of engagement between APIs and their consumers. At a more granular level, Platform Ops teams need to ensure that APIs can meet service-level agreements and respond to requests quickly across a distributed environment. At the same time, connectivity overlaps with security: API connectivity rules are essential to ensure that data isn’t lost or leaked, business logic is not abused and brute-force account takeover attacks cannot target APIs. This is the domain of the API gateway. Unfortunately, most API gateways are designed primarily for north-south traffic. East-west traffic policies and rules are equally critical because in modern cloud native applications, there’s actually far more east-west traffic among internal APIs and microservices than north-south traffic to and from external customers.


What will it take to stop fraud in the metaverse?

While some fraud in the metaverse can be expected to resemble the scams and tricks of our ‘real-world’ society, other types of fraud must be quickly understood if they are to be mitigated by metaverse makers. When Facebook’s Metaverse first launched, investors rushed to pour billions of dollars into buying acres of land. The so-called ‘virtual real estate’ sparked a land boom which saw $501 million in sales in 2021. This year, that figure is expected to grow to $1 billion. Selling land in the metaverse works like this: pieces of code are partitioned to create individual ‘plots’ within certain metaverse platforms. These are then made available to purchase as NFTs on the blockchain. While we might have laughed when one buyer paid hundreds of thousands of dollars to be Snoop Dogg’s neighbour in the metaverse, this is no laughing matter when it comes to security. Money spent in the metaverse is real, and fraudsters are out to steal it. One of the dangers of the metaverse is that, while the virtual land and property aren’t real, their monetary value is. On purchase, they become real assets linked to your account. Therefore, fraud doesn’t look like it used to.


How IoT data is changing legacy industries – and the world around us

Massive, unstructured IoT data workloads — typically stored at the edge or on-premise — require infrastructure that not only handles big data inflows, but directs traffic to ensure that data gets where it needs to be without disruption or downtime. This is no easy feat when it comes to data sets in the petabyte and exabyte range, but this is the essential challenge: prioritizing the real-time activation of data at scale. By building a foundation that optimizes the capture, migration, and usage of IoT data, these companies can unlock new business models and revenue streams that fundamentally alter their effects on the world around us. ... As legacy companies start to embrace their IoT data, cloud service providers should take notice. Cloud adoption, long understood to be a priority among businesses looking to better understand their consumers, will become increasingly central to the transformation of traditional companies. The cloud and the services delivered around it will serve as a highway for manufacturers or utilities to move, activate, and monetize exabytes of data that are critical to businesses across industries. 


The security gaps that can be exposed by cybersecurity asset management

There is a plethora of tools being used to secure assets, including desktops, laptops, servers, virtual machines, smartphones, and cloud instances. But despite this, companies can struggle to identify which of their assets are missing the relevant endpoint protection platform/endpoint detection and response (EPP/EDR) agent defined by their security policy. They may have the correct agent but fail to understand why its functionality has been disabled, or they are using out-of-date versions of the agent. The importance of understanding which assets are missing the proper security tool coverage and which are missing the tools’ functionality cannot be overstated. If a company invests in security and then suffers a malware attack because it has failed to deploy the endpoint agent, it is a waste of valuable resources. Agent health and cyber hygiene depend on knowing which assets are not protected, and this can be challenging. The admin console of an EPP/EDR can provide information about which assets have had the agent installed, but it does not necessarily prove that the agent is performing as it should.
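In practice the gap analysis is a comparison of two inventories. The sketch below uses hard-coded sample data and a naive version comparison purely for illustration; a real check would pull the asset list from a CMDB or cloud APIs and the agent status from the EPP/EDR console's API.

```python
# Simple coverage-gap check: compare the full asset inventory against what the
# EPP/EDR console reports. Data and field names are illustrative assumptions.
inventory = {"web-01", "web-02", "db-01", "laptop-114", "vm-build-07"}

edr_agents = {
    "web-01": {"version": "7.4.2", "enabled": True},
    "db-01": {"version": "6.9.0", "enabled": False},   # agent present but disabled
    "laptop-114": {"version": "7.4.2", "enabled": True},
}

missing_agent = inventory - set(edr_agents)
disabled_agent = {h for h, a in edr_agents.items() if not a["enabled"]}
# Naive string comparison, fine for this illustration only.
outdated_agent = {h for h, a in edr_agents.items() if a["version"] < "7.0.0"}

print("No agent installed:", sorted(missing_agent))
print("Agent disabled:    ", sorted(disabled_agent))
print("Agent out of date: ", sorted(outdated_agent))
```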


Google AI and UC Berkeley Researchers Introduce A Deep Learning Approach Called ‘PRIME’

To overcome this restriction, PRIME develops a robust prediction model that isn’t easily fooled by adversarial cases. To architect accelerators, this model is then simply optimized using any standard optimizer. More crucially, unlike previous methods, PRIME can learn what not to construct by utilizing existing datasets of infeasible accelerators. This is accomplished by supplementing the learned model’s supervised training with extra loss terms that specifically penalize the learned model’s value on infeasible accelerator designs and adversarial cases during training. This method is similar to adversarial training. One of the main advantages of a data-driven approach is that it enables learning highly expressive and generalist optimization objective models that generalize across target applications. Furthermore, these models have the potential to be effective for new applications for which a designer has never attempted to optimize accelerators. To train PRIME to generalize to unseen applications, the trained model was conditioned on a context vector that identifies the particular neural net application one wants to accelerate.
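The extra loss term can be sketched schematically. The code below is not the authors' implementation; it only shows the shape of the idea, namely supervised regression on feasible designs plus a penalty (with an assumed weight) that pushes the surrogate's predicted value down on infeasible or adversarially-found designs.

```python
# Schematic sketch (not the authors' code) of the idea described above: train a
# surrogate model on feasible accelerator designs, and add a loss term that
# penalizes its predicted value on infeasible or adversarially-found designs.
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Hypothetical batches: feasible designs with measured performance, plus
# infeasible designs for which the model should not predict high value.
feasible_x = torch.randn(32, 16)
feasible_y = torch.randn(32, 1)
infeasible_x = torch.randn(8, 16)

alpha = 0.1  # weight of the infeasibility penalty (assumed value)

for step in range(100):
    optimizer.zero_grad()
    supervised = nn.functional.mse_loss(surrogate(feasible_x), feasible_y)
    # Penalize optimistic predictions on designs known to be infeasible.
    penalty = surrogate(infeasible_x).mean()
    loss = supervised + alpha * penalty
    loss.backward()
    optimizer.step()
```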


Use zero trust to fight network technical debt

In a ZT environment, the network not only doesn’t trust a node new to it, but it also doesn’t trust nodes that are already communicating across it. When a node is first seen by a ZT network, the network will require that the node go through some form of authentication and authorization check. Does it have a valid certificate to prove its identity? Is it allowed to be connected where it is based on that identity? Is it running valid software versions, defensive tools, etc.? It must clear that hurdle before being allowed to communicate across the network. In addition, the ZT network does not assume that a trust relationship is permanent or context free: Once it is on the network, a node must be authenticated and authorized for every network operation it attempts. After all, it may have been compromised between one operation and the next, or it may have begun acting aberrantly and had its authorizations stripped in the preceding moments, or the user on that machine may have been fired.
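The per-operation discipline described here can be sketched as a policy check that runs on every request rather than once per session. The sketch below is schematic pseudologic, not any particular zero-trust product, and the posture attributes are illustrative assumptions.

```python
# Schematic sketch of the per-operation checks described above; not a specific
# zero-trust product. Every request re-evaluates identity, device posture and
# authorization instead of trusting a node once at connect time.
from dataclasses import dataclass

@dataclass
class Node:
    identity_cert_valid: bool
    software_patched: bool
    edr_running: bool
    roles: set

def authorize(node: Node, operation: str, required_role: str) -> bool:
    # Re-checked for every operation, not just at first connection.
    if not node.identity_cert_valid:
        return False
    if not (node.software_patched and node.edr_running):
        return False
    return required_role in node.roles

workstation = Node(True, True, True, roles={"billing-read"})
print(authorize(workstation, "read-invoices", "billing-read"))   # True
workstation.edr_running = False   # posture degrades mid-session
print(authorize(workstation, "read-invoices", "billing-read"))   # now False
```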


IT professionals wary of government campaign to limit end-to-end encryption

Many industry experts said they were worried about the possibility of increased surveillance from governments, police and the technology companies that run the online platforms. Other concerns were around the protection of financial data from hackers if end-to-end encryption was undermined. There were concerns that wider sharing of “secret keys”, or centralised management of encryption processes, would significantly increase the risk of compromising the confidentiality they are meant to preserve. BCS’s Mitchell said: “It’s odd that so much focus has been on a magical backdoor when other investigative tools aren’t being talked about. Alternatives should be looked at before limiting the basic security that underpins everyone’s privacy and global free speech.” Government and intelligence officials are advocating, among other ways of monitoring encrypted material, technology known as client-side scanning (CSS) that is capable of analysing text messages on phone handsets and computers before they are sent by the user.


Hypernet Labs Scales Identity Verification and NFT Minting

A majority of popular NFT projects so far have been focused on profile pictures and art projects, where early adopters have shown a willingness to jump through hoops and bear the burden of high transaction fees on the Ethereum Network. There’s growing enthusiasm for NFTs that serve more utilitarian purposes, like unlocking bonus content for subscription services or as a unique token to allow access to experiences and events. With the release of Hypernet.Mint, Hypernet Labs is taking the same approach toward simplifying the user experience that it applied to Hypernet.ID. Hypernet.Mint offers lower-cost deployment by leveraging Layer 2 blockchains like Polygon and Avalanche that don’t have the same high fee structure as the Ethereum mainnet. The company also helps dApps create a minting strategy that aligns with business goals, supporting either mass minting or minting that is based on user onboarding flows that may acquire additional users over time. “We’re working on a lot of onboarding flow for new types of users, which comes back to ease of use for users,” Ravlich said.


How decision intelligence is helping organisations drive value from collected data

While AI can be a somewhat nebulous concept, decision intelligence is more concrete. That’s because DI is outcome-focused: a decision intelligence solution must deliver a tangible return on investment before it can be classified as DI. A model for better stock management that gathers dust on a data scientist’s computer isn’t DI. A fully productionised model that enables a warehouse team to navigate the pick face efficiently and decisively, saving time and capital expense — that’s decision intelligence. Since DI is outcome focused, it requires models to be built with an objective in mind and so addresses many of the pain points for businesses that are currently struggling to quantify value from their AI strategy. By working backwards from an objective, businesses can build needed solutions and unlock value from AI quicker. ... Global companies, including Pepsico, KFC and ASOS have already emerged as early adopters of DI, using it to increase profitability and sustainability, reduce capital requirements, and optimise business operations.


Insights into the Emerging Prevalence of Software Vulnerabilities

Software quality is not always an indicator of secure software. A measure of secure software is the number of vulnerabilities uncovered during testing and after production deployment. Software vulnerabilities are a sub-category of software bugs that threat actors often exploit to gain unauthorized access or perform unauthorized actions on a computer system. Authorized users also exploit software vulnerabilities, sometimes with malicious intent, targeting one or more vulnerabilities known to exist on an unpatched system. These users can also unintentionally exploit software vulnerabilities by inputting data that is not validated correctly, subsequently compromising its integrity and the reliability of those functions that use the data. Vulnerability exploits target one or more of the three security pillars: Confidentiality, Integrity, or Availability, commonly referred to as the CIA Triad. Confidentiality entails protecting data from unauthorized disclosure; Integrity entails protecting data from unauthorized modification and facilitates data authenticity.



Quote for the day:

"To be a good leader, you don't have to know what you're doing; you just have to act like you know what you're doing." -- Jordan Carl Curtis

Daily Tech Digest - March 28, 2022

Scientists Work To Turn Noise on Quantum Computers to Their Advantage

“We know very little about quantum computers and noise, but we know really well how this molecule behaves when excited,” said Hu. “So we use quantum computers, which we don’t know much about, to mimic a molecule which we are familiar with, and we see how it behaves. With those familiar patterns we can draw some understanding.” This operation gives a more ‘bird’s-eye’ view of the noise that quantum computers simulate, said Scott Smart, a Ph.D. student at the University of Chicago and first author on the paper. The authors hope this information can help researchers as they think about how to design new ways to correct for noise. It could even suggest ways that noise could be useful, Mazziotti said. For example, if you’re trying to simulate a quantum system such as a molecule in the real world, you know it will be experiencing noise—because noise exists in the real world. Under the previous approach, you use computational power to add a simulation of that noise. “But instead of building noise in as additional operation on a quantum computer, maybe we could actually use the noise intrinsic to a quantum computer to mimic the noise in a quantum problem that is difficult to solve on a conventional computer,” Mazziotti said.


How to Bring Shadow Kubernetes IT into the Light

Running container-based applications in production goes well beyond Kubernetes. For example, IT operations teams often require additional services for tracing, logs, storage, security and networking. They may also require different management tools for Kubernetes distribution and compute instances across public clouds, on-premises, hybrid architectures or at the edge. Integrating these tools and services for a specific Kubernetes cluster requires that each tool or service is configured according to that cluster’s use case. The requirements and budgets for each cluster are likely to vary significantly, meaning that updating or creating a new cluster configuration will differ based on the cluster and the environment. As Kubernetes adoption matures and expands, there will be a direct conflict between admins, who want to lessen the growing complexity of cluster management, and application teams, who seek to tailor Kubernetes infrastructure to meet their specific needs. What magnifies these challenges even further is the pressure of meeting internal project deadlines — and the perceived need to use more cloud-based services to get the work done on time and within budget.


Managing the complexity of cloud strategies

Both polycloud and sky computing are strategies for managing the complexities of a multicloud deployment. Which model is better? Polycloud is best at leveraging the strengths of each individual cloud provider. Because each cloud provider is chosen based on its strength in a particular cloud specialty, you get the best of each provider in your applications. This also encourages a deeper integration with the cloud tools and capabilities that each provider offers. Deeper integration means better cloud utilization, and more efficient applications. Polycloud comes at a cost, however. The organization as a whole, and each development and operations person within the organization, need deeper knowledge about each cloud provider that is in use. Because an application uses specialized services from multiple providers, the application developers need to understand the tools and capabilities of all of the cloud providers. Sky computing relieves this knowledge burden on application developers. Most developers in the organization need to know and understand only the sky API and the associated tooling and processes.


US, EU Agree to a New Data-Sharing Framework

The Biden administration and the European Commission said in a joint statement issued on Friday that the new framework "marks an unprecedented commitment on the U.S. side to implement reforms that will strengthen the privacy and civil liberties protections applicable to U.S. signals intelligence activities." Signals intelligence involves the interception of electronic signals/systems used by foreign targets. In the new framework, the U.S. reportedly will apply new "safeguards" to ensure signals surveillance activities "are necessary and proportionate in the pursuit of defined national security objectives," the statement says. It also will establish a two-level "independent redress mechanism" with binding authority, which it said will "direct remedial measures, and enhance rigorous and layered oversight of signals intelligence activities." The efforts, the statement says, places limitations on surveillance. Officials said the framework reflects more than a year of negotiations between U.S. Secretary of Commerce Gina Raimondo and EU Commissioner for Justice Didier Reynders.


Google's tightening key security on Android with a longer (but better) chain of trust

There's a software key stored on basically every Android phone, inside a secure element and separated from your own data — separately from Android itself, even. The bits required for that key are provided by the device manufacturer when the phone is made, signed by a root key that's provided by Google. In more practical terms, apps that need to do something sensitive can prove that the bundled secure hardware environment can be trusted, and this is the basis on which a larger chain of trust can be built, allowing things like biometric data, user data, and secure operations of all kinds to be stored or transmitted safely. Previously, Android devices that wanted to enjoy this process needed to have that key securely installed at the factory, but Google is changing from in-factory private key provisioning to in-factory public key extraction with over-the-air certificate provisioning, paired with short-lived certificates. As even the description makes it sound, this new change is a more complicated system, but it fixes a lot of issues in practice.


How Do I Demonstrate the ROI of My Security Program?

The first is to change the perception of security’s role as the “office of NO.” Security programs need to embrace that their role is to ENABLE the business to take RISKS, and not to eliminate risks. For example, if a company needs to set up operations in a high-risk country, with risky cyber laws or operators, the knee jerk reaction of most security teams is to say “no.” In reality, the job of the security team is to enable the company to take that risk by building sound security programs that can identify, detect, and respond to cybersecurity threats. When company leaders see security teams trying to help them achieve their business goals, they are better able to see the value of a strong cybersecurity program. Similarly, cybersecurity teams must understand their company’s business goals and align security initiatives accordingly. Too many security teams try to push their security initiatives as priorities for the business, when, in fact, those initiatives may be business negatives.


Extended Threat Intelligence: A new approach to old school threat intelligence

One of the challenges of being a security leader is making the most informed decision to choose from a diverse pool of technologies to prevent data breaches. As the trend of consolidation in cybersecurity accelerates, solutions that provide similar results but are listed under different market definitions make the job harder. Meanwhile, security practitioners grapple with a multitude of technologies that generate alerts from various vendors, eventually causing loss of productivity and complexity. The importance of integrating artificial intelligence with the cybersecurity sector should be underlined at this point. A smart combination of AI-powered automation technology and a CTIA team can increase productivity by distilling a massive stream of events into a manageable set of alerts. ... Digital Risk Protection (DRPS) and Cyber Threat Intelligence (CTI) take to the stage of course. Again, to give an example, using auto-discovered digital assets including brand keywords, unified DRPS and CTI technology starts collecting and analyzing data across the surface, deep, and dark web to be processed and analyzed in real time.


Large-Scale, Available Graphene Supercapacitors; How Close are We?

One issue with supercapacitors so far has been their low energy density. Batteries, on the other hand, have been widely used in consumer electronics. However, after a few charge/discharge cycles, they wear out and have safety issues, such as overheating and explosions. Hence, scientists started working on coupling supercapacitors and batteries as hybrid energy storage systems. For example, Prof. Roland Fischer and a team of researchers from the Technical University of Munich have recently developed a highly efficient graphene hybrid supercapacitor. It consists of graphene as the electrostatic electrode and a metal-organic framework (MOF) as the electrochemical electrode. The device can deliver a power density of up to 16 kW/kg and an energy density of up to 73 Wh/kg, comparable to several commercial devices such as Pb-acid batteries and nickel metal hydride batteries. Moreover, standard batteries (such as lithium) have a useful life of around 5000 cycles. However, this new hybrid graphene supercapacitor retains 88% of its capacity even after 10,000 cycles.


3 reasons user experience matters to your digital transformation strategy

Simply put, a strong UX makes it easier for people to follow the rules. You can “best practice” employees all day long, but if those practices get in the way of day-to-day responsibilities, what’s the point of having them? Security should be baked into all systems from the get-go, not treated as an afterthought. And when it’s working well, people shouldn’t even know it’s there. Don’t make signing into different systems so complicated or time-consuming that people resort to keeping a list of passwords next to their computer. Automating security measures as much as possible is the surest way to stay protected while putting UX at the forefront. By doing this, people will have access to the systems they need and be prohibited from those that they don’t for the duration of their employment – not a minute longer or shorter. Automation also enables organizations to understand what is normal vs. anomalous behavior so they can spot problems before they get worse. For business leaders who really want to move the needle, UX should be just as important as CX. Employees may not be as vocal as customers about what needs improvement, but it’s critical information.


Automation Is No Silver Bullet: 3 Keys for Scaling Success

Many organizations think automation is an easy way to enter the market. Although it’s a starting point, automated testing warrants prioritization. Automated testing doesn’t just speed up QA processes, but also speeds up internal processes. Maintenance is also an area that benefits from automation with intelligent suggestions and searches. Ongoing feedback needs to improve user expectations. It’s a must-have for agile continuous integration and continuous delivery cycles. Plus, adopting automated testing ensures more confidence in releases and lower risks of failures. That means less stress and happier times for developers. That is increasingly important given the current shortage of developers amid the great reshuffle. Automated testing can help fight burnout and sustain a team of developers who make beautiful and high-quality applications. Some of the benefits of test automation include the reduction of bugs and security in final products, which increases the value of software delivered.



Quote for the day:

"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold

Daily Tech Digest - March 27, 2022

Chasing The Myth: Why Achieving Artificial General Intelligence May Be A Pipe Dream

Often, people confuse AGI with AI, which is loosely used nowadays by marketers and businesses to describe run-of-the-mill machine learning applications and even normal automation tools. In simple words, Artificial General Intelligence involves an ever-growing umbrella of abilities of machines to perform various tasks significantly better than even the brightest of human minds. An example of this could be AI accurately predicting stock market trends to allow investors to rake in profits consistently. Additionally, AGI-based tools can interact with humans conversationally and casually. In recent times, domotic applications such as smart speakers, smart kitchens and smartphones are gradually becoming more interactive as they can be controlled with voice commands. Additionally, advanced, updated versions of such applications show distinctly human traits such as humor, empathy and friendliness. However, such applications just stop short of having genuinely authentic interactions with humans. The prospective future arrival of AGI, if it happens, will plug this gap.


JavaScript Framework Unpoly and the HTML Over-the-Wire Trend

JavaScript is the most popular programming language in the world, and React is one of its leading libraries. Initially released in 2013, React was designed to be a library for helping developers craft user interfaces (UIs). According to Henning Koch, React and Unpoly aren’t entirely opposites. They share some likenesses, but there are a few important distinctions. “What both frameworks share is that they render a full page when the user navigates, but then only fragments of that new page are inserted into the DOM, with the rest being discarded,” he explained. “However, while a React app would usually call a JSON API over the network and render HTML in the browser, Unpoly renders HTML on the server, where we have synchronous access to our data and free choice of programming language.” Still, Koch acknowledges there are some instances where React and SPA’s are suitable choices. He went on to say, “There are still some cases where a SPA approach shines. For instance, we recently built a live chat where messages needed to be end-to-end encrypted.


Researchers make a quantum storage breakthrough by storing a qubit for 20 milliseconds

The new 20-millisecond milestone, however, could be just the breakthrough Afzelius' team was looking for. "This is a world record for a quantum memory based on a solid-state system, in this case a crystal. We have even managed to reach the 100 millisecond mark with a small loss of fidelity," Afzelius said. For their experiments, the researchers kept their crystals at temperatures of -273.15°C so as not to disturb the effect of entanglement. "We applied a small magnetic field of one thousandth of a Tesla to the crystal and used dynamic decoupling methods, which consist in sending intense radio frequencies to the crystal," said Antonio Ortu, a post-doctoral fellow in the Department of Applied Physics at UNIGE. "The effect of these techniques is to decouple the rare-earth ions from perturbations of the environment and increase the storage performance we have known until now by almost a factor of 40," he added. The result of this experiment could allow for the development of long-distance quantum telecommunications networks, though the researchers would still have to extend the storage time further.


3 Tips to Take Advantage of the Future Web 3.0 Decentralized Infrastructure

There's been a lot of talk of innovation and helping the little guy through blockchain. But, huge resources and backing are needed in order to sustain a project and take it mainstream on a longer time horizon. Even with a brilliant technical team, excellent developers and a well-thought-out whitepaper and tokenomics ecosystem, the project won’t go anywhere. Unless, it's marketed on major outlets and pushed towards consumers consistently. This is an attention-based economy and it takes effort to capture mainstream attention and to keep it. Moreover, it will take a great deal of finance to develop VR technology that is high-quality, integrated into the many metaverses, cost-effective and marketed well. A small team might be able to conjure up a good initial project. But, they will likely need to partner up or hand off the project so it can become mainstream. Always assess who a project is affiliated with and what partnerships they have. This is a strong indication of how much they value their own project and also offers numerous other benefits for various scenarios.


A diffractive neural network that can be flexibly programmed

In initial evaluations, the diffractive neural network introduced by this team of researchers achieved very promising results, as it was found to be highly flexible and applicable across a wide range of scenarios. In the future, it could thus be used to solve a variety of real-world problems, including image classification, wave sensing and wireless communication coding/decoding. Meanwhile, Cui and his colleagues will work on improving its performance further. "The prototype implemented in this work is based on a 5-layer diffractive neural network, each layer has 64 programmable neurons, and the total number of nodes in the network is relatively low," Cui added. "At the same time, the operating frequency band of this network is lower, resulting in a larger size of the physical network. In our next studies, we plan to further increase the scale of the programmable neurons of the network, improve the network integration, reduce the size and form a set of intelligent computers with stronger computing power and more practicality for sensing and communications."


Microsoft Azure Developers Awash in PII-Stealing npm Packages

In this case, the cyberattackers were pretending to offer a key set of existing, legitimate packages for Azure. “It became apparent that this was a targeted attack against the entire @azure npm scope, by an attacker that employed an automatic script to create accounts and upload malicious packages that cover the entirety of that scope,” researchers said in a Wednesday posting. “The attacker simply creates a new (malicious) package with the same name as an existing @azure scope package, but drops the scope name.” Npm scopes are a way of grouping related packages together. JFrog found that besides the @azure scope, other popular package groups were also targeted, including @azure-rest, @azure-tests, @azure-tools and @cadl-lang. The researchers added, “The attacker is relying on the fact that some developers may erroneously omit the @azure prefix when installing a package. For example, running npm install core-tracing by mistake, instead of the correct command – npm install @azure/core-tracing.” The attacker also tried to hide the fact that all of the malicious packages were uploaded by the same author, “by creating a unique user per each malicious package uploaded,” according to JFrog.
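A simple guard against this class of mistake is to scan a project's dependencies for unscoped names that collide with known scoped packages. The sketch below hard-codes a few @azure package names as a sample; a real check would pull the complete @azure scope from the npm registry.

```python
# Illustrative check for the mistake described above: flag dependencies whose
# names match an @azure package's basename but are missing the scope. The list
# of legitimate scoped packages is a small hard-coded sample for illustration.
import json

KNOWN_SCOPED = {"@azure/core-tracing", "@azure/identity", "@azure/storage-blob"}
unscoped_basenames = {name.split("/", 1)[1]: name for name in KNOWN_SCOPED}

with open("package.json") as f:
    deps = json.load(f).get("dependencies", {})

for dep in deps:
    if dep in unscoped_basenames:
        print(f"WARNING: '{dep}' looks like a typosquat of "
              f"'{unscoped_basenames[dep]}'")
```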


An Introduction to Mathematical Thinking for Data Science

Mathematical thinking is closely tied to what many mathematicians call mathematical maturity. In the words of UC Berkeley professor Anant Sahai, mathematical maturity refers to “comfort in solving problems step by step and maintaining confidence in your work even as you take steps forward.” In most mathematical problems, the solution is not immediately clear. A mathematically mature person finds it reasonable — and even satisfying — to make incremental progress and eventually reach a solution, even if they have no idea what it might be when they first begin. ... the ability to think mathematically will give you deeper insight into the intricacies of the data and how the problem is actually being solved. You might find yourself logically rearranging a program to make it more efficient, or recommending a specific data collection technique to obtain a sample which matches existing statistical methods. In doing so, you’ll expand your repertoire and thus contribute to a better data science workflow.


Schwinger effect seen in graphene

In theory, a vacuum is devoid of matter. In the presence of strong electric or magnetic fields, however, this void can break down, causing elementary particles to spring into existence. Usually, this breakdown only occurs during intense astrophysical events, but researchers at the UK’s National Graphene Institute at the University of Manchester have now brought it into tabletop territory for the first time, observing this so-called Schwinger effect in a device based on graphene superlattices. The work will be important for developing electronic devices based on graphene and other two-dimensional quantum materials. In graphene, which is a two-dimensional sheet of carbon atoms, a vacuum exists at the point (in momentum space) where the material’s conduction and valence electron bands meet and no intrinsic charge carriers are present. Working with colleagues in Spain, the US, Japan and elsewhere in the UK, the Manchester team led by Andre Geim identified a signature of the Schwinger effect at this Dirac point, observing pairs of electrons and holes created out of the vacuum.


The risk of undermanaged open source software

Some risks are the same regardless of whether solutions are built with vendor-curated or upstream software; however it is the responsibility for maintenance and security of the code that changes. Let’s make some assumptions about a typical organization. That organization is able to identify where all of its open source comes from, and 85% of that is from a major vendor it works with regularly. The other 15% consists of offerings not available from the vendor of choice and comes directly from upstream projects. For the 85% that comes from a vendor, any security concerns, security metadata, announcements and, most importantly, security patches, come from that vendor. In this scenario, the organization has one place to get all of the needed security information and updates. The organization doesn’t have to monitor the upstream code for any newly discovered vulnerabilities and, essentially, only needs to monitor the vendor and apply any patches it provides.


Confessions of a Low-Code Convert

A lot of programmers hear “low-code tools” and get twitchy. But the reality, especially if you are building processes versus tools, is that low-code solutions don’t prevent me from being creative or effective; they enable it. They handle tedious, labor-intensive boilerplate items and free me up to write the lines of JavaScript I actually need to uniquely express a business problem. And there are still plenty of places where you need to (and get to!) write that clever bit of code to implement a unique business requirement. It’s much easier to fix or refactor an app written by a low-code citizen developer in the line of business than it is to decipher whatever madness they’ve slapped together in their massive, mission-critical Excel spreadsheet. I find low-code platforms incredibly sanity-saving. They reduce noise in the system and obviate a lot of the admittedly unexciting elements of my work. The technology landscape has changed dramatically. Cloud adoption has introduced a world of serverless containerization.



Quote for the day:

"Leadership - leadership is about taking responsibility, not making excuses." -- Mitt Romney

Daily Tech Digest - March 22, 2022

When did Data Science Become Synonymous with Machine Learning?

Many folks just getting started with data science have an illusory idea of the field as a breeding ground where state-of-the-art machine learning algorithms are produced day after day, hour after hour, second after second. While it is true that getting to push out cool machine learning models is part of the work, it’s far from the only thing you’ll be doing as a data scientist. In reality, data science involves quite a bit of not-so-shiny grunt work to even make the available data corpus suitable for analysis. According to a Twitter poll conducted in 2019 by data scientist Vicki Boykis, fewer than 5% of respondents claimed to spend the majority of their time on ML models [1]. The largest percentage of data scientists said that most of their time was spent cleaning up the data to make it usable. ... Data science is a burgeoning field, and reducing it to one concept is a misrepresentation that is at best false, and at worst dangerous. To excel in the field as a whole, it’s necessary to remove the pop-culture tunnel vision that seems to only notice machine learning.
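
For a flavor of that clean-up work, here is a small, hypothetical pandas sketch (column names and values are invented) covering three routine chores: dropping duplicate rows, coercing strings to proper dtypes, and discarding records that cannot be salvaged:

```python
import pandas as pd

# Hypothetical raw export: duplicated rows, inconsistent types, missing values.
raw = pd.DataFrame({
    "user_id": ["001", "002", "002", "003"],
    "signup_date": ["2022-03-01", "2022-03-02", "2022-03-02", None],
    "spend": ["10.5", "7", "7", "not available"],
})

clean = (
    raw.drop_duplicates()                                   # remove exact duplicates
       .assign(
           signup_date=lambda d: pd.to_datetime(d["signup_date"], errors="coerce"),
           spend=lambda d: pd.to_numeric(d["spend"], errors="coerce"),
       )
       .dropna(subset=["signup_date"])                      # drop rows with no usable date
)
print(clean.dtypes)
print(clean)
```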


NaaS adoption will thrive despite migration challenges

The pandemic has also played a significant role in spurring NaaS adoption, Chambers says. "During the early days of COVID-19 there was a rapid push for users to be able to connect quickly, reliably, and securely from anywhere at any time," he says. "This required many companies to make hardware/software purchases and rapid implementations that accelerated an already noticeable increase in overall network complexity over the last several years." Unfortunately, many organizations faced serious challenges while trying to keep pace with suddenly essential changes. "Companies that need to quickly scale up or down their network infrastructure capabilities, or those that are on the cusp of major IT infrastructure lifecycle activity, have become prime NaaS-adoption candidates," Chambers says. It’s easiest for organizations to adopt small-scale NaaS offerings to gain an understanding of how to evaluate potential risks and rewards and determine overall alignment with their requirements.


Securing DevOps amid digital transformation

The process of requesting a certificate from a CA, receiving it, manually binding it to an endpoint, and self-managing it can be slow and lack visibility. Sometimes, DevOps teams avoid established quality practices by using less secure means of cryptography or issuing their own certificates from a self-created non-compliant PKI environment – putting their organizations at risk. However, PKI certificates from certified and accredited globally trusted CAs offer the best way for engineers to ensure security, identity and compliance of their containers and the code stored within them. A certificate management platform that is built to scale and manages large volumes of PKI certificates is a natural fit for the DevOps ethos and DevOps environments. Organizations can now automate the request and installation of compliant certificates within continuous integration/continuous deployment (CI/CD) pipelines and applications to secure DevOps practices and support digital transformation. Outsourcing your PKI to a CA means developers have a single source to turn to for all certificate needs and are free to focus on core competencies.
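
As one concrete example of a step such a pipeline can automate, the sketch below uses the widely used Python cryptography package to generate a key and a certificate signing request non-interactively. The hostname and file names are placeholders, and submission of the CSR to the CA or certificate-management API is left as a comment because that part is vendor-specific:

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a private key and a CSR for a placeholder internal hostname.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "service.example.internal"),
    ]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("service.example.internal")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# Persist the key and CSR; a later pipeline step would submit the CSR to the
# organization's CA / certificate-management API and install the returned cert.
with open("service.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("service.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```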


Reprogramming banking infrastructure to deliver innovation at speed

Fintech firms typically apply digital technology to processes those legacy institutions find difficult, time consuming, or costly to undertake, and they often focus on getting a single use case, such as payments or alternative lending, right. In contrast, neobanks, or challenger banks, deliver their services primarily through phone apps that often aim to do many things that a bank can do, including lending money and accepting deposits. A key advantage for both is that they don’t have to spend time, money, and organisational capital to transform into something new. They were born digital. Likewise, they both claim convenience as their prime value proposition. However, while customers want convenience, many still see banking as a high-touch service. If their bank has survived decades of consolidation and has served a family for generations, familiarity can be a bigger draw than convenience. That said, the COVID-19 pandemic has accelerated the online trend. More and more of us auto-pay our bills and buy our goods as well as our entertainment and services via e-commerce.


No free lunch theorem in Quantum Computing

The no free lunch theorem entails that a machine learning algorithm’s average performance depends only on the amount of training data it has, not on the algorithm itself. “Industry-built quantum computers of modest size are now publicly accessible over the cloud. This raises the intriguing possibility of quantum-assisted machine learning, a paradigm that researchers suspect could be more powerful than traditional machine learning. Various architectures for quantum neural networks (QNNs) have been proposed and implemented. Some important results for quantum learning theory have already been obtained, particularly regarding the trainability and expressibility of QNNs for variational quantum algorithms. However, the scalability of QNNs (to scales that are classically inaccessible) remains an interesting open question,” the authors write. This suggests that, in order to model a quantum system, the amount of training data might also need to grow exponentially. This threatens to eliminate the edge quantum computing has over classical computing. The authors have discovered a method to eliminate the potential overhead via a newfound quantum version of the no free lunch theorem.
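
For intuition about the classical statement behind this work (only the classical version; the quantum formulation from the article is not reproduced here), the brute-force sketch below averages two arbitrary, hypothetical learners over every Boolean target function on three bits. Both end up with the same off-training-set accuracy of 0.5 regardless of strategy, because performance averaged over all possible problems depends only on which points were seen during training:

```python
from itertools import product

# Domain: all 3-bit inputs; targets: all 2^8 Boolean functions on that domain.
domain = list(product([0, 1], repeat=3))
train = domain[:4]            # points the learner has seen
test = domain[4:]             # off-training-set points

def learner_zero(x, examples):
    return 0                  # always predict 0 off the training set

def learner_copy(x, examples):
    return examples[train[0]] # blindly reuse one memorized label

def avg_offtrain_accuracy(learner):
    total, count = 0.0, 0
    for labels in product([0, 1], repeat=len(domain)):   # every possible target
        target = dict(zip(domain, labels))
        examples = {x: target[x] for x in train}
        correct = sum(learner(x, examples) == target[x] for x in test)
        total += correct / len(test)
        count += 1
    return total / count

print(avg_offtrain_accuracy(learner_zero))   # 0.5
print(avg_offtrain_accuracy(learner_copy))   # 0.5
```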


IT Talent Shortage: How to Put AI Scouting Systems to Work

The most likely people to leave a company are highly skilled employees who are in high demand (e.g., IT). Employees who feel they are underutilized and who want to advance their careers, and employees who are looking for work that can more easily balance with their personal lives, are also more likely to leave. It’s also common knowledge that IT employees change jobs often, and that IT departments don't do a great job retaining them for the long haul. HR AI can help prevent attrition if you provide it with internal employee and departmental data so that it can assess your employees, their talents and their needs based upon the search criteria that you give it. For instance, you can build a corporate employee database that goes beyond IT, and that lists all the relevant skills and work experiences that employees across a broad spectrum of the company possess. Using this method, you might identify an employee who is working in accounting, but who has an IT background, enjoys data analytics, and wants to explore a career change. Or you could identify a junior member of IT who is a strong communicator and can connect with end users in the business.
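
A minimal sketch of that kind of cross-department skills query might look like the following; the employee records, field names and candidates_for helper are invented for illustration, and a real system would query the HR database rather than an in-memory list:

```python
# Hypothetical employee records; in practice these would come from an HR system.
employees = [
    {"name": "A. Rivera", "dept": "Accounting",
     "skills": {"sql", "python", "data analytics"}, "interested_in_change": True},
    {"name": "B. Chen", "dept": "IT",
     "skills": {"networking", "linux"}, "interested_in_change": False},
    {"name": "C. Okafor", "dept": "Marketing",
     "skills": {"communication", "crm"}, "interested_in_change": True},
]

def candidates_for(required_skills, exclude_dept="IT"):
    """Find people outside IT whose skills overlap the search criteria."""
    required = set(required_skills)
    matches = []
    for e in employees:
        overlap = e["skills"] & required
        if e["dept"] != exclude_dept and overlap and e["interested_in_change"]:
            matches.append((e["name"], e["dept"], sorted(overlap)))
    return matches

print(candidates_for({"data analytics", "python", "sql"}))
```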


Automation and digital transformation: 3 ways they go together

All sorts of automation gets devised and implemented for specific purposes or tasks, sometimes for refreshingly simple reasons, like “Automating this makes our system more resilient, and automating this makes my job better.” This is the type of step-by-step automation long done by sysadmins and other operations-focused IT pros; it’s also common in DevOps and site reliability engineering (SRE) roles. IT automation happens for perfectly good reasons on its own, and it has now spread deep and wide in most, if not all, of the traditional branches of the IT family tree: development, operations, security, testing/QA, data management and analytics – you get the idea. None of this needs to be tethered to a digital transformation initiative; the benefits of a finely tuned CI/CD pipeline or security automation can be both the means and the end. There’s no such thing as digital transformation without automation, however. This claim may involve some slight exaggeration, and reasonable people can disagree. But digital transformation of the ambitious sort that most Fortune 500 boardrooms are now deeply invested in requires (among other things) a massive technology lever to accomplish, and that lever is automation.


The best way to lead in uncertain times may be to throw out the playbook

Organizations also used the sensing-responding-adapting model to combat misinformation and confusion about masks and vaccines. With conflicting guidance from the Centers for Disease Control and Prevention (CDC) in the US and the World Health Organization (WHO), one organization we studied opted for “full transparency” with a “fully digital” solution. The company built an app that included data from sources the company considered reliable, and it updated policies, outlined precautions, and offered ways to report vaccination status. The app turbocharged the company’s sense-respond-adapt capabilities by getting quality information in everyone’s hands and opening a new channel for regular two-way communication. There was no waiting for an “all hands” meeting to get meaningful questions and feedback. Reflecting on the results of the study, one takeaway became clear: it’s worthwhile for leaders of any team to absorb the lessons of sense-respond-adapt, even if there is no emergency at hand. Here are three ways to employ each step of the model.


The Three Building Blocks of Data Science

Data is worthless without the context for understanding it properly — context which can only be obtained by a domain expert: someone who understands the field where the data stems from and can thus provide the perspectives needed to interpret it correctly. Let’s consider a toy example to illustrate this. Imagine we collect data from a bunch of different golf games from recent years of the PGA Tour. We obtain all the data, we process and organize it, we analyze it, and we confidently publish our findings, having triple-checked all our formulas and computations. And then, we become laughingstocks of the media. Why? Well, since none of us has ever actually played golf, we didn’t realize that lower scores correspond to a better performance. As a result, all our analyses were based on the reverse, and therefore incorrect. This is obviously an exaggeration, but it gets the point across. Data only makes sense in context, and so it is essential to consult with a domain expert before attempting to draw any conclusions.
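
The golf example is easy to make concrete: the same one-line ranking gives opposite answers depending on whether the analyst knows that lower scores win. A tiny, hypothetical illustration:

```python
# Hypothetical final scores relative to par (in golf, lower is better).
scores = {"Player A": -12, "Player B": -5, "Player C": 2}

# Without the domain knowledge, a naive "higher is better" ranking:
naive_ranking = sorted(scores, key=scores.get, reverse=True)

# With the domain expert's input that lower scores win:
correct_ranking = sorted(scores, key=scores.get)

print("naive:  ", naive_ranking)    # ['Player C', 'Player B', 'Player A']
print("correct:", correct_ranking)  # ['Player A', 'Player B', 'Player C']
```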


Surprise! The metaverse could be great news for the enterprise edge

Metaverse latency control is about more than edge computing; it’s also about edge connectivity, meaning consumer broadband. Faster broadband offers lower latency, but there’s more to latency control than just speed. You need to minimize the handling: the number of hops or devices between the user who’s pushing an avatar around a metaverse and the software that translates that movement into what the user “sees” and what others see as well. Think fiber and cable TV, and a fast path between the user and the nearest edge, which is likely to be in a nearby major metro area. And think “everywhere” because, while the metaverse may be nowhere in a strict reality sense, it’s everywhere that social-media humans are, which is everywhere. Low latency, high-speed, universal consumer broadband? All the potential ad revenue for the metaverse is suddenly targeting that goal. As it’s achieved, the average social-media junkie could well end up with 50 or 100 Mbps or even a gigabit of low-latency bandwidth. There are corporate headquarters that don’t have it that good.
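
A rough, back-of-the-envelope latency budget shows why hop count matters as much as raw speed; every number below is an assumption for illustration, not a measurement:

```python
# Illustrative round-trip latency budget; all figures are assumptions.
LIGHT_IN_FIBER_KM_PER_MS = 200   # signal travels roughly 200 km per millisecond in fiber
distance_km = 50                 # user to the nearest metro-area edge site
per_hop_ms = 0.5                 # assumed queueing/processing delay per device
hops = 8                         # modems, routers, switches along the path

propagation_ms = 2 * distance_km / LIGHT_IN_FIBER_KM_PER_MS   # out and back
handling_ms = 2 * hops * per_hop_ms                           # out and back
print(f"propagation ~{propagation_ms:.2f} ms, handling ~{handling_ms:.1f} ms")
# Even with effectively unlimited bandwidth, the handling term dominates here,
# which is why cutting hops and shortening the path matter as much as speed.
```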



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr