Daily Tech Digest - November 30, 2021

Alation: How to develop a data governance framework

According to Alation, there are seven key steps to building a successful data governance framework: Establish a mission and vision and create a set of policies, standards and glossaries; Populate a data catalog with metadata that shows data lineage, and analyze that metadata to discover which data is most popular and who the top users of data are; Recognize and assign data stewards and empower those stewards to govern the organization's data; Curate data assets by describing different data sets and applying quality flags to them so users can easily find the data they'll find most useful; Apply policies and controls so that not all data can be accessed by everyone within an organization and so organizations can remain compliant with applicable regulations; Drive community and collaboration to promote trusted data use; and Monitor and measure the entire data governance framework to determine policy conformance, create curation analysis, measure the usage and creation of data assets, and determine the quality of data. "If you're going to really do a process, you need to find out where the gaps are and where you need to make course corrections," says Myles Suer.
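The monitoring step above can be sketched in code. Below is a minimal Python illustration of mining catalog access metadata for the most popular data sets and top users; the query-log format and all names are assumptions for the sketch, not Alation's API:

```python
from collections import Counter

def catalog_usage(query_log):
    """Summarize catalog access metadata: most-queried data sets and top users.

    query_log is a list of (user, dataset) tuples -- a simplified stand-in
    for the access metadata a real data catalog would collect.
    """
    datasets = Counter(ds for _, ds in query_log)
    users = Counter(u for u, _ in query_log)
    return datasets.most_common(3), users.most_common(3)

# Hypothetical access log:
log = [
    ("ana", "sales"), ("ben", "sales"), ("ana", "inventory"),
    ("cal", "sales"), ("ana", "hr"),
]
top_datasets, top_users = catalog_usage(log)
print(top_datasets[0])  # the most popular data set and its query count
print(top_users[0])     # the top user and their query count
```

A real catalog would also join in lineage and quality flags, but the same counting idea underpins "what data is most popular and who are the top users."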

Get to Know EF Core 6

EF Core 6.0 is a modern, cloud-native-friendly data access API that supports multiple backends. Get up and running with a document-based Azure Cosmos DB container using only a few lines of code, or use your LINQ query skills to extract the data you need from relational databases like SQL Server, MySQL, and PostgreSQL. EF Core is a cross-platform solution that runs on mobile devices, works with data-binding in client WinForms and WPF apps “out of the box”, and can even run inside your browser! Did you know it is possible to embed a SQLite database in a web app and use Blazor WebAssembly to access it using EF Core? Check out Steven Sanderson’s excellent video to see how this is possible. Combined with the power of .NET 6, EF Core 6.0 delivers major performance improvements and even scored 92% better on the industry-standard TechEmpower Fortunes benchmark compared to EF Core 5.0 running on .NET 5. The EF Core team and global OSS community have built and published many resources to help you get started.

Digital transformation: 4 questions CIOs should ask now

The heart of transformation is really the culture, and that requires commitment from the entire leadership team. In 2016, CarMax began a significant transformation effort. “We wanted to make sure that our organization – not just technology, our entire organization – was ready for this change. And that required a significant commitment not only from me, but also from the CEO. The two other partners I worked closely with were the chief marketing officer and the chief operations officer. So with the leadership team's support, we were able to demonstrate and articulate to the whole company that this is a change for the entire company, not just a technology initiative,” says Mohammad. “We put cross-functional teams together to show that we’re serious about this change and we do have to transform ourselves to this digital way of working,” he says. "It is difficult to change the fabric and core operating system of an organization, so the leadership team needs to be there all the way. It is a journey that never ends, so the support cannot subside. The support has to be there all the time."

Why Machine Learning Engineers are Replacing Data Scientists

Not many people actually talk about ML engineering — at least compared to the number of people talking about data science — and yet I believe the demand for ML engineers might surpass that for data scientists. We can see the number of data scientists soaring all over the world, within companies of all sizes, while most of these people aren’t actually doing data science at all, just analytics. And many of those who are actually doing data science probably didn’t have to. That means many organisations are hiring people to solve basically the same types of problems, over and over again, in parallel. There is just a lot of redundancy, and the quality of the people doing it varies substantially. At the same time, we see companies like Google and Amazon, who have some of the best data scientists in the world, working on “ready-to-use” ML systems on their cloud platforms (GCP and AWS, respectively). This means you can plug your data into their systems to benefit from all that knowledge, and all you need is someone who knows how to make that connection and do the necessary tuning: someone like an ML engineer.

How to combat ransomware with visibility

The recovery process is often the last thing anyone thinks about. Disaster recovery and business continuity (DRBC) is probably the toughest piece to solve and, often, the most ignored. But if your organization is in healthcare or part of critical infrastructure like utilities, there can be life-and-death consequences to service interruptions. Ensuring business continuity might mean the ability to keep working to save lives, which means that immediate time-to-recovery is going to be very important. In the past, we used to have to go and pull tapes from an archive at some off-site place to restore systems—and that could take days. A few years ago, many businesses had backup systems inside a hosted data center, allowing them to restore from another server by replicating data across the pipe. That was a lot quicker than tape backups, but it still had limitations. Today, cloud-hosted solutions make things much easier because they take point-in-time snapshots of your data. For this reason, cloud storage makes DRBC much faster than legacy solutions that are still stuck in a physical-servers-and-appliances frame of mind.

Lessons Learned from Self-Selection Reteaming at Redgate

At Redgate we believe the best way to make software products is by engaging small teams empowered with clear purpose, freedom to act and a drive to learn. We believe this because we’ve seen it; teams who have had laser-like focus on an aim, decision-making authority and the space to get better at what they do, were the most engaged and effective. If you have read Dan Pink’s seminal book Drive: The Surprising Truth About What Motivates Us, you might recognise that our beliefs echo what the author demonstrates are key to an engaged and motivated workforce — autonomy, mastery and purpose. To remain true to our beliefs, Redgate needs to ensure that the goals and expectations of our teams are crystal clear, that we push authority to teams as much as we can, and we encourage people to grow. We also recognise that different people have different ambitions, preferences for work and views on what counts as personal development. We have a large portfolio of products, written in a variety of languages, structured in a variety of ways and that exist at various stages of the product life cycle.

Global Tech Policy Briefing for November 2021: Banking, Broadband, & Big Tech

It’s not a secret that cryptocurrencies make central banks nervous: Bitcoin and the like exist to flout regulation and control. So far, few national governments have dared ban cryptocurrency outright; we’ve seen a few years of cold war, with the US Securities and Exchange Commission publicly sniping at Terraform Labs, for example, and many governments mulling a ban on mixers. Turkey, whose lira is in freefall, is moving toward an outright ban. Nigeria attempted a ban, but remains the second-largest Bitcoin market in the world. In Russia, the Kremlin’s stance is ambiguous, as rumors of a CryptoRuble make the rounds. But China is the only major international power to successfully outlaw crypto transactions by its citizens, full stop. Now, India may be joining them. A new bill, the Cryptocurrency and Regulation of Official Digital Currency Bill, 2021, “seeks to prohibit all private cryptocurrencies in India, however, it allows for certain exceptions to promote the underlying technology of cryptocurrency and its uses.”

The future of hyperautomation in 2022

Hyperautomation has come to prominence as a trend for 2021 thanks to the maturity of digitalisation and of data management tools. The aforementioned digital upskilling that many organisations have committed to during the pandemic, combined with these tools, form the basis for hyperautomation to take place in the right environment. But we’re only at the start of a long journey. The digital recreation of your business, warts and all, can prove a gruelling exercise in self-examination given the speed at which systems are recreated, and resultant insights are amalgamated. Companies will need to invest a lot of time and energy in order to create long-term adoption of hyperautomation. Turning theory into action is a big challenge to take on, and preparation is key. That means that the value of hyperautomation will only start to materialise for the pioneers that stay focused. Organisations need to stay on the ball and avoid slipping back into old, stagnant processes driven by more operational, tactical initiatives.

Sneaky New Magecart Malware Hides in Cron Jobs

Dubbed CronRAT, it hides in the Linux calendar subsystem as a task that has a nonexistent date, such as Feb. 31. The malware remains largely undetected by security vendors and enables server-side Magecart data theft that bypasses browser-based security solutions, according to researchers at Dutch security firm Sansec. "This is very concerning, having been discovered just after Black Friday and Cyber Monday, as well as before the upcoming busy Christmas shopping period, where many unsuspecting shoppers will likely move to online shopping due to the new variant of COVID-19, which may result in further restrictions limiting in-person shopping," says Joseph Carson, chief security scientist and advisory CISO at enterprise security firm ThycoticCentrify. So far, Sansec has not directly tied this recently uncovered RAT to one particular Magecart group. And while it’s not clear who exactly is behind this malware, the report notes that its operators have created an unusual and sophisticated threat that is packed with never-before-seen stealth techniques.
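The hiding trick works because cron validates each field only against its numeric range (day of month 1 to 31, month 1 to 12), not against the calendar, so a crontab entry dated Feb. 31 parses cleanly yet never matches a real date. A small Python sketch of why such a schedule can never fire:

```python
import calendar

def schedule_can_fire(month: int, day: int) -> bool:
    """Return True if the given day-of-month ever occurs in the given month.

    cron accepts any day in 1..31 and any month in 1..12, so a combination
    like Feb 31 is syntactically valid but never matches a real date --
    which is what lets CronRAT park its payload in a job that never runs.
    """
    # Check both year types: 2023 is a non-leap year, 2024 a leap year.
    return any(day <= calendar.monthrange(year, month)[1] for year in (2023, 2024))

print(schedule_can_fire(2, 31))  # February 31st: never a real date
print(schedule_can_fire(1, 31))  # January 31st: occurs every year
```

The payload stored under the never-firing entry sits inert in the crontab until the malware itself extracts and executes it.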

Digital Resilience Requires Changes In The Taxonomy Of Business IT Systems

Enterprises need to ensure they leverage data as an asset and implement Systems of Insight to support data-driven decision-making across the entire supply chain and the broad spectrum of business processes and functions. Today, enterprises are under immense pressure from regulatory authorities, cyberattacks, and a pivot in customer buying patterns toward trusted, responsible and sustainable products. This effectively means enterprises need to look at security and compliance by design, which is best implemented by transitioning to Systems of Insight and Compliance. Skills and talent are the new currency of the business. It is critical to capture the knowledge and experience across the company, both to improve productivity and to achieve faster time to market. Many enterprises have implemented Learning Management Systems (LMS) in some shape or form, but these were seen as secondary systems for talent retention and for tracking employee training.

Quote for the day:

"The final test of a leader is that he leaves behind him in other men, the conviction and the will to carry on." -- Walter Lippmann

Daily Tech Digest - November 29, 2021

The Next Evolutions of DevOps

The old adage is that complexity is like an abacus: You can shift complexity around, but it never really goes away. With the movement to shift responsibility left to development teams, this also means that associated complexity is shifting to the development teams. Modern platform engineering teams provide the infrastructure (compliant Kubernetes clusters) to teams and any workload that is run on those clusters is up to the development team that owns it. Typically, development teams then focus on features and functionality. ... If you are a DevOps or platform engineer, making your internal customers—your development teams—successful is a great goal to work toward. Crucial to this is disseminating expertise. This can be in the form of automation and education. A common practice with the DevSecOps movement is to have some sort of scanning step as part of the build or deployment process, sharing the internals of how the scan is performed, what happens if something is found, and so on.

Fast-paced dash to digital leaves many public services exposed

When organisations introduce new solutions to their technology stack, protection capabilities need to be extended to cover it. But faced with a global pandemic that no one could’ve seen coming, businesses needed to innovate fast, and their security measures failed to keep pace. This created a vulnerability lag, where systems and data have been left unprotected and open to attack. Veritas’ Vulnerability Lag Report explores how this gap between innovation and protection is affecting a variety of organisations, public and private; only three-fifths (61%) believe their organisation’s security measures have fully kept up since the implementation of COVID-led digital transformation initiatives. This means 39% are experiencing some form of security deficit. While such swift digital transformation has delivered a wealth of benefits for public sector organisations, there is a dark side to this accelerated innovation. In the rush to digitally transform, security has taken a back seat. As a result, there may be significant gaps just waiting for cyber criminals to exploit for their own gain.

Towards Better Data Engineering: Mostly People, But Also Process and Technology

Traditional software engineering practices involve designing, programming, and developing software that is largely stateless. On the other hand, data engineering practices focus on scaling stateful data systems and dealing with different levels of complexity. ... Setting up a data engineering culture is therefore crucial for companies to aim for long-term success. “At Sigmoid, these are the problems that we’re trying to tackle with our expertise in data engineering and help companies build a strong data culture,” said Mayur. With expertise in tools such as Spark, Kafka, Hive, Presto, MLflow, visualization tools, SQL, and open source technologies, the data engineering team at Sigmoid helps companies with building scalable data pipelines and data platforms. This allows customers to build data lakes and cloud data warehouses, and to set up DataOps and MLOps practices that operationalize data pipelines and analytical model management. Transitioning from a software engineering environment to data engineering is a significant ‘cultural change’ for most companies.

Performing Under Pressure

Regardless of the task, pressure ruthlessly diminishes our judgment, decision-making, focus, and performance. Pressure moments can disrupt our thoughts, prevent us from thinking clearly, leave us frustrated, and make us act in undesirable ways. The adverse impact of pressure on our cognitive skills can downgrade our performance, make us perform below our capability, commit more errors, and increase the likelihood of failure. Pressure can even make us feel embarrassed and ashamed when we do fail, because we can act in a way that we otherwise would not and say or do unusual things. Consider these pressure moments: stepping out of an important client meeting and wondering, “Why did I make that joke? I was so stupid,” or failing to share your opinion while participating in a critical decision meeting and thinking afterward, “Why didn’t I speak up? We could have made a better decision.” Pressure can result in either wrongful action or inaction. Such events make it much more difficult to deal with the pressure next time. But there are things you can do to diminish the effects of pressure on your performance.

Behavioral biometrics: A promising tool for enhancing public safety

There are several promising applications in the field of behavioral biometrics. For computer-based identity verification, there are solutions that allow identification based on keystrokes—the frequency and patterns of which prove to be individual enough to recognize identity. Due to the nature of typing, the models can also get better over time because they can continuously monitor and analyze keystroke data. Software developers also tend to customize confidence thresholds depending on the use case. However, in some cases the reliability of this behavioral biometric factor is limited by the circumstances: on a different keyboard, individual patterns may differ, and physical conditions like carpal tunnel syndrome or arthritis may affect them. The lack of benchmarks makes it difficult to compare different providers’ trained algorithms, leaving room for false marketing claims. Image analysis can provide more data for behavioral research. Gait and posture biometrics are rapidly becoming useful tools, even if they do not yet match the accuracy and robustness of traditional biometric approaches.
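As a rough illustration of the keystroke approach, the sketch below compares a typing sample's inter-key intervals against an enrolled profile under a configurable confidence threshold. The single feature and the threshold value are simplifications for illustration, not any vendor's method:

```python
from statistics import mean

def intervals(timestamps):
    """Inter-key intervals (ms) from a sequence of key-press timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def matches_profile(sample_ts, profile_ts, threshold=0.25):
    """Crude identity check: accept if the sample's mean inter-key interval
    is within `threshold` (relative error) of the enrolled profile.
    A real system would model per-key-pair timing distributions and keep
    updating them from the continuous keystroke stream."""
    s, p = mean(intervals(sample_ts)), mean(intervals(profile_ts))
    return abs(s - p) / p <= threshold

enrolled = [0, 110, 230, 340, 460]   # the user's typical rhythm
attempt  = [0, 120, 250, 360, 470]   # similar cadence -> accept
impostor = [0, 60, 110, 165, 220]    # much faster typist -> reject

print(matches_profile(attempt, enrolled))
print(matches_profile(impostor, enrolled))
```

The keyboard-change and physical-condition caveats from the article show up here directly: anything that shifts the user's timing distribution shifts the mean and can push a legitimate user past the threshold.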

Privacy in Decentralized Finance: Should We Be Concerned?

The pace of DeFi’s growing influence is alarming because many of the issues it presents have not been addressed or solved in sufficient depth. People are investing in all sorts of cryptocurrency before they even educate themselves on how to manage private keys properly. Coupled with the lag in robust protective regulation, the general lack of awareness of DeFi’s threats to privacy inevitably leaves large populations of users vulnerable to attack. Though some progress has been made at the state level to set standards for blockchain, there is a greater need for industry standardization at the international level. Additionally, the rapid expansion of blockchain technology in many industries is not met with sufficient safety protocols. As such, cybercriminals are aggressively taking action to target both users and exchanges of cryptocurrency in its under-secured state. On the flip side, there are some aspects of DeFi that directly benefit the privacy of users. When comparing the decentralized network that DeFi uses to a centralized one, DeFi’s “peer-to-peer” model is preferable because it prevents a “single source of failure”.

Hackers Exploit MS Browser Engine Flaw Where Unpatched

The modus operandi of these attackers parallels that of the Iranian attackers, in that it follows the same execution steps. But the researchers did not specify whether the intent of this campaign appeared to be data exfiltration. AhnLab did not respond to Information Security Media Group's request for additional information. With multiple attackers actively exploiting CVE-2021-40444, firms using Microsoft Office should immediately update their software to the latest version as a prevention measure, say researchers from EST Security, which discovered yet another campaign targeting the vulnerability. In this case, the campaign used communications that attempted to impersonate the president of North Korea's Pyongyang University of Science and Technology. "The North Korean cyberthreat organization identified as the perpetrator behind this campaign is actively introducing document-based security vulnerabilities such as PDF and DOC files to customized targeted attacks such as CVE-2020-9715 and CVE-2021-40444," the EST Security researchers say. CVE-2020-9715 is a vulnerability that allows remote attackers to execute arbitrary code on affected installations of Adobe Acrobat Reader DC.

Data Mesh: an Architectural Deep Dive

Data mesh is a paradigm shift in managing and accessing analytical data at scale. Some of the words I highlighted here are really important, first of all, is the shift. I will justify why that's the case. Second is an analytical data solution. The word scale really matters here. What do we mean by analytical data? Analytical data is an aggregation of the data that gets generated running the business. It's the data that fuels our machine learning models. It's the data that fuels our reports, and the data that gives us an historical perspective. We can look backward and see how our business or services or products have been performing, and then be able to look forward and be able to predict, what is the next thing that a customer wants? Make recommendations and personalizations. All of those machine learning models can be fueled by analytical data. What does it look like? Today we are in this world with a great divide of data. The operational data is the data that sits in the databases of your applications, your legacy systems, microservices, and they keep the current state. 

Google Data Studio Vs Tableau: A Comparison Of Data Visualization Tools

Business analysts and data scientists rely on numerous tools like PowerBI, Google Data Studio, Tableau, and SAP BI, among others, to decipher information from data and make business decisions. Coming from one of the best-known companies in the world, Google Data Studio, launched in 2016, is a data visualisation platform for creating reports using charts and dashboards. Tableau, on the other hand, was founded more than a decade before Google Data Studio, in 2003, by Chris Stolte, Pat Hanrahan, and Christian Chabot. Tableau Software is one of the most popular visual analytics platforms, with very strong business intelligence capabilities. Google Data Studio is free, and users can log in with their Google credentials. Over the years, it has become a popular tool to visualise trends in businesses, keep track of client metrics, compare the time-based performance of teams, and more. It is part of the Google Marketing Platform and pulls data from Google’s marketing tools to create reports and charts. Recently, Google announced that users can now include Google Maps in embedded reports in Google Data Studio.

5 Trends Increasing the Pressure on Test Data Provisioning

Not only is the pace of system change growing; the magnitude of changes being made to complex systems today can be greater than ever. This presents a challenge to slow and overly manual data provisioning, as a substantial chunk of data might need updating or replacing based on rapid system changes. A range of practices in development have increased the rate and scale of system change. The adoption of containerization, source control, and easily reusable code libraries allow parallelized developers to rip and replace code at lightning speed. They can easily deploy new tools and technologies, developing systems that are now intricately woven webs of fast-shifting components. A test data solution today must be capable of providing consistent test data “journeys” based on the sizeable impact of these changes across interrelated system components. Data allocation must occur at the pace with which developers chop-and-change reusable and containerised components. 

Quote for the day:

"One must be convinced to convince, to have enthusiasm to stimulate the others." -- Stefan Zweig

Daily Tech Digest - November 28, 2021

Government must prove its plans to police encryption work, says ex-cyber security chief

Technology companies and cryptographers claim that the government’s demands are simply not possible - the government is, in effect, trying to argue against the laws of mathematics. If the UK and US governments can read encrypted messages, so potentially can criminals, or hostile nation states such as North Korea or Russia. Extensively researched proposals to find a compromise, including proposals by Ian Levy, technical director of the National Cyber Security Centre, to use “virtual crocodile clips” to listen in to encrypted communications, have failed to convince sceptics, said Martin. Plans by Apple to introduce “client-side scanning” technology to detect child abuse images before they are encrypted provoked a backlash from the world’s top cryptographic experts and internet pioneers and have now been suspended. An expert report identified over 15 ways in which states or malicious actors, and targeted abusers, could turn the technology around to cause harm to others or society.

India: One Law To Rule Them All: On NFTs And India's Prospective Cryptocurrency Law

It is not the case that NFTs do not pose any risks. Like traditional art, which has always had a money laundering problem, NFTs pose the same (or even greater) money laundering risks. The risks are greater because the prices of NFTs are determined in private, in one-to-one trades. As with art or real estate, the value attributed to a trade cannot be questioned, and hence these assets can be sold at any price and the balance settled in cash. One thing that works in favour of NFTs, though, is that if they are on a public blockchain such as Ethereum and the user purchases them through a centralised platform, transactions are traceable. Other than the money laundering risks, NFTs pose neither the same category of risks, nor the same degree of risks, as cryptocurrencies. NFTs are non-fungible and cannot be used as a medium of exchange, as opposed to several cryptocurrencies that can be. This alleviates central bankers' concerns around monetary policy and control of cross-border payments.

Design Pattern vs Anti Pattern in Microservices

"An anti-pattern is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive." Note the reference to "a common response." Anti-patterns are not occasional mistakes, they are common ones, and are nearly always followed with good intentions ... Ambiguous service: an operation's name can be too long, or a generic message's name can be vague. It's possible to limit element length and restrict vague phrases in certain instances. API versioning: it's possible to change an external service request's API version in code, and delays in data processing can lead to resource problems later. APIs need semantically consistent version descriptions, yet bad API names are difficult to discover; the solution is simple and can be improved in the future. Hard-coded endpoints: some services may have hard-coded IP addresses and ports, which raises similar concerns. Replacing an IP address, for example, means manually editing files one by one, and the current method only recognizes hard-coded IP addresses without context. Bottleneck service: a service with many users but only one flaw.
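The hard-coded endpoints anti-pattern has a simple remedy: externalize the address into configuration. A minimal Python sketch, where the environment-variable naming scheme is purely illustrative, that resolves a service endpoint from the environment instead of source code:

```python
import os

def service_endpoint(name: str, default: str) -> str:
    """Resolve a service endpoint from the environment instead of code.

    Hard-coding "10.0.3.17:8080" in source means every address change is a
    code edit and redeploy; reading <NAME>_ENDPOINT lets each environment
    (dev, staging, prod) inject its own address at deploy time.
    """
    return os.environ.get(f"{name.upper()}_ENDPOINT", default)

# Without the variable set, fall back to a clearly labeled local default:
print(service_endpoint("orders", "localhost:8080"))

# Deployment tooling overrides it per environment:
os.environ["ORDERS_ENDPOINT"] = "orders.internal:9000"
print(service_endpoint("orders", "localhost:8080"))
```

The same idea extends to config files or a service registry; the point is that the address lives outside the compiled artifact.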

Designing Resilient Microservices — Part 1

The more interesting question is — What do you do when you detect a dependency failure (partial or full). The obvious answer is to return an appropriate HTTP or gRPC error code to your caller, but depending on your business logic/content, you should explore a graceful degradation. For example, if your application is enabling users to track the status of an order, and the exact location of the delivery agent (which is served by a dependency) is unavailable, you could choose to use extrapolation to compute an approximate location. This is further subject to a timing threshold so that if the dependency recovers, we could pivot back to providing the most recent/accurate response. Another solution often suggested for handling of faults is retries. While the principle is simple, the more critical question is how many times should I retry and how long should I wait between retries. A misconfigured retry logic can actually take a service under stress (in brownout) to a blackout. Consider, for example, a service that has N callers, each of which has M callers.
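A bounded retry policy with exponential backoff and jitter answers both questions: a cap on attempts limits the extra load each caller adds, and jittered delays keep retries from N×M callers from arriving in synchronized waves. A minimal Python sketch, with illustrative parameter values:

```python
import random
import time

def call_with_retries(op, max_attempts=3, base_delay=0.1, max_delay=2.0):
    """Invoke `op`, retrying failures with capped exponential backoff.

    max_attempts bounds the amplification a retrying caller adds to a
    struggling dependency; random jitter spreads the retries out in time.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except Exception:
            if attempt == max_attempts:
                raise  # retry budget spent: surface the error to the caller
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))  # "full jitter"

# A flaky dependency that fails twice, then recovers:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("dependency brownout")
    return "ok"

result = call_with_retries(flaky)
print(result)  # "ok", reached on the third attempt
```

With no cap, each of the N×M upstream callers would keep hammering the brownout service, which is exactly how a misconfigured retry turns a partial failure into a blackout.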

DeFi Lending: When Will It Threaten Traditional Lenders?

In our view, DeFi will be disruptive for financial-services companies even if almost all applications currently relate to digital assets. Banks, insurance companies and other traditional financial firms are considering the advantages of DLT solutions and monitoring developments in the DeFi market. Ignoring this trend might lead to a wake-up call in the future, although we think this is a few years off, given that DeFi is still in its infancy. DeFi lending could improve the liquidity of certain digital assets. Holders of better-established digital assets can diversify their portfolios by pledging existing digital assets for the purchase of other types. DeFi lending can, therefore, improve liquidity within the overall digital-assets ecosystem. That said, it does not come without risk. Given the typically collateralized nature of the activities, we believe that volatility in the valuations of the digital assets posted as collateral could translate into volatility in the valuations of the digital assets acquired. The volume of activities remains relatively low, but greater DeFi-lending volumes could ultimately lead to increased contagion risks between digital assets.
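The collateral mechanics behind that volatility risk can be made concrete with a small example. The sketch below computes a position's loan-to-value ratio and checks whether a price drop pushes it past a liquidation threshold; the 0.80 threshold and all prices are illustrative, since real protocols set per-asset parameters:

```python
def loan_to_value(collateral_units, collateral_price, debt):
    """Outstanding debt as a fraction of current collateral value."""
    return debt / (collateral_units * collateral_price)

def is_liquidatable(collateral_units, collateral_price, debt, threshold=0.80):
    """A position becomes liquidatable once its LTV exceeds the protocol's
    threshold. 0.80 is an illustrative figure; real protocols vary by asset."""
    return loan_to_value(collateral_units, collateral_price, debt) > threshold

# Borrow 1,000 against 1 token worth 2,000: LTV = 0.50, comfortably safe.
print(is_liquidatable(1, 2000, 1000))
# The collateral's price drops to 1,200: LTV rises to about 0.83, over the line.
print(is_liquidatable(1, 1200, 1000))
```

This is the contagion channel the excerpt describes: a fall in one digital asset's price forces liquidations, which in turn puts selling pressure on the assets acquired with the borrowed funds.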

The Evolution of Enterprise Architecture in an Increasingly Digital World

EA talent is hard to find. They must be comfortable with both business strategy and with the digital technologies necessary to implement the strategies. To better understand the key role played by EA teams in their companies, McKinsey conducted a survey that received over 150 responses from a variety of countries and industries. Respondents who described their companies as “digital leaders” said that EA teams add value by following several best practices, including: Engage top executives in key decisions. The most effective EA teams invest their time in understanding their company’s business needs. 60% of enterprise architects at companies considered digital leaders said they interacted most with C-suite executives and strategy departments, compared with just 24% of those in other companies. Digital transformations are more likely to succeed when a company’s senior leaders understand the impact of technology on the business “and commit their time to making decisions that seem technical but ultimately influence the success or failure of the company’s business aims.”

Turning up the scale knob on threat intelligence operations

The only way to harness the true potential of threat intelligence is to gain maximum benefit by fully leveraging that intelligence to facilitate rapid detection of and response to emerging threats. The need of the hour is modern-day threat intelligence platform (TIP) capabilities that come integrated within a comprehensive cyber fusion center that can drive the entire threat intelligence lifecycle management from ingestion to actioning and response in a fully automated way. Modern-day TIPs integrate frameworks like MITRE ATT&CK Navigator that enable you to gain insights into adversaries’ TTPs to identify trends across the kill chain and produce contextualized intelligence. Such TIPs have made operationalization of different types of threat intelligence—strategic, tactical, technical, and operational—possible for security teams. As threat intelligence continues to be the central theme in today’s cybersecurity programs, the need to scale threat intelligence capabilities has become vital for business and operational success.

Executive Q&A: The Value of Improved Data Management

There are three main challenges that enterprises face in achieving the maximum benefit from their data. First, the compounding effect of continually adding new data sources, and thus more data, dilutes the value of data under analysis. Adding demographic data enriches the data set, which is like adding electrolytes to tap water -- it is good and can be done easily. The challenge we face today is that we also have many new sources for the transaction data (e.g., from online purchases, business partners, and mobile apps). We suddenly have data for every page visit, every click, and every location. This is like upgrading the faucet in your kitchen to a fire hose. In theory you have access to a lot of water, but how much of it will go to waste if you don't have the right tool or technology to process it? Second, the increasing reliance on data captured or purchased in the cloud raises questions about how to rationalize on-premises data as part of an analytics strategy. For many organizations, data generated on premises cannot leave the confines of the firewall. This complicates the creation of a complete picture of the truth.

4 Ways Data Governance Can Improve Business Intelligence

Data is the lifeblood of all operational processes. It is an asset that needs to be managed so that it is highly accessible, easily usable and reusable, and highly secure. Developing effective data governance can help business owners streamline all operational processes and improve decision-making, so any potential efficiency gaps are easily mitigated. When properly implemented, it can reduce data inconsistencies to a minimum and remove the risk of human error from the equation. According to Statista, the US alone saw over 1,000 data breach cases, with over 150 million records exposed to cybercriminals. Granted, this is lower than in 2018, when 471 million records were exposed, and these attacks seem to have been decreasing lately, but the overarching trend since 2005 is alarming. We also need to address the insight provided by an Osterman Research study stating that companies typically move, store, and archive 75% of their critical data and intellectual property within their complex ecosystems of communication channels.

13 Areas Where NFTs Have Huge Potential!

Tokenization offers more transparency, and the transactions involved are easy to execute and, most importantly, cost-effective. Tokenized representation of intellectual property is also making inroads into the patent system, and IP-based NFTs are one way to deal with intellectual property. The IPwe platform allows patents to be represented as NFTs that are stored and shared on the platform, which is hosted on IBM Cloud and supported by IBM Blockchain. Clients can also trade, buy, license, finance, sell, research, and market patents there. The patent marketplace is the first of its kind, and companies benefit from treating and showcasing their patents as digital assets for security or to secure the value of their business. The freely accessible registry is supported by IBM AI and will be further expanded in the coming months. The registry features current, active, and historical patent records that can be tokenized as NFTs.

Quote for the day:

"Supreme leaders determine where generations are going and develop outstanding leaders they pass the baton to." -- Anyaele Sam Chiyson

Daily Tech Digest - November 27, 2021

Enhancing zero trust access through a context-aware security posture

A policy engine is the “brain” of a ZTA-based architecture, which dictates the level of scrutiny applied to human and machine network agents as they attempt to authenticate themselves and gain access to resources. These engines make decisions about whether to approve or deny access—or demand additional authentication factors—based on different factors including implied geolocation, time of day, threat intelligence indicators, and sensitivity of data being accessed. ZTA does not merely facilitate heightened scrutiny of network actors that behave suspiciously. It also allows for streamlined access by bona fide users to enhance productivity and reduce business interruptions resulting from security measures. Thus, properly implemented zero-trust systems achieve the best of both worlds: enhanced cybersecurity and more rapid generation and delivery of business value. To make this model even more powerful in the face of the evolving ransomware threat, I would suggest that ZTA systems incorporate additional factors—in concert with the aforementioned ones—to allow organizations to assume a context-aware security posture.
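A policy engine's decision logic can be sketched as a simple risk-scoring function over request context. The following Python sketch is purely illustrative: the signal names, weights, and thresholds are invented for this example and are not drawn from any ZTA product.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Context signals the policy engine evaluates (illustrative names).
    geo_country: str          # implied geolocation
    hour_utc: int             # time of day
    threat_score: float       # 0.0 (clean) .. 1.0 (known-bad indicators)
    data_sensitivity: str     # "public", "internal", or "restricted"

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'step_up' (demand another factor), or 'deny'."""
    risk = 0.0
    if req.geo_country not in {"US", "GB", "DE"}:   # example allow-list
        risk += 0.3
    if not 6 <= req.hour_utc <= 20:                 # off-hours access
        risk += 0.2
    risk += req.threat_score * 0.5                  # threat intel indicators
    if req.data_sensitivity == "restricted":
        risk += 0.2
    if risk >= 0.7:
        return "deny"
    if risk >= 0.3:
        return "step_up"
    return "allow"

print(decide(AccessRequest("US", 10, 0.0, "internal")))   # low-risk request
print(decide(AccessRequest("XX", 3, 0.9, "restricted")))  # high-risk request
```

Real engines weigh many more signals and consult live threat-intelligence feeds; the point is only that the decision is a function of request context, not just credentials.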

Key trends driving the workforce transformation in 2022

As employers look for ways to drive inclusion amidst new work models, connection will become a measurement of workforce culture. ADP Research Institute found that U.S. workers who feel they are Strongly Connected to their employer are 75 times more likely to be Fully Engaged than those who do not feel connected. With connection driving engagement, employers will need to heighten their focus on their people and reflect on the larger purpose that unites their workforce. Workforce flexibility will stretch beyond perceived limits and employers will embrace people-centered initiatives to build a workplace where everyone can thrive. Diversity, equity, and inclusion strategies will additionally evolve to drive true, measurable progress. ADP data shows more than 50 percent of companies that leveraged ADP DataCloud’s DEI analytics capabilities have taken action and realized positive impact on their DEI measures. With employees remaining remote and hybrid, operational and compliance considerations will grow, adding to an already complex regulatory environment. In fact, the survey found nearly 20 percent of U.S.

AI Weekly: UN recommendations point to need for AI ethics guidelines

While the policy is nonbinding, China’s support is significant because of the country’s historical — and current — stance on the use of AI surveillance technologies. According to the New York Times, the Chinese government — which has installed hundreds of millions of cameras across the country’s mainland — has piloted the use of predictive technology to sweep a person’s transaction data, location history, and social connections to determine whether they’re violent. ... Regardless of their impact, the UNESCO recommendations signal growing recognition on the part of policymakers of the need for AI ethics guidelines. The U.S. Department of Defense earlier this month published a whitepaper — circulated among the National Oceanic and Atmospheric Administration, the Department of Transportation, ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service — outlining “responsible … guidelines” that establish processes intended to “avoid unintended consequences” in AI systems. NATO recently released an AI strategy listing the organization’s principles for “responsible use [of] AI.”

From Naked Objects to Naked Functions

Naked Functions runs on .NET 6.0 and you can write your domain code in either C# or F# – I’ll use the former in the following code examples. The persistence layer is managed via Entity Framework Core, either relying on code conventions or explicit mapping. Naked Functions reflects over your domain code to generate a complete RESTful API – not just to the data but to all the functions too – and this RESTful API may be consumed via a Single Page Application (SPA) client. We provide a generic implementation of such a client, written in Angular. But where in Naked Objects you write only behaviourally complete domain objects, with Naked Functions you define only immutable domain types and pure side-effect free domain functions. You do not typically need to write any I/O at all, because the Naked Functions framework handles I/O with the client and the database transparently. Critically, your domain functions never make calls into the Naked Functions framework – it is entirely the other way around.
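Naked Functions domain code is written in C# or F#; purely as an illustration of the style it mandates, here is a rough Python analogy of an immutable domain type plus a pure, side-effect-free domain function (the type and function names are invented for this sketch, and persistence is assumed to be handled elsewhere, as the framework does):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)          # immutable domain type
class Customer:
    name: str
    credit_limit: int

def raise_credit_limit(c: Customer, amount: int) -> Customer:
    """A pure domain function: no I/O, no mutation. It returns a new
    value and leaves persistence to the surrounding framework."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return replace(c, credit_limit=c.credit_limit + amount)

before = Customer("Ada", 1000)
after = raise_credit_limit(before, 500)
print(before.credit_limit, after.credit_limit)  # 1000 1500
```

Because the function never touches a database or the framework, it is trivially testable in isolation, which is much of the appeal of the approach.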

Introducing the KivaKit Framework

KivaKit is an Apache-licensed, open source Java framework designed for implementing microservices. KivaKit requires a Java 11+ virtual machine but is source-compatible with Java 8 and 9 projects. KivaKit is composed of a set of carefully integrated mini-frameworks. Each mini-framework has a consistent design and its own focus, and can be used in concert with other mini-frameworks or on its own. ... Each mini-framework addresses a different issue that is commonly encountered when developing microservices. This article provides a brief overview of the mini-frameworks in the diagram above, and a sketch of how they can be used. ... In KivaKit, there are two ways to implement Repeater. The first is simply to extend BaseRepeater. The second is to use a stateful trait, or Mixin. Implementing the RepeaterMixin interface is the same as extending BaseRepeater, but the repeater mixin can be used in a class that already has a base class. Note that the same pattern is used for the Component interface discussed below.
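KivaKit itself is Java, so the following is only a loose Python analogy of the two options (not KivaKit's actual API): extend a base class to inherit messaging behaviour, or mix the same behaviour into a class that already has a different base class.

```python
class BaseRepeater:
    """Analogy for extending a base class to gain messaging behaviour."""
    def __init__(self):
        self.messages = []
    def receive(self, message):
        self.messages.append(message)   # would rebroadcast to listeners

class RepeaterMixin:
    """Analogy for a stateful trait: adds the same behaviour to a class
    that already extends something else."""
    def receive(self, message):
        if not hasattr(self, "messages"):
            self.messages = []
        self.messages.append(message)

class LegacyComponent:
    """Stand-in for a pre-existing base class you cannot change."""
    pass

# Converter already extends LegacyComponent, yet still gains
# repeater behaviour via the mixin.
class Converter(RepeaterMixin, LegacyComponent):
    pass

c = Converter()
c.receive("Problem: bad input")
print(c.messages)
```

The trade-off mirrors the Java one: base-class extension is simpler, while the mixin route keeps single inheritance free for the class's real parent.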

Your supply chain: How and why network security and infrastructure matter

Threats to the supply chain can take many forms, including malware attacks, piracy, unauthorized access to enterprise resources and data, and unintentional or maliciously injected backdoors in software source code. In addition to these threats, the hyper-connected structure of global supply chains creates additional complexity for organizations to manage and protect. Although one organization may have a strong security infrastructure in place, other firms, suppliers, and resellers they are in close communication with may not. As vendor networks become interconnected, the sharing of information (both intentional and unintentional) will occur. An accidental data leak indicates a weak spot in an organization’s network, giving the green light to malicious actors looking for a way into it. Attacks can happen at any tier of a supply chain, but most attackers will look for weaker spots to exploit, which then impacts the entire operation. Having a security-first mindset will help businesses stay ahead of threats. This means putting security at the center of the supply chain and making it a foundational element.

From digital transformation to work-life balance for talent, how the future of management consulting looks

As widespread digital acceleration occurs, a consulting firm will be expected to provide services spanning cyber security, design thinking, user-interface design, digital transformation and M&A deal-making. There will be greater expectations from clients that consulting firms own a bit of the transformation and become private equity-oriented partners. A lot of consulting firms are likely to embrace this route and come to resemble firms like Bain Capital today. With geopolitical complexities coming in, supply chain re-alignment for risk hedging is likely to emerge as a key piece of work. Also, emerging countries are likely to drive disproportionate growth for the industry. Another likely big change is that all consulting firms will offer the same services of strategy, design, implementation, cyber and M&A. The concept of the Big 3 (McKinsey, BCG, Bain) or Big 4 (PwC, EY, Deloitte, KPMG) will be outdated, since every consulting firm will compete on every deal. No case will ever be called a strategic piece of work.

What Makes A Good Product Owner?

There are many opinions about this in our community. For example, there are supposedly eight stances for Product Owners. Others argue that Product Owners are great when their team doesn’t need them. A common opinion is that Product Owners should actively experiment and test hypotheses. I gladly support these opinions. At the same time, I wonder what a scientific perspective has to offer. From our own quantitative research with 1,200 Scrum Teams, we know that teams are more effective when they are more aware of the needs of their stakeholders. And Product Owners certainly seem to play a role there. But as Unger-Windeler and her colleagues write (2019): “While [the] role is supposed to maximize the value of the product under development, there seemed to be several scattered results on how the Product Owner achieve this, as well as what actually constitutes this role in practice.” In this post, I explore scientific research that addresses the role of the Product Owner. So I opened Google Scholar and searched for all academic publications containing the phrase “Product Owner”.

Emerging tech in security and risk management to better protect the modern enterprise

When it comes to emerging technologies in security and risk management, Contu focused on eight areas: confidential computing; decentralized identity; passwordless authentication; secure access service edge (SASE); cloud infrastructure entitlement management (CIEM); cyber physical systems security; digital risk protection services; and external attack surface management. Many of these technologies are geared toward meeting the new requirements of multicloud and hybrid computing, Contu said. These emerging technologies also align to what Gartner has termed the “security mesh architecture,” where security is more dynamic, adaptable, and integrated to serve the needs of digitally transformed enterprises, he said. ... While still relatively new, secure access service edge (SASE) has gotten significant traction in the market because it’s a “very powerful” approach to improving security, Contu said. The term was first coined by Gartner analysts in 2019. SASE offers a more dynamic and decentralized security architecture than existing network security architectures, and it accounts for the increasing number of users, devices, applications, and data that are located outside the enterprise perimeter.

UK Legislation Seeks Mandatory Security Standards for IoT

Introduced to Parliament on Wednesday, the bill seeks to allow "the government to ban universal default passwords, force firms to be transparent to customers about what they are doing to fix security flaws in connectable products, and create a better public reporting system for vulnerabilities found in those products," according to the government's Department for Digital, Culture, Media & Sport. The bill was developed by DCMS together with Britain's national incident response team, the National Cyber Security Centre, which is part of intelligence agency GCHQ. The bill also includes a proposal to appoint a regulator to oversee compliance with the standards, backed by the ability to fine violators up to 10 million pounds ($13.3 million) or up to 4% of a firm's global revenue, whichever is greater. "The regulator will also be able to issue notices to companies, requiring that they comply with the security requirements, recall their products, or stop selling or supplying them altogether."
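The penalty rule is easy to state precisely. A trivial sketch of the "whichever is greater" calculation, using the figures reported above (the function name is invented for this example):

```python
def max_fine_gbp(global_revenue_gbp: float) -> float:
    """Maximum penalty under the proposal: a flat 10 million pounds
    or 4% of global revenue, whichever is greater."""
    return max(10_000_000, 0.04 * global_revenue_gbp)

print(max_fine_gbp(100_000_000))    # small firm: the flat 10m floor applies
print(max_fine_gbp(1_000_000_000))  # large firm: 4% of revenue (40m) applies
```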

Quote for the day:

"I think the greater responsibility, in terms of morality, is where leadership begins." -- Norman Lear

Daily Tech Digest - November 25, 2021

There’s a month for cyber awareness, but what about data literacy?

Just as there’s currently a month devoted to raising cybersecurity awareness, we need a data literacy month. Ideally, what we need to aim for is not just one month, but a comprehensive educational push across all industries that could benefit from data-driven decision-making. But the journey of a thousand miles starts with a single step, and an awareness month would serve as a perfect springboard. When planning such initiatives, we must make sure they do not descend into another boring PowerPoint presentation. Instead, we need to clearly demonstrate how data can help employees with the tasks they perform every day. Tailoring the training sessions to the needs of individual teams or departments, businesses must first and foremost think of situations specific employees find themselves in on a regular basis. Take a content marketing or demand generation team, for example: A simple comparison of the conversion rates on several landing pages, which they most likely work on frequently, is a good way to not just figure out the optimal language and layout, but also to introduce such statistical concepts as population, sample, and P-value.
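The landing-page comparison mentioned above comes down to a two-proportion z-test, which fits in a few lines of standard-library Python. The visit and conversion counts here are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: do two landing pages convert at different rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)    # pooled sample proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Page A: 120 conversions out of 2,000 visits; Page B: 90 out of 2,000.
p = two_proportion_p_value(120, 2000, 90, 2000)
print(f"p-value = {p:.3f}")
```

Walking a marketing team through an example like this is exactly the kind of hands-on exercise that makes population, sample, and p-value concrete rather than abstract.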

Encouraging more women within cyber security

To begin with, women must be encouraged into the field of cyber security, whether from a younger age during school studies or university courses, by offering varied entry pathways into the industry, or by making it easier to return after a break. These hurdles into the sector have to be addressed. Each business has a part to play when it comes to ensuring that their organisation meets the requirements of all of their employees. Whether through remote or hybrid working, reduced hours or adequate maternity and paternity support, working hours should be flexible enough to suit the needs of the employee. A “return to work scheme” would greatly benefit women if companies were to implement one. This can help those who have had a break from the industry get back into work – and this doesn’t necessarily mean limiting them to roles such as customer support, sales and marketing. HR teams must also do better when it comes to job descriptions, ensuring they appeal to a wider audience, offer flexibility and that the recruitment pool is as diverse as can be.

Misconfiguration on the Cloud is as Common as it is Costly

Very few companies had the IT systems in place to handle even 10% of employees working remotely at any given time. They definitely were not built to handle a 100% remote workforce. To solve this problem and enable business operations, organizations of all sizes turned to the public cloud. The public cloud was built to be always on, available from anywhere, and could handle the surges in capacity that legacy infrastructure could not. Cloud applications were the solution to enabling remote workers and continuing business operations. With that transition came new risks: organizations were forced to rapidly adopt new access policies, deploy new applications, onboard more users to the cloud, and support them remotely. To make matters worse, the years of investment in “defense in depth” security for corporate networks suddenly became obsolete.  No one was prepared for this. It should come as no surprise that the leading causes of data breaches in the cloud can be traced back to mistakes made by the customer, not a security failure by the cloud provider. When you add to that the IT staff’s relative unfamiliarity with SaaS, the opportunities to misconfigure key settings proliferate.

From fragmented encryption chaos to uniform data protection

Encrypting data during runtime has only recently become feasible. This type of technology is built directly into the current generation public cloud infrastructure (including clouds from Amazon, Microsoft, and others), ensuring that runtime data can be fully protected even if an attacker gains root access. The technology shuts out any unauthorized data access using a combination of hardware-level memory encryption and/or memory isolation. It’s a seemingly small step that paves the way for a quantum leap in data security—especially in the cloud. Unfortunately, this protection for runtime data has limited efficacy for enterprise IT. Using it alone requires each application to be modified to run over the particular implementation for each public cloud. Generally, this involves re-coding and re-compilation—a fundamental roadblock for adoption for already stressed application delivery teams. In the end, this becomes yet another encryption/data security silo to manage—on each host—adding to the encryption chaos.

An Introduction to Event Driven Microservices

Event-driven microservices also help in the development of responsive applications. Let us understand this with an example. Consider the notification service we just talked about. Suppose the notification service needs to inform the user when a new notification is generated and stored in the queue. Assume that there are several concurrent users attempting to access the application and learn which notifications have been processed. In the event-driven model, all alerts are queued before being forwarded to the appropriate user. In this situation, the user does not have to wait while the notification (email, text message, etc.) is being processed. The user can continue to use the application while the notification is processed asynchronously. This is how you can make your application responsive and loosely coupled. Although traditional applications are useful for a variety of use cases, they face availability, scalability, and reliability challenges. Typically, you’d have a single database in a monolithic application, so providing support for polyglot persistence was difficult.
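The asynchronous hand-off described above can be sketched with an in-process queue and a worker thread. This is only a stand-in for a real message broker, and the event names are invented for the example:

```python
import queue
import threading
import time

notifications = queue.Queue()
processed = []

def notification_worker():
    """Consumes events asynchronously; producers never block on this."""
    while True:
        event = notifications.get()
        time.sleep(0.01)            # simulate sending an email/SMS
        processed.append(f"sent: {event}")
        notifications.task_done()

worker = threading.Thread(target=notification_worker, daemon=True)
worker.start()

# The user-facing request handler just enqueues and returns immediately.
for event in ("order-placed", "payment-received"):
    notifications.put(event)
print("request handled; notifications are sent in the background")

notifications.join()                # wait here only to demonstrate completion
print(processed)
```

In a real deployment the queue would be a broker such as Kafka or RabbitMQ, and the producer and consumer would be separate services, but the decoupling principle is the same.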

Software Engineering Best Practices That High-Performing Teams Follow

There's a legacy trope of the (usually male) virtuoso coder, a loner maverick, who works in isolation, speaks to no one ... He manages to get a pass for all his misdeeds as his code is so good it could be read as a bedtime story. I'm here to say, those days are over. I cringe a bit when I hear the term, coding is a team sport, but it's true. Being a good engineer is about being a good team member. General good work practices like being reliable and honest are important. Also, owning up to your mistakes, and not taking credit for someone else's work. It's about having the ability to prioritize your own tasks and meet deadlines. But it's also about how you relate to others in your team. Do people like working with you? If you aren't sociable, then you can at least be respectful. Is your colleague stuck? Help them! You might feel smug that your knowledge or skills exceed theirs, but it's a bad look for the whole team if something ships with a bug or there's a huge delay. Support newbies, do your share of the boring work, embrace practices like pair programming. 

Top 10 DevOps Things to Be Thankful for in 2021

Low-code platforms: As the demand for applications to drive digital workflows spiked in the wake of the pandemic, professional developers relied more on low-code platforms to decrease the time required to build an application. ... Microservices: As an architecture for building applications the core concept of employing loosely coupled services together to construct an application goes all the way back to when service-oriented applications (SOA) were expected to be the next big thing in the 1990s. Microservices have, of course, been around for several years themselves. ... Observability: As a concept, observability traces its lineage to linear dynamic systems. Observability in its most basic form measures how well the internal states of a system can be inferred based on knowledge of its external outputs. In the past year, a wide range of IT vendors introduced various types of observability platforms. These make it easier for DevOps teams to query machine data in a way that enables them to proactively discover the root cause of issues before they cause further disruption.

How to Save Money on Pen Testing - Part 1

The quality of the report is the most important criterion for me when choosing a pen test vendor - provided they have adequately skilled testers. It's the report that your organization will be left with when the testers have moved on to their next engagement. Penetration testing is expensive, and the pre-canned "advice" delivered in a pen test report is often worthless and alarmist. I know; I've written my fair share of pen test reports in the past. Terms like "implement best practice" do nothing to drive the change needed to uplift an organization's security posture. Look for reports that deliver pragmatic remediation advice, including configuration and code snippets. Most importantly, review sample reports for alarmist findings such as cookie flags marked as "High Risk" - a pet hate of mine. ... Also, look for vendors that take reporting further by integrating with your ticketing system to raise tickets for issues they find or that provide videos of their hacks, which can show how simply an attacker can exploit technical security issues.

AI will soon oversee its own data management

AI brings unique capabilities to each step of the data management process, not just by virtue of its capability to sift through massive volumes looking for salient bits and bytes, but by the way it can adapt to changing environments and shifting data flows. For instance, according to David Mariani, founder of, and chief technology officer at, AtScale, just in the area of data preparation, AI can automate key functions like matching, tagging, joining, and annotating. From there, it is adept at checking data quality and improving integrity before scanning volumes to identify trends and patterns that otherwise would go unnoticed. All of this is particularly useful when the data is unstructured. One of the most data-intensive industries is health care, with medical research generating a good share of the load. Small wonder, then, that clinical research organizations (CROs) are at the forefront of AI-driven data management, according to Anju Life Sciences Software. For one thing, it’s important that data sets are not overlooked or simply discarded, since doing so can throw off the results of extremely important research.
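As a toy illustration of the "matching" step alone (a real AI-driven pipeline would use learned entity resolution, not string similarity; the column names below are invented), even standard-library Python can suggest likely matches between incoming fields and a catalog:

```python
from difflib import SequenceMatcher

catalog = ["customer_id", "order_total", "shipping_address"]
incoming = ["CustomerID", "order_tot", "ship_addr", "loyalty_tier"]

def best_match(field, candidates, threshold=0.6):
    """Suggest the closest known column name, or None if nothing is close."""
    scored = [(SequenceMatcher(None, field.lower(), c).ratio(), c)
              for c in candidates]
    score, name = max(scored)
    return name if score >= threshold else None

for field in incoming:
    print(field, "->", best_match(field, catalog))
```

An AI-driven system goes far beyond this, learning from data values and usage patterns rather than names, but the automation target (tedious, error-prone manual mapping) is the same.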

Why digital transformation success depends on good governance

Since digital innovation is by its very nature new, business leaders should ensure that the policies, processes and governance models used support digitalisation, rather than block it, and are commensurate with the technologies being utilised. Appointing a core team of accountable leaders will help create focus and clarity around governance responsibilities. As governance champions, they can ensure every transformation project begins with a governance mindset and is governed by behaviours that include a desire to ‘do the right thing’. As part of this process, the digitalisation of governance processes and control mechanisms will help reduce any risk of compliance failures. Today’s governance platforms can help remove the guesswork from digital governance programmes, making it possible to devise highly structured frameworks that reduce systemic risk, enabling organisations to rise to the challenge of becoming digital-first in a truly ethical and streamlined way.

Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - November 24, 2021

The Importance of IT Security in Your Merger Acquisition

There is no question that cybersecurity risks and threats are growing exponentially. A report from Cybersecurity Ventures estimated that a ransomware attack on businesses would happen every 11 seconds in 2021 and that global ransomware costs in 2021 would exceed $20 billion. It seems there are constantly new reports of major ransomware attacks, costing victims millions of dollars. Earlier this year, the major ransomware attack on Colonial Pipeline resulted in disruptions that caused fuel shortages all over the East Coast of the United States. It helped to show that ransomware attacks on critical service companies can lead to real-world consequences and widespread disruption. This world of extreme cybersecurity risks serves as the backdrop for business acquisitions and mergers. A Gartner report estimated that 60% of organizations involved in M&A activities consider cybersecurity a critical factor in the overall process. In addition, some 73% of businesses surveyed said that a technology acquisition was the top priority for their M&A activity, and 62% agreed there was a significant cybersecurity risk in acquiring new companies.

The Language Interpretability Tool (LIT): Interactive Exploration and Analysis of NLP Models

LIT supports local explanations, including salience maps, attention, and rich visualizations of model predictions, as well as aggregate analysis including metrics, embedding spaces, and flexible slicing. It allows users to easily hop between visualizations to test local hypotheses and validate them over a dataset. LIT provides support for counterfactual generation, in which new data points can be added on the fly, and their effect on the model visualized immediately. Side-by-side comparison allows for two models, or two individual data points, to be visualized simultaneously. More details about LIT can be found in our system demonstration paper, which was presented at EMNLP 2020. ... In order to better address the broad range of users with different interests and priorities that we hope will use LIT, we’ve built the tool to be easily customizable and extensible from the start. Using LIT on a particular NLP model and dataset only requires writing a small bit of Python code. 

How software development will change in 2022

Local development environments are now largely the only part of the software development lifecycle that runs locally on a developer’s computer. Automated builds, staging environments and running production applications have largely moved from local computers to the cloud. Microsoft and Amazon have both been working hard on addressing this challenge. In August this year, Microsoft released GitHub Codespaces to general availability. GitHub Codespaces offers full development environments, accessible using just a web browser, that can start in seconds. The service allows technology teams who store their code in Microsoft’s GitHub service to develop using the Visual Studio Code editor fully in the cloud. Amazon has its own solution to this problem, with AWS Cloud9 allowing developers to edit and run their code from the cloud. Startups have also been created to address this problem – in April, Gitpod announced it had raised $13m for its solution to move software development to the cloud.

Microservices — The Letter and the Spirit

Ideally, services don’t interact with each other directly. Instead, they use an integration service to communicate, commonly a service bus. Your goal here is making each service independent from other services, so that each service has everything it needs to start a job and doesn’t care what happens after it completes that job. In the exceptional cases when a service calls another service directly, it must handle the situations when that second service fails. ... Microservices present us with an interesting challenge – on the one hand, the services should be decoupled, yet on the other hand all should be healthy for the solution to perform well, so they must evolve gracefully without breaking the solution. ... There are multiple ways to do versioning; any convention would do. I like the three-digit semantic versioning 0.0.0, as it is widely understood by most developers, and it is easy to tell what type of change a service made just by looking at which of the three digits was updated.
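The three-digit convention makes the type of change mechanically checkable. A minimal sketch (the function name and labels are invented for this example):

```python
def change_type(old: str, new: str) -> str:
    """Classify a semver bump by which of the three digits changed first."""
    o = [int(x) for x in old.split(".")]
    n = [int(x) for x in new.split(".")]
    if n[0] != o[0]:
        return "major (breaking change)"
    if n[1] != o[1]:
        return "minor (backwards-compatible feature)"
    if n[2] != o[2]:
        return "patch (bug fix)"
    return "no change"

print(change_type("1.4.2", "2.0.0"))  # major (breaking change)
print(change_type("1.4.2", "1.5.0"))  # minor (backwards-compatible feature)
```

A consumer can then decide at a glance whether it is safe to pick up a new release of a dependency service.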

All Roads Lead To OpenVPN: Pwning Industrial Remote Access Clients

OpenVPN was written by James Yonan and is free software, available under the terms of the GNU General Public License version 2 (GPLv2). As a result, many different systems support OpenVPN. For example, DD-WRT, a Linux-based firmware used in wireless routers, includes a server for OpenVPN. Due to its popularity, ease of use, and features, many companies have chosen OpenVPN as part of their solution. It’s a feasible option for organizations that want to create a secure tunnel with a couple of new features. Rather than reinventing the wheel, the company will most likely use OpenVPN as its foundation. In the past year, due to the increased popularity and growing remote workforce, Claroty Team82 was busy researching VPN/remote-access solutions. The majority of them included OpenVPN as part of the secure remote access solution while the vendor application is a wrapper that manages the OpenVPN instance. After inspecting a couple of such products, we identified a key problem with the way these types of products harness OpenVPN—a problem that, in most cases, can lead to a remote code execution just by luring a victim to a malicious website.

More Stealthier Version of BrazKing Android Malware Spotted in the Wild

"It turns out that its developers have been working on making the malware more agile than before, moving its core overlay mechanism to pull fake overlay screens from the command-and-control (C2) server in real-time," IBM X-Force researcher Shahar Tavor noted in a technical deep dive published last week. "The malware […] allows the attacker to log keystrokes, extract the password, take over, initiate a transaction, and grab other transaction authorization details to complete it." The infection routine kicks off with a social engineering message that includes a link to an HTTPS website that warns prospective victims about security issues in their devices, while prompting an option to update the operating system to the latest version. ... BrazKing, like its predecessor, abuses accessibility permissions to perform overlay attacks on banking apps, but instead of retrieving a fake screen from a hardcoded URL and present it on top of the legitimate app, the process is now conducted on the server-side so that the list of targeted apps can be modified without making changes to the malware itself.

Common Cloud Misconfigurations Exploited in Minutes, Report

Unit 42 conducted the cloud-misconfiguration study between July 2021 and August 2021, deploying 320 honeypots with even distributions of SSH, Samba, Postgres and RDP across four regions, including North America (NA), Asia Pacific (APAC) and Europe (EU). The researchers analyzed the time, frequency and origins of the attacks observed against that infrastructure. To lure attackers, they intentionally configured a few accounts with weak credentials such as admin:admin, guest:guest and administrator:password, which granted limited access to the application in a sandboxed environment. They reset each honeypot after a compromising event, i.e., when a threat actor successfully authenticated via one of the credentials and gained access to the application. ... The team analyzed attacks according to a variety of patterns, including: the time attackers took to discover and compromise a new service; the average time between two consecutive compromising events on a targeted application; the number of attacker IPs observed on a honeypot; and the number of days an attacker IP was observed.
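The attack-pattern metrics listed above reduce to simple computations over per-honeypot event timestamps. A minimal sketch of that bookkeeping (the function and field names are illustrative assumptions, not the report's actual schema):

```python
from datetime import datetime, timedelta

def honeypot_stats(deployed_at, compromise_times, attacker_ips):
    """Summarize one honeypot's attack pattern, per the study's metrics:
    time to first compromise, mean gap between consecutive compromising
    events, and the count of distinct attacker IPs observed."""
    times = sorted(compromise_times)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "time_to_first_compromise": times[0] - deployed_at if times else None,
        "mean_gap_between_events": sum(gaps, timedelta()) / len(gaps) if gaps else None,
        "distinct_attacker_ips": len(set(attacker_ips)),
    }

t0 = datetime(2021, 7, 1)
stats = honeypot_stats(
    deployed_at=t0,
    compromise_times=[t0 + timedelta(minutes=21), t0 + timedelta(hours=2)],
    attacker_ips=["203.0.113.5", "198.51.100.7", "203.0.113.5"],
)
print(stats["time_to_first_compromise"])  # 0:21:00
```

Aggregating these per-honeypot summaries across services and regions gives figures like the study's headline time-to-compromise numbers.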

Getting real about DEI means getting personal

Leaders also need to know themselves and their own biases. “We learn biases through the media, family, friends, and educators over time and often don’t realize that they’re causing harm,” Epler explained. She called out her own struggles with nonbinary gender pronouns. I can relate. When you grow up in a Dick-and-Jane world, it isn’t easy to switch pronouns and learn new ones that conflict with grammatical rules that have become baked into your DNA after decades of writing. If you aren’t aware of your biases, they are likely to manifest in microaggressions, if not something worse. “Microaggressions are everyday slights, insults, and negative verbal and nonverbal communications that, whether intentional or not, can make someone feel belittled, disrespected, unheard, unsafe, other, tokenized, gaslighted, impeded, and/or like they don’t belong,” writes Epler in her book. When leaders witness microaggressions, they must defend the people subjected to them.

IT hiring: 5 ways to attract talent amidst the Great Resignation

By now, perhaps your organization has its remote work environment down to a science. Ask yourself what resources you can promote to potential new hires that will instill confidence in their decision to move forward with your company. Especially for recent graduates just entering the workforce, a commitment to help them transition and build success from the start can help move the needle in your organization’s favor. Earlier this year, for example, social media software company Buffer found success by offering new hires $500 to set up their home office. According to one employee engagement blog, Buffer also offers its employees coworking space stipends and internet reimbursement. To increase engagement and productivity, consider what portion of your resources you can allocate to designing a premium onboarding experience for new hires. A strong career growth curve is a must-have for recent grads. Making your career advancement initiatives clear in the early stages of the recruiting process is a win-win for organizations and employees alike.

Report: China to Target Encrypted Data as Quantum Advances

The Booz Allen Hamilton researchers note that since approximately 2016, China has emerged as a major quantum-computing research and development center, backed by substantial policy support at the highest levels of its government. Still, the country's quantum experts have suggested that they remain behind the U.S. in several quantum categories - though China hopes to surpass the U.S. by the mid-2020s. While experts say this is unlikely, China may surpass Western nations in early use cases, the report states. Advancements in quantum simulations, the researchers contend, may expedite the discovery of new drugs, high-performance materials and fertilizers, among other key products. These are areas that align with the country's strategic economic plan, which historically parallels its economic espionage efforts. "In the 2020s, Chinese economic espionage will likely increasingly steal data that could be used to feed quantum simulations," researchers say, though they claim it is unlikely that Chinese computer scientists will be able to break current-generation encryption before 2030. 

Otomi: OSS Developer Self-Service for Kubernetes

The ultimate goal of developer self-service is to reduce friction in the development process and ensure that developers can deliver customer value faster. This is achieved by separating the concerns of dev and ops teams. The ops team manages the stack and enforces governance and compliance with security policies and best practices. Dev teams can create new environments on demand, create and expose services using best practices, use ready-made templatized options, and get direct access to all the tools they need for visibility. Think of it as paving the road toward fast delivery while minimizing risk through safeguards and standards: developers can do what they need to do, when they like, though sometimes not exactly how they would like to do it. The challenge is that building such a platform takes a lot of time, and not all organizations have the resources to do so. The goal of the Otomi open-source project is to offer all of this out of the box in a single deployable package.
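The "self-service with safeguards" idea can be sketched in a few lines. The example below is a hypothetical illustration, not Otomi's actual implementation; the team name, labels, and quota limits are all assumptions. An on-demand team environment is provisioned as a Kubernetes namespace that always arrives paired with an ops-enforced guardrail:

```python
def team_environment(team, cpu_limit="4", mem_limit="8Gi"):
    """Build Kubernetes manifests for a self-service team environment:
    a Namespace for the dev team plus a ResourceQuota guardrail that
    the ops team attaches automatically."""
    name = f"team-{team}"
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": name, "labels": {"self-service": "true"}},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "team-quota", "namespace": name},
        "spec": {"hard": {"limits.cpu": cpu_limit, "limits.memory": mem_limit}},
    }
    return [namespace, quota]

manifests = team_environment("payments")
print([m["kind"] for m in manifests])  # ['Namespace', 'ResourceQuota']
```

A real platform would apply these manifests through the Kubernetes API or a GitOps pipeline and layer on network policies, RBAC, and ready-made service templates; the point is that the developer asks for an environment and the guardrails come with it.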

Quote for the day: 

"Leaders who won't own failures become failures." -- Orrin Woodward