Daily Tech Digest - April 30, 2022

Deep Dive into CQRS — A Great Microservices Pattern

If you want to implement the CQRS pattern in an API, it is not enough to separate the routes via POST and GET. You also have to think about how you can ensure that a command doesn’t return anything, or at least nothing but metadata. The situation is similar for the Query API. Here, the URL path describes the desired query, but in this case the parameters are transmitted in the query string, since it is a GET request. Because the queries run against the read-optimized, denormalized database, they can be executed quickly and efficiently. The problem, however, is that without regularly polling the query routes, a client does not find out whether a command has already been processed and what the result was. It is therefore recommended to use a third API, the Events API, which announces events via push notifications over WebSockets, HTTP streaming, or a similar mechanism. Anyone who knows GraphQL and is reminded of the concepts of mutation, query, and subscription by this description of the Commands, Query, and Events APIs is on the right track: GraphQL is ideal for implementing CQRS-based APIs.
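The separation described here can be sketched in a few lines of Python (an illustrative in-memory model, not a production API; the names `handle_add_item` and `query_items_under` are invented for the example): the command mutates the write side and returns only metadata, while the query reads from the denormalized read model.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import uuid

@dataclass
class Store:
    events: List[dict] = field(default_factory=list)           # write side (append-only)
    read_model: Dict[str, dict] = field(default_factory=dict)  # denormalized read side

def handle_add_item(store: Store, name: str, price: float) -> dict:
    """Command: mutates state and returns nothing but metadata (the new id)."""
    item_id = str(uuid.uuid4())
    store.events.append({"type": "ItemAdded", "id": item_id,
                         "name": name, "price": price})
    # Projection: keep the read model denormalized and query-ready.
    store.read_model[item_id] = {"name": name, "price": price}
    return {"id": item_id}  # metadata only, no domain payload

def query_items_under(store: Store, max_price: float) -> List[dict]:
    """Query: reads the denormalized model, never mutates it."""
    return [v for v in store.read_model.values() if v["price"] <= max_price]
```

A client that needs to know when its command took effect would, per the text, subscribe to an Events API rather than poll the query route.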

Why hybrid intelligence is the future of artificial intelligence at McKinsey

“One thing that hasn’t changed: our original principle of combining the brilliance of the human mind and domain expertise with innovative technology to solve the most difficult problems,” explains Alex Sukharevsky. “We call it hybrid intelligence, and it starts from day one on every project.” AI initiatives are known to be challenging; only one in ten pilots moves into production with significant results. “Adoption and scaling aren’t things you add at the tail end of a project; they’re where you need to start,” points out Alex Singla. “We bring our technical leaders together with industry and subject-matter experts so they are part of one process, co-creating solutions and iterating models. They come to the table with the day-to-day insights of running the business that you’ll never just pick up from the data alone.” Our end-to-end and transformative approach is what sets McKinsey apart. Clients are taking notice: two years ago, most of our AI work was single use cases, and now roughly half is transformational. Another differentiating factor is the assets created by QuantumBlack Labs. 

Top 10 FWaaS providers of 2022

As cloud solutions continued to evolve, cloud-based security services had to follow their lead, and this is how firewall as a service (FWaaS) came into existence. In short, FWaaS took the last stage of firewall evolution, the next-generation firewall (NGFW), and moved it from a physical device to the cloud. Employing FWaaS in place of an old-fashioned firewall brings plenty of benefits, among them simplicity, superior scalability, improved visibility and control, protection of remote workers, and cost-effectiveness. ... Unlike old-fashioned firewalls, Perimeter 81’s solution can safeguard multiple networks and control access to all data and resources of an organization. Some of its core features include identity-based access, global gateways, precise network segmentation, object-based configuration management, multi-site management, a protected DNS system, safe remote work, a wide variety of integrations, flexible features, and scalable pricing. ... Secucloud’s FWaaS is a zero-trust, next-gen, AI-based solution that utilizes threat intelligence feeds, secures traffic through its own VPN tunnel, and operates as a proxy, providing an additional layer of security to your infrastructure.

Automated Security Alert Remediation: A Closer Look

To properly implement automated security alert remediation, you must choose the remediation workflow that works best for your organization. Alert management relies on scripted workflows that match a given rule to identify possible vulnerabilities and execute resolution tasks. With automation, workflows are triggered by asset rules, and the remediation activity logs are continuously inspected as remediation executes. Organizations build automated remediation workflows to improve mean time to response and remediation. For example, remediation alert playbooks aid in investigating events, blocking IP addresses, or adding an IOC on a cloud firewall. There are also interactive playbooks that can help remediate issues like a DLP incident on a SaaS platform while also educating the user via dynamic interactions through the company’s communication tools. The typical alert remediation workflow consists of multiple steps: it begins with the creation of a new asset policy, follows with the selection of a remediation action rule, and concludes with continued observation of the automatically quarantined assets.
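Such a workflow can be sketched in a few lines of Python (a minimal in-memory model; the rule fields and the `block_ip` action are invented for illustration): an asset rule triggers a scripted action, and every step is written to an activity log for later observation.

```python
# Toy remediation workflow: a matching rule fires a scripted action,
# and each outcome is appended to the remediation activity log.
ACTIONS = {
    "block_ip": lambda alert: f"blocked {alert['src_ip']}",
}

def remediate(alert, rules, log):
    """Run the first matching rule's action; log the result either way."""
    for rule in rules:
        if alert.get(rule["field"]) == rule["equals"]:
            result = ACTIONS[rule["action"]](alert)
            log.append({"alert": alert["id"], "action": rule["action"],
                        "result": result})
            return result
    log.append({"alert": alert["id"], "action": None,
                "result": "no rule matched"})
    return None
```

In a real system the actions would call firewall or SaaS APIs and the log would feed the continued-observation step the text describes.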

Experts outline keys for strong data governance framework

It's needed to manage risk, which could be anything from the use of low-quality data that leads to a bad decision to running afoul of regulatory restrictions. It's also needed to foster informed decisions that lead to growth. But setting limits on which employees can use what data, further limiting how certain employees can use data depending on their roles, and simultaneously encouraging those same employees to explore and innovate with data are seemingly opposing principles. A good data governance framework therefore finds an equilibrium between risk management and enablement, according to Sean Hewitt, president and CEO of Succeed Data Governance Services, who spoke during a virtual event on data governance hosted by Eckerson Group on April 26. A good framework gives employees confidence that whatever data exploration and decision-making their roles involve, proper governance guardrails are in place, so they can explore and decide safely and securely without hurting their organization.

Augmented data management: Data fabric versus data mesh

The data fabric architectural approach can simplify data access in an organization and facilitate self-service data consumption at scale. This approach breaks down data silos, allowing for new opportunities to shape data governance, data integration, single customer views and trustworthy AI implementations, among other common industry use cases. Since it's uniquely metadata-driven, the abstraction layer of a data fabric makes it easier to model, integrate and query any data sources, build data pipelines, and integrate data in real time. A data fabric also streamlines deriving insights from data through better data observability and data quality, by automating manual tasks across data platforms using machine learning. ... The data mesh architecture is an approach that aligns data sources by business domains, or functions, with data owners. With data ownership decentralization, data owners can create data products for their respective domains, meaning data consumers, both data scientists and business users, can use a combination of these data products for data analytics and data science.

Embracing the Platinum Rule for Data

It’s much easier to innovate around one platform and one set of data. By making this a business imperative and not just an IT one, you can connect data into the applications that matter. For example, creating a streamlined procure-to-pay and order-to-cash process is possible only because we’ve broken down data silos. We are now capable of distributing new customer orders to the optimum distribution facility, based on the final destination and available inventory, in minutes, versus the multiple phone calls and data entry in multiple systems that previously would have taken hours and resources. The speed and effectiveness of these processes have led to multiple customer awards. Our teams need to store data in ways that are harmonized before our users start to digest and analyze the information. Today many organizations have data in multiple data lakes and data warehouses, which increases the time to insights and the chance for error because of multiple data formats. ... As data flows through Prism, we’re able to visualize that same data across multiple platforms while being confident in one source of the truth.

The Purpose of Enterprise Architecture

The primary purpose of the models is to help the architect understand the system being examined: how it works today, how it can be most effectively changed to reach the aspirations of the stakeholders, and what the implications and impacts of the change are. A secondary purpose is re-use. It is simply inefficient to re-describe the Enterprise. The efficiency of consistency is balanced against the extra energy needed to describe more than is needed, and to train those who write and read the descriptions in formal modeling. The size, geographic distribution, and purpose of the EA team will dramatically impact the level of consistency and formality required. Formal models are substantially more re-usable than informal models. Formal models are substantially easier to extend across work teams. The penalty is that formal models require semantic precision. For example, regardless of the structure of an application in the real world, it must be represented in a model conforming to the formal definition. This representation is possible with a good model definition.

Staying Agile: Five Trends for Enterprise Architecture

Continuous improvement is a cornerstone of agile digital business design. Organizations want to deliver more change, with higher-quality results, simultaneously. Progressive, mature EAs are now designing the system that builds the system, redesigning and refactoring the enterprise’s way of working. This goal is a fundamental driver for many of these trends. In pursuing it, it’s important to remember that the perfect business design isn’t easily achievable. Trying one approach, learning through continuous feedback, and making adjustments is a rinse-and-repeat process. For example, a business might use the Team Topologies technique to analyze the types of work that teams are performing and then reorganize those teams in order to minimize cognitive load – for instance by assigning one set of teams to focus on a particular value stream while others focus solely on enabling technical capabilities. These adjustments might need to happen multiple times until the right balance is found to ensure optimal delivery of customer value and team autonomy.

Blockchain and GDPR

Given that the ruling grants EU persons the right to contest automated decisions, and smart contracts running on a blockchain are effectively making automated decisions, the GDPR needs to be taken into account when developing and deploying smart contracts that use personal data in the decision-making process and produce a legal effect or other similarly significant effect. Smart contract overrides. The simplest means of ensuring smart contract compliance is to include code within the contract that allows a contract owner to reverse any transaction conducted. There are, however, a number of problems that could arise from this. ... As the appeal time can be long, many such actions may have been taken after the original contract decision, and it may not even be possible to roll back all the actions. Consent and contractual law. A second approach is to ensure that the users activating the smart contract are aware that they are entering into such a contract, and that they provide explicit consent. The GDPR provides the possibility of waiving the contesting of automated decisions under such terms, but the smart contract would have to put any subsequent actions on hold until consent is obtained.
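The owner-override approach can be illustrated with a toy contract in Python (a deliberately simplified in-memory model, not real chain code; the class and method names are invented for the example): every transaction is recorded, and only the owner may issue the compensating reversal.

```python
class ReversibleContract:
    """Toy smart contract with an owner override as a compliance escape hatch."""

    def __init__(self, owner):
        self.owner = owner
        self.balances = {}
        self.history = []  # append-only transaction log

    def transfer(self, src, dst, amount):
        """Record and apply a transfer; return its transaction id."""
        self.balances[src] = self.balances.get(src, 0) - amount
        self.balances[dst] = self.balances.get(dst, 0) + amount
        self.history.append((src, dst, amount))
        return len(self.history) - 1

    def reverse(self, caller, tx_id):
        """Owner-only override: issue a compensating transaction."""
        if caller != self.owner:
            raise PermissionError("only the contract owner may reverse")
        src, dst, amount = self.history[tx_id]
        self.transfer(dst, src, amount)
```

The article's caveat applies directly: once later transactions depend on the reversed one, a single compensating transfer like this may no longer restore a consistent state.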

Quote for the day:

"Making good decisions is a crucial skill at every level." -- Peter Drucker

Daily Tech Digest - April 29, 2022

Scrumfall: When Agile becomes Waterfall by another name

Agile is supposed to be centered on people, not processes — on people collaborating closely to solve problems together in a culture of autonomy and mutual respect, a sustainable culture that values the health, growth, and satisfaction of every individual. There is a faith embedded in the manifesto that this approach to software engineering is both necessary and superior to older models, such as Waterfall. Necessary because of the inherent complexity and indeterminacy of software engineering. Superior because it leverages the full collaborative might of everyone’s intelligence. But this is secondary to Agile’s most fundamental idea: We value people. It’s a rare employer today who doesn’t pay lip service to that idea. “We value our people.” But many businesses instead prioritize controlling their commodity human resources. This now being unacceptable to say out loud — in software engineering circles as in much of modern America — many companies have dressed it up in Scrum’s clothing, claiming Agile ideology while reasserting Waterfall’s hierarchical micromanagement.

Nerd Cells, ‘Super-Calculating’ Network in the Human Brain Discovered

After five years of research into the theory of the continuous attractor network, or CAN, Charlotte Boccara and her group of scientists at the Institute of Basic Medical Sciences at the University of Oslo, now at the Center for Molecular Medicine Norway (NCMM), have made a breakthrough. “We are the first to clearly establish that the human brain actually contains such ‘nerd cells’ or ‘super-calculators’ put forward by the CAN theory. We found nerve cells that code for speed, position and direction all at once,” says Boccara. ... The CAN theory hypothesizes that a hidden layer of nerve cells perform complex math and compile vast amounts of information about speed, position and direction, just as NASA’s scientists do when they are adjusting a rocket trajectory. “Previously, the existence of the hidden layer was only a theory for which no clear proof existed. Now we have succeeded in finding robust evidence for the actual existence of such a brain’s ‘nerd center,’” says the researcher, “and as such we fill in a piece of the puzzle that was missing.”

Data Center Sustainability Using Digital Twins And Seagate Data Center Sustainability

Rozmanith said that Dassault’s digital twins data center construction simulation reduced time to market by 15%. He also said that the modular approach reduces design time by 20%. Their overall goal is to shorten data center stand-up time by 50% and reduce the waste commonly generated in data center construction. Even after construction, digital twins for the operation of a data center will be useful for evaluating and planning future upgrades and data center changes. Some data center companies, such as Apple, have designed their data centers to be 100% sustainable for several years. Seagate recently announced that it would power its global footprint with 100% renewable energy by 2030 and achieve carbon neutrality by 2040. These goals were announced in conjunction with the release of the company’s 16th Global Citizenship Annual Report. That report included a look at the company’s annual progress towards meeting emission reduction targets, product stewardship, talent enablement, diversity goals, labor standards, fair trade, supply chain, and more.

Industry 4.0 – why smart manufacturing is moving closer to the edge

With Industry 4.0, new technologies are being built into the factory to drive increased automation. This all leads to potentially smart factories that can, for instance, benefit from predictive maintenance, as well as improved quality assurance and worker safety. At the same time, existing data challenges can be overcome. Companies operating across multiple locations often struggle to remove data silos and bring IT and OT (operational technology) together. An edge based on an open hybrid infrastructure can help them do this, as well as solving other problems. These problems include latency: supporting a horizontal data framework across the organization's entire IT infrastructure reduces it, instead of relying on data being funneled through a centralized network that can cause bottlenecks. Edge computing aligned to open hybrid cloud services can also reduce the amount of mismatched and inefficient hardware that has gradually built up, and which is often located in tight remote spaces.

Digital twins: The art of the possible in product development and beyond

Digital twins are increasingly being used to improve future product generations. An electric-vehicle (EV) manufacturer, for example, uses live data from more than 80 sensors to track energy consumption under different driving regimes and in varying weather conditions. Analysis of that data allows it to upgrade its vehicle control software, with some updates introduced into new vehicles and others delivered over the air to existing customers. Developers of autonomous-driving systems, meanwhile, are increasingly developing their technology in virtual environments. The training and validation of algorithms in a simulated environment is safer and cheaper than real-world tests. Moreover, the ability to run numerous simulations in parallel has accelerated the testing process by more than 10,000 times. ... The adoption of digital twins is currently gaining momentum across industries, as companies aim to reap the benefits of various types of digital twins. Given the many different shapes and forms of digital twins, and the different starting points of each organization, a clear strategy is needed to help prioritize where to focus digital-twin development and what steps to take to capture the most value.

What Is Cloud-Native?

Cloud-native, according to most definitions, is an approach to software design, implementation, and deployment that aims to take full advantage of cloud-based services and delivery models. Cloud-native applications also typically operate using a distributed architecture. That means that application functionality is broken into multiple services, which are then spread across a hosting environment instead of being consolidated on a single server. Somewhat confusingly, cloud-native applications don't necessarily run in the cloud. It's possible to build an application according to cloud-native principles and deploy it on-premises using a platform such as Kubernetes, which mimics the distributed, service-based delivery model of cloud environments. Nonetheless, most cloud-native applications run in the cloud. And any application designed according to cloud-native principles is certainly capable of running in the cloud. ... Cloud-native is a high-level concept rather than a specific type of application architecture, design, or delivery process. Thus, there are multiple ways to create cloud-native software and a variety of tools that can help do it.

Predictive Analytics Could Very Well Be The Future Of Cybersecurity

Predictive analytics is gaining momentum in every industry, enabling organizations to streamline the way they do business. This branch of advanced analytics is concerned with the use of data, statistical algorithms, and machine learning to determine future performance. When it comes to data breaches, predictive analytics is making waves. Enterprises with a limited security staff can stay safe from intricate attacks. Predictive analytics tells them where threat actors tried to attack in the past, which helps to see where they’ll strike next. Good security starts with knowing what attacks are to be feared. The conventional approach to fighting cybercrime is collecting data about malware, data breaches, phishing campaigns, and so on. Relevant information is extracted from those signatures. A signature here means a one-of-a-kind arrangement of information that can be used to identify a cybercriminal’s attempt to exploit an operating system or app vulnerability. The signatures can be compared against files, network traffic, and emails that flow in and out of the network to detect abnormalities. Everyone has distinct usage habits that technology can learn.
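A minimal sketch of the signature matching described here, assuming a toy in-memory signature set (the patterns below are invented for illustration, not drawn from a real threat feed): each known byte pattern is checked against a payload such as a file or a captured network flow.

```python
# Toy signature-based detection: known byte patterns are compared
# against payloads (files, traffic, emails) to flag matches.
SIGNATURES = {
    "eicar-like": b"X5O!P%@AP",        # illustrative fragment only
    "suspicious-macro": b"AutoOpen",   # illustrative fragment only
}

def scan(payload: bytes):
    """Return the names of all signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]
```

The predictive-analytics angle in the article goes beyond this baseline: instead of only matching known signatures, models learn usage habits and flag deviations from them.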

A Shift in Computer Vision is Coming

Neuromorphic technologies are those inspired by biological systems, including the ultimate computer, the brain, and its compute elements, the neurons. The problem is that no one fully understands exactly how neurons work. While we know that neurons act on incoming electrical signals called spikes, until relatively recently researchers characterized neurons as rather sloppy, thinking only the number of spikes mattered. This hypothesis persisted for decades. More recent work has proven that the timing of these spikes is absolutely critical, and that the architecture of the brain is creating delays in these spikes to encode information. Today’s spiking neural networks, which emulate the spike signals seen in the brain, are simplified versions of the real thing — often binary representations of spikes. “I receive a 1, I wake up, I compute, I sleep,” Benosman explained. The reality is much more complex. When a spike arrives, the neuron starts integrating the value of the spike over time; there is also leakage from the neuron, meaning the result is dynamic. There are also around 50 different types of neurons with 50 different integration profiles.
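The integrate-and-leak behaviour described here can be sketched as a leaky integrate-and-fire neuron (a standard simplified model; the parameter values are chosen arbitrarily for illustration). Note how the same number of input spikes produces an output spike only when their timing is tight, matching the point that spike timing, not just spike count, carries information.

```python
def lif_neuron(spike_times, weight=1.0, leak=0.9, threshold=2.0, t_end=10):
    """Leaky integrate-and-fire sketch: the membrane potential integrates
    incoming spikes, decays (leaks) each step, and fires on crossing threshold."""
    v, out = 0.0, []
    for t in range(t_end):
        v *= leak              # leakage: the state decays over time
        if t in spike_times:
            v += weight        # integrate the incoming spike
        if v >= threshold:
            out.append(t)      # emit an output spike
            v = 0.0            # reset after firing
    return out
```

Three closely spaced input spikes push the potential over threshold, while the same spikes spread apart leak away before they can accumulate.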

Implementing a Secure Service Mesh

One of our main goals with using a service mesh was to get Mutual Transport Layer Security (mTLS) between internal pod services for security. However, using a service mesh provides many other benefits because it allows workloads to talk between multiple Kubernetes clusters or run 100% bare-metal apps connected to Kubernetes. It offers tracing, logging around connections between pods, and it can output connection endpoint health metrics to Prometheus. This diagram shows what a workload might look like before implementing a service mesh. In the example on the left, teams are spending time building pipes instead of building products or services, common functionality is duplicated across services, there are inconsistent security and observability practices, and there are black-box implementations with no visibility. On the right, after implementing a service mesh, the same team can focus on building products and services. They’re able to build efficient distributed architectures that are ready to scale, observability is consistent across multiple platforms, and it’s easier to enforce security and compliance best practices.

5 Must-Have Features of Backup as a Service For Hybrid Environments

New backup as a service offerings have redefined backup and recovery with the simplicity and flexibility of the cloud experience. Cloud-native services can eliminate the complexity of protecting your data and free you from the day-to-day hassles of managing the backup infrastructure. This innovative approach to backup lets you meet SLAs in hybrid cloud environments and simplifies your infrastructure, driving significant value for your organization. Resilient data protection is key to always-on availability for data and applications in today’s changing hybrid cloud environments. While every organization has its own set of requirements, I would advise you to focus on cost efficiency, simplicity, performance, scalability, and future-readiness when architecting your strategy and evaluating new technologies. The simplest choice: a backup as a service solution that integrates all of these features in a pay-as-you-go consumption model. Modern solutions are architected to support today’s challenging IT environments.

Quote for the day:

"Leadership is like beauty; it's hard to define, but you know it when you see it." -- Warren Bennis

Daily Tech Digest - April 28, 2022

MPLS, SDN, even SD-WAN can give you the network observability you need

The starting point in traffic management is to examine your router policies to see whether you’re picking routes correctly, but sometimes even controlling routing policies won’t get your flows going along the routes you want. If that’s the case, you have a traffic-management issue to address. The best tools to add traffic management capability are MPLS and SDN. MPLS lets routers build routes by threading an explicit path through routers. SDN eliminates the whole concept of adaptive routing and convergence by having a central controller maintain a global route map that it gives to each SDN switch, and that it updates in response to failures or congestion. If your network consists of a VPN service and a complicated LAN, SDN is likely the better option. If you actually have a complex router network, MPLS is likely the right choice. With either MPLS or SDN, you know where your flows are because you put them there. There’s also the option of virtual networking, if neither MPLS nor SDN seems to fit your needs. Almost all the major network vendors offer virtual networks that use a second routing layer, and by putting virtual-network routers at critical places you can create explicit routes for your traffic.
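The SDN controller idea above can be sketched in Python, assuming shortest-path routing over link costs (a simplification; real controllers also weigh congestion and policy): a central function computes a global route map and derives a next-hop table for each switch, which is what would be pushed out and updated on failures.

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra over a {node: {neighbor: cost}} adjacency map."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

def controller_tables(graph):
    """Central controller: compute a next-hop table for every switch."""
    tables = {}
    for sw in graph:
        _, prev = shortest_paths(graph, sw)
        table = {}
        for dst in graph:
            if dst == sw or dst not in prev:
                continue
            hop = dst
            while prev[hop] != sw:   # walk back to the first hop from sw
                hop = prev[hop]
            table[dst] = hop
        tables[sw] = table
    return tables
```

Re-running `controller_tables` after removing a failed link models the controller's global update step, in contrast to routers converging adaptively on their own.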

Build desktop and mobile UIs with Blazor Hybrid apps

There’s a lot to like about this approach to UIs. For one, it builds on what I consider to be the key lesson of the last decade on the web: We need to design our APIs first. That makes UI just another API client, using REST and JSON to communicate with microservices. We can then have many different UIs working against the same back end, all using the same calls and having the same impact on our service. It simplifies design and allows us to predictably scale application architectures. At the same time, a fixed set of APIs means that service owners can update and upgrade their code without affecting clients. That approach led to the development of concepts like the Jamstack, using JavaScript, APIs, and Markup to deliver dynamic static websites, simplifying web application design and publishing. Blazor Hybrid takes those concepts and brings them to your code while skipping the browser and embedding a rendering surface alongside the rest of your application. You can work offline where necessary, a model that becomes even more interesting when working with locked-down environments such as the Windows 11 SE educational platform.

Parallel streams in Java: Benchmarking and performance considerations

The Stream API brought a new programming paradigm to Java: a declarative way of processing data using streams—expressing what should be done to the values and not how it should be done. More importantly, the API allows you to harness the power of multicore architectures for the parallel processing of data. There are two kinds of streams. A sequential stream is one whose elements are processed sequentially (as in a for loop) when the stream pipeline is executed by a single thread. A parallel stream is split into multiple substreams that are processed in parallel by multiple instances of the stream pipeline being executed by multiple threads, and their intermediate results are combined to create the final result. A parallel stream can be created only directly on a collection by invoking the Collection.parallelStream() method. The sequential or parallel mode of an existing stream can be modified by calling the BaseStream.sequential() and BaseStream.parallel() intermediate operations, respectively. A stream is executed sequentially or in parallel depending on the execution mode of the stream on which the terminal operation is initiated.
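Python has no direct counterpart to Java's parallel streams, but the split-process-combine idea can be sketched with concurrent.futures (a rough analogue, not the Java API): the input is partitioned into substreams, each processed by a worker, and the intermediate results are combined into the final result.

```python
from concurrent.futures import ThreadPoolExecutor

def process_sequential(values, fn):
    """Sequential 'stream': a single thread walks the elements in order."""
    return sum(fn(v) for v in values)

def process_parallel(values, fn, workers=4):
    """Parallel 'stream': split into substreams, process concurrently,
    then combine the intermediate partial sums."""
    chunks = [values[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(lambda chunk: sum(fn(v) for v in chunk), chunks)
    return sum(partials)
```

As with Java streams, both modes must produce the same result; only the execution strategy differs, which is why combinable (associative) reductions like a sum are the natural fit.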

Design-First Approach to API Development

Design-First begins with both technical and non-technical individuals from each of the functions involved participating in the process of writing a contract that defines the purpose and function of the API (or set of APIs). Obviously, this approach requires some time upfront spent on planning. This phase aims to ensure that when it comes time to start coding, developers are writing code that won't need to be scrapped and rewritten later down the line. This helps create iterative, useful APIs that, in turn, lead to a better, more scalable API program — and value to your business — as a whole. Regardless of which approach you choose, the most critical thing to think about is how to deliver positive experiences for stakeholders, including end-users, third-party or in-house developers, and even folks from the rest of the company who may have a role. I think of APIs like technology ambassadors — the digital face of a brand — as they form a network of internal and external connections. And as such, they should be designed and crafted with care, just like any other product or service that your company offers.

At Western Digital, we recognize the importance of doing our part to contain global temperature rise. So it was important to pledge and set our ambitious goal to help limit the increase to less than 1.5°C by 2030. While we’ve made significant improvements the past few years, we have a lot of work to do to achieve our goal. It is particularly challenging to achieve the goal while the factory is going through expansion. So that’s why we rely on 4IR technologies to drive eco-efficiency. ... Our strategy hinges on three approaches: accountability, digital, and partnerships. First, it’s about setting bold climate commitments that demonstrate our accountability to making science-based progress. For more than three decades, we’ve been setting publicly facing environmental goals. And we continue to commit to bold goals, including the intention to source 100 percent of our global electricity needs from renewable sources by 2025 and to be carbon neutral in our global operations by 2030. Along with that, we’re harnessing digital and Industry 4.0 advanced-manufacturing technologies to reduce our carbon footprint and, to your earlier point, drive greater resilience. 

7 leadership traits major enterprises look for in a CIO

A resourceful CIO is able to blend prior experience with multiple variables, such as accepted frameworks, methodologies, and cultural and political landscapes. “In essence, the new CIO, when effectively using resourcefulness, is in the best position to challenge the current paradigm of the enterprise and chart the path forward,” says Greg Bentham, vice president of cloud infrastructure services at business advisory firm Capgemini Americas. Joining a major enterprise and establishing trust within a new organization is perhaps the most challenging task a CIO will ever face. Many obstacles will inevitably surface and need to be resolved. While prior experience and frameworks can be applied, reality suggests that history never exactly repeats itself. Top enterprises expect that their new CIO will possess the knowledge and creativity to overcome even the most challenging barriers. The best way to become resourceful is through direct experience gathered throughout an IT career, particularly experiences that spurred organizational changes, Bentham says.

How to use data analytics to improve quality of life

In a perfect world, employees in labor-intensive roles will be re-trained to tackle more creative and complex problem-solving tasks. Less-experienced workers will be able to quickly skill up with AI-augmented on-the-job training. In some cases, AI-equipped cameras are already enhancing, rather than replacing, human labor. By monitoring assembly-line production, tracking worker steps and processing findings into actionable feedback, this data technology can deliver valuable movement-efficiency training to employees on the line – including how to safely and efficiently move and operate in spaces shared by humans and robots. Yet who’s footing the bill here? How do business owners benefit from the adoption (and, of course, investment in) data technology? First and foremost is the obvious and immediate benefit of reducing lost labor hours due to injuries and worker-compensation-related costs. But there is also the knock-on effect of promoting a healthier and (hopefully) happier workforce. The question then becomes how to gain the buy-in of labor. 

Building the right tech setup for a multi-office organisation

IT and facilities teams sometimes rely on strong third-party relationships to enable multi-location collaboration. This often means having a good relationship with a telecommunication service provider (or providers, depending on the internet services available in the various locations), complete with a service level agreement that specifies the exact network performance standards to be met. Likewise, it’s essential to have built trust between all the companies that deliver the organisation’s collaboration technology, whether hardware or software. It may also be that IT teams rely on local managed service providers to provide on-site support on their behalf. Collaboration, and the technology that enables it, has become a core tenet of the post-pandemic workplace – but it means different things to different organisations. Sometimes, it’s about internal communication using voice and videoconferencing, messaging, and webinars. Perhaps these integrate with an office productivity suite or customer relationship management software, enhancing productivity and communication with colleagues, clients, or prospects. Other times, it’s about implementing the best solutions for your office space.

How edge computing can bolster aviation sector innovation

Edge cloud networks can provide continuous high-bandwidth connectivity between aircraft and the internet. This enables data transmission even in mid-air, with edge computing providing a filter for the most relevant information – reducing overall bandwidth usage. Servers on the ground can then selectively pull data from the edge servers on the aircraft for more detailed, real-time analysis – helping to spot potential problems and advise immediate remedial actions. This high-bandwidth connectivity can send the information needed to allow airlines to predict component and other failures before they occur and empower organisations to take the necessary steps to address these faults. Systems can generate automatic notifications from the plane to enable ground crews to prepare for repairs at the next landing point. Maintenance teams can more easily manage their parts and resources with access to detailed information. Edge computing also holds potential for enabling aviation operators to develop a mobility infrastructure that incorporates intelligent connected vehicles within a more extensive transportation network.

How AI can close gaps in cybersecurity tech stacks

There are five strategies cybersecurity vendors should rely on to help their enterprise customers close widening gaps in their security tech stacks. Based on conversations with endpoint security, IAM, PAM, patch management and remote browser isolation (RBI) providers and their partners, these strategies are beginning to dominate the cybersecurity landscape. ... Enterprises need better tools to assess risks and vulnerabilities to identify and close gaps in tech stacks. As a result, there’s a growing interest in Risk-Based Vulnerability Management (RBVM) that can scale across cloud, mobile, IoT and IIoT devices today. Endpoint Detection & Response (EDR) vendors are moving into RBVM with vulnerability assessment tools. Leading vendors include CODA Footprint, CyCognito, Recorded Future, Qualys and others. Ivanti’s acquisition of RiskSense delivered its first product this month, Ivanti Neurons for Risk-Based Vulnerability Management (RBVM). What’s noteworthy about Ivanti’s release is that it is the first RBVM system that relies on a state engine to measure, prioritize and control cybersecurity risks to protect enterprises against ransomware and advanced cyber threats.

Quote for the day:

"The art of communication is the language of leadership." -- James Humes

Daily Tech Digest - April 27, 2022

Think of search as the application platform, not just a feature

As a developer, the decisions you make today in how you implement search will either set you up to prosper, or block your future use cases and ability to capture this fast-evolving world of vector representation and multi-modal information retrieval. One severely blocking mindset is relying on SQL LIKE queries. This old relational database approach is a dead end for delivering search in your application platform. LIKE queries simply don’t match the capabilities or features built into Lucene or other modern search engines. They’re also detrimental to the performance of your operational workload, leading to the over-use of resources through greedy quantifiers. These are fossils—artifacts of SQL from 60 or 70 years ago, which is like a few dozen millennia in application development. Another common architectural pitfall is proprietary search engines that force you to replicate all of your application data to the search engine when you really only need the searchable fields.
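The gap the author describes can be sketched in a few lines of Python. The documents and whitespace tokenizer below are hypothetical stand-ins: a LIKE-style query degenerates into scanning every row's text, while an inverted index (the core data structure behind Lucene and other modern engines) answers term lookups directly.

```python
from collections import defaultdict

# Hypothetical documents standing in for rows in a database.
docs = {
    1: "fast vector search for applications",
    2: "relational databases and SQL queries",
    3: "search engines rank results by relevance",
}

# A LIKE '%term%' query is equivalent to scanning every row's text.
def like_scan(term):
    return [doc_id for doc_id, text in docs.items() if term in text]

# A search engine instead builds an inverted index once, mapping
# each token to the set of documents that contain it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.split():
        index[token].add(doc_id)

# Lookups hit the index directly instead of rescanning the data; the
# token structure is also what makes relevance ranking possible at all.
def index_lookup(term):
    return sorted(index.get(term, set()))
```

On three rows both approaches return the same document IDs, but the scan's cost grows with total text volume on every query, while the index lookup's cost grows only with the number of matching documents.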

What Is a Data Reliability Engineer, and Do You Really Need One?

It’s still early days for this developing field, but companies like DoorDash, Disney Streaming Services, and Equifax are already starting to hire data reliability engineers. The most important job for a data reliability engineer is to ensure that high-quality, trustworthy data is readily available across the organization. When broken data pipelines strike (because they will at one point or another), data reliability engineers should be the first to discover data quality issues. However, that’s not always the case: too often, bad data is first discovered downstream in dashboards and reports rather than in the pipeline itself. Since data is rarely ever in its ideal, perfectly reliable state, the data reliability engineer is more often tasked with putting the tooling (like data observability platforms and testing) and processes (like CI/CD) in place to ensure that when issues happen, they’re quickly resolved and their impact is conveyed to those who need to know. Much like site reliability engineers are a natural extension of the software engineering team, data reliability engineers are an extension of the data and analytics team.
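The testing such tooling automates typically amounts to running checks like the following on every load. This is a minimal sketch with hypothetical rows and check names, not any particular platform's API:

```python
# Hypothetical rows landing in one stage of a pipeline.
rows = [
    {"order_id": 1, "amount": 19.99, "currency": "USD"},
    {"order_id": 2, "amount": None,  "currency": "USD"},
]

def check_not_null(rows, column):
    """Flag rows where a required column is missing."""
    return [r for r in rows if r.get(column) is None]

def check_unique(rows, column):
    """Flag rows whose key column repeats an earlier value."""
    seen, dupes = set(), []
    for r in rows:
        if r[column] in seen:
            dupes.append(r)
        seen.add(r[column])
    return dupes

# Run the checks on every load; any failure should alert the data
# reliability engineer before the bad rows ever reach a dashboard.
failures = check_not_null(rows, "amount") + check_unique(rows, "order_id")
```

Catching the null `amount` here, in the pipeline, is exactly the inversion the role aims for: the engineer learns about the issue before the report consumer does.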

Mitigating Insider Security Threats in Healthcare

Some security experts say that risks involving insiders and cloud-based data are often misjudged by entities. "One of the biggest mistakes entities make when shifting to the cloud is to think that the cloud is a panacea for their security challenges and that security is now totally in the hands of the cloud service," says privacy and cybersecurity attorney Erik Weinick of the law firm Otterbourg PC. "Even entities that are fully cloud-based must be responsible for their own privacy and cybersecurity, and threat actors can just as readily lock users out of the cloud as they can from an office-based server if they are able to capitalize on vulnerabilities such as weak user passwords or system architecture that allows all users to have access to all of an entity's data, as opposed to just what that user needs to perform their specific job function," he says. Dave Bailey, vice president of security services at privacy and security consultancy CynergisTek, says that when entities assess threats to data within the cloud, it is incredibly important to develop and maintain solid security practices, including continuous monitoring.

Is cybersecurity talent shortage a myth?

It is a combination of things but yes, in part technology is to blame. Vendors have made the operation of the technologies they designed an afterthought. These technologies were never made to be operated efficiently. There is also a certain fixation on technologies that just don’t offer any value, yet we keep putting a lot of work into them, like SIEMs. Unfortunately, many technologies are built upon legacy systems. This means that they carry those systems’ weaknesses and suboptimal features that were adapted from other intended purposes. For example, many people still manage alerts using cumbersome SIEMs that were originally intended to be log accumulators. The alternative is ‘first principles’ design, where the technology is developed with a particular purpose in mind. Some vendors assume that their operators are the elites of the IT world, with the highest qualifications, extensive experience, and deep knowledge of every piece of adjoining or integrating technology. Placing high barriers to entry on new technologies—time-consuming qualifications or poorly-delivered, expensive courses—contributes to the self-imposed talent shortage.

How Manufacturers Can Avoid Data Silos

The first and most important step you can take to break down silos is to develop policies for governing the data. Data governance helps to ensure that everyone in a factory understands how the data should be used, accessed, and shared. Having these policies in place will help prevent silos from forming in the first place. According to Gartner data, 87 percent of manufacturers have minimal business intelligence and analytics expertise. The research found these firms less likely to have a robust data governance strategy and more prone to data silos. Data governance efforts that improve synergy and maximize data effectiveness can help manufacturing companies reduce data silos. ... Another way to break down data silos is to cultivate a culture of collaboration. Encourage employees to share information and knowledge across departments. When everyone is working together, it will be easier to avoid duplication of effort and wasted time. To break down data silos, manufacturers should move to a culture that encourages collaboration and communication from the top down.

Top 7 metaverse tech strategy do's and don'ts

Like any other technology project, a metaverse project should support overall business strategy. Although the metaverse is generating a lot of buzz right now, it is only a tool, said Valentin Cogels, expert partner and head of EMEA product and experience innovation at Bain & Company. "I don't think that anyone should think in terms of metaverse strategy; they should think about a customer strategy and then think about what tools they should use," Cogels said. "If the metaverse is one tool they should consider, that's fine." Taking a business-goals-first approach also helps to refine the available choices, which leaders can then use to build out use cases. Serving the business goals and customers you already have is critical, said Edward Wagoner, CIO of digital at JLL Technologies, the property technology division of commercial real estate services company JLL Inc., headquartered in Chicago. "When you take that approach, it makes it a lot easier to think how [the products and services you deliver] would change if [you] could make it an immersive experience," he said.

Digital begins in the boardroom

Boards need to guard against the default of having a “technology expert” that everyone turns to whenever a digital-related issue comes onto the agenda. Rather than being a collection of individual experts, everyone on a board should have a good strategic understanding of all important areas of business – finance, sales and marketing, customer, supply chain, digital. The best boards are a group of generalists – each with certain specialisms – who can discuss issues widely and interactively, not a series of experts who take the floor in turn while everyone else listens passively. There is much that can be done to raise levels of digital awareness among executives and non-executives. Training courses, webinars, self-learning online – all these should be on the agenda. But one of the most effective ways is having experts, whether internal or external, come to board meetings to run insight sessions on key topics. For some specialist committees, such as the audit and/or risk committees, bringing in outside consultants – on cyber security, for example – is another important feature.

4 reasons diverse engineering teams drive innovation

Diverse teams can also help prevent embarrassing and troubling situations and outcomes. Many companies these days are keen to infuse their products and platforms with artificial intelligence. But as we’ve seen, AI can go terribly wrong if a diverse group of people doesn’t curate and label the training datasets. A diverse team of data scientists can recognize biased datasets and take steps to correct them before people are harmed. Bias is a challenge that applies to all technology. If a specific class of people – whether it’s white men, Asian women, LGBTQ+ people, or other – is solely responsible for developing a technology or a solution, they will likely build to their own experiences. But what if that technology is meant for a broader population? Certainly, people who have not been historically under-represented in technology are also important, but the intersection of perspectives is critical. A diverse group of developers will ensure you don’t miss critical elements. My team once developed a website for a client, for example, and we were pleased and proud of our work. But when a colleague with low vision tested it, we realized it was problematic.

Bringing Shadow IT Into the Light

IT teams are understaffed and overwhelmed after the sharp increase in support demands caused by the pandemic, says Rich Waldron, CEO and co-founder of Tray.io, a low-code automation company. “Research suggests the average IT team has a project backlog of 3-12 months, a significant challenge as IT also faces renewed demands for strategic projects such as digital transformation and improved information security,” Waldron says. There’s also the matter of employee retention during the Great Resignation hinging in part on the quality of the tech on the job. “Data shows that 42% of millennials are more likely to quit their jobs if the technology is sub-par,” says Uri Haramati, co-founder and CEO at Torii, a SaaS management provider. “Shadow IT also removes some burden from the IT department. Since employees often know what tools are best for their particular jobs, IT doesn’t have to devote as much time searching for and evaluating apps, or even purchasing them,” Haramati adds. In an age when speed, innovation and agility are essential, locking everything down instead just isn’t going to cut it. For better or worse, shadow IT is here to stay.

Log4j Attack Surface Remains Massive

"There are probably a lot of servers running these applications on internal networks and hence not visible publicly through Shodan," Perkal says. "We must assume that there are also proprietary applications as well as commercial products still running vulnerable versions of Log4j." Significantly, all the exposed open source components contained a significant number of additional vulnerabilities that were unrelated to Log4j. On average, half of the vulnerabilities were disclosed prior to 2020 but were still present in the "latest" version of the open source components, he says. Rezilion's analysis showed that in many cases when open source components were patched, it took more than 100 days for the patched version to become available via platforms like Docker Hub. Nicolai Thorndahl, head of professional services at Logpoint, says flaw detection continues to be a challenge for many organizations because while Log4j is used for logging in many applications, the providers of software don't always disclose its presence in software notes. 

Quote for the day:

"Go as far as you can see; when you get there, you'll be able to see farther." -- J. P. Morgan

Daily Tech Digest - April 26, 2022

The emerging risks of open source

Many enterprises have sought to make their open source lives easier by buying into managed services. It’s a great short-term fix, but it doesn’t solve the long-term issue of sustainability. No, the cloud hyperscalers aren’t strip miners, nefariously preying on the code of unsuspecting developers. But too often some teams fail to plan to contribute back to the projects upon which they depend. I stress some, as this tends to not be a corporation-wide issue, no matter the vendor. I’ve detailed this previously. Regardless, the companies offering these managed services tend to not have any control over the projects’ road maps. That’s not great for enterprises that want to control risk. Google is a notable exception—it tends to contribute a lot to key projects. Nor can they necessarily contribute directly to projects. As Mugrage indicates, for companies like Netflix or Facebook (Meta) that open source big projects, these “open source releases are almost a matter of employer branding—a way to show off their engineering chops to potential employees,” which means “you’re likely to have very little sway over future developments.” 

How to model uncertainty with Dempster-Shafer’s theory?

One of the main advantages of this theory is that we can use it to generate a degree of belief by taking all of the available evidence into account, even when that evidence comes from different sources. The degree of belief is calculated by a mathematical function called the belief function. We can also think of this theory as a generalization of the Bayesian theory of subjective probability: degrees of belief sometimes behave like probabilities, but they need not be probabilities in the mathematical sense. Using this theory, we can answer questions that probability theory alone struggles with. The theory rests on two fundamentals: the degree of belief and plausibility. We can understand it with an example. Say a person presents with COVID-19 symptoms and we assign a belief of 0.5 to the proposition that the person is suffering from COVID-19. This means we have evidence that makes us think strongly that the proposition is true, with a confidence of 0.5. At the same time, there is contradicting evidence that the person is not suffering from COVID-19, with a confidence of 0.2.
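The belief interval for this example can be computed directly. A minimal sketch, assuming the leftover mass (1 − 0.5 − 0.2 = 0.3) is assigned to the whole frame, which is how Dempster-Shafer theory represents uncommitted evidence:

```python
# Mass function over subsets of the frame {covid, not_covid}.
# The 0.5 and 0.2 come from the example in the text; the 0.3 on
# the whole frame is the remaining, uncommitted mass.
masses = {
    frozenset({"covid"}): 0.5,
    frozenset({"not_covid"}): 0.2,
    frozenset({"covid", "not_covid"}): 0.3,
}

def belief(hypothesis):
    """Bel(A): total mass of focal sets fully contained in A."""
    return sum(m for s, m in masses.items() if s <= hypothesis)

def plausibility(hypothesis):
    """Pl(A): total mass of focal sets that intersect A."""
    return sum(m for s, m in masses.items() if s & hypothesis)

covid = frozenset({"covid"})
# belief(covid) is 0.5 and plausibility(covid) is 0.8, so the
# evidence pins "the person has COVID-19" to the interval [0.5, 0.8].
```

The gap between belief and plausibility (0.3 here) is exactly the mass the evidence leaves uncommitted, which a single Bayesian probability cannot express.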

The Other AI: Augmented Intelligence

With a clear view of the benefits augmented intelligence delivered by AR can provide, you may be excited to get started within your enterprise but unsure of where to begin. First, it's important to start by speaking to your field technicians and service agents to gauge their interest or any potential aversion to implementing the technology into their workspace. New technology can be intimidating to field service technicians who are used to completing tasks a certain way. Helping them to understand how the technology can enhance their jobs and make service experiences less challenging and more engaging will be key. Next, consider which devices are needed to implement the augmented intelligence platform. At a basic level, a smartphone or tablet is needed. Hands-free wearable glasses make it easier for technicians to accomplish tasks in the field and on the factory floor. Drone support goes even further with AR visual awareness and graphical guidance not previously available. Finally, you'll want to confirm the bandwidth and connectivity requirements of the augmented intelligence AR platform and associated devices to ensure your field service technicians are set up for success.

Writing Code Is One Thing, Learning to Be a Software Engineer Is Another

Software developers are always students of software development, and whenever you think you know what you are doing, it will punch you in the face. Good developers are humble because software development crushes overconfidence with embarrassing mistakes. You cannot avoid mistakes, problems and disasters. Therefore, you need to be humble enough to acknowledge mistakes and need a team to help you find and fix them. When you start as a developer, you focus on creating code to meet the requirements. I used to think being a developer was just writing code. Software development has many other aspects to it, from design, architecture and unit testing to DevOps, ALM, gathering requirements and clarifying assumptions. There are many best practices, such as the SOLID principles, DRY (don’t repeat yourself), KISS, and others. These best practices and fundamental skills have long-term benefits, which makes them hard for junior developers to appreciate because there is no initial benefit. Well-named code, designed to be easily tested, isn’t first-draft code. It does more than work. It’s built to be easy to read, understand and change.
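DRY, for example, can be shown in a few lines; the billing functions below are a hypothetical illustration, not code from the article:

```python
# Before: two functions duplicate the same discount rule, so a
# change to the rule must be made (and remembered) in two places.
def invoice_total(items):
    total = sum(price * qty for price, qty in items)
    return total * 0.9 if total > 100 else total

def quote_total(items):
    total = sum(price * qty for price, qty in items)
    return total * 0.9 if total > 100 else total

# After (DRY): the shared rule is extracted so it lives in one place
# and has a name that says what it means.
def apply_bulk_discount(total, threshold=100, rate=0.9):
    return total * rate if total > threshold else total

def total_of(items):
    return apply_bulk_discount(sum(price * qty for price, qty in items))
```

The payoff is the long-term one the author describes: the first version works just as well today, but only the second stays easy to read and change.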

AI Set to Disrupt Traditional Data Management Practices

“They often don’t have the skill sets, or their organizations don’t put in place processes and tools and practices to really manage data management for AI specifically,” says Sallam. “So data-centric AI has the potential to disrupt what has been traditional data management practices as well as prevalent model-centric data science by making sure that AI-specific considerations like data bias, labeling, drift, are all in place in a consistent manner to improve the quality of models on an ongoing basis.” Are tools under development to address this need, or are organizations investing in solutions for it? Sallam says that some of the other trends on the list will contribute to improving data management around AI. Specifically, to address this gap, leading organizations are disrupting data management for AI by building out data fabrics on active metadata and investing in things like AI governance, she said. This data-centric AI trend is one of several Gartner highlighted in its report for 2022 and grouped with a few others under the title of activating dynamism and diversity. 

Growing prospects for edge datacentres

Edge operations require user organisations and suppliers to think beyond infrastructure and architectural needs. New automation and orchestration challenges will arise, often across transactional boundaries and occurring between different companies and industries, rather than just different parts of the network. They must also think about ownership of the software and infrastructure stack and the likely path of service engagement – be that through a telecoms operator, hyperscale public cloud provider or others. Providers of edge operational services also need to decide how they support multiple customers according to their individual needs. This will be especially necessary for applying operational-specific AI algorithms, and may result in multi-layered partner offerings. All this will require organisations to think more carefully about how they extend their datacentre operations to enable greater levels of edge processing, work with cloud providers or hook into another provider’s edge datacentre network. The biggest drivers for edge datacentres are coming from industry sectors where edge operations are already well established. 

The Metaverse needs to keep an eye on privacy to avoid Meta’s mistakes

Metaverse avatars are a conglomeration of all issues relating to privacy in the digital realm. As a user’s gateway to all Metaverse interactions, they can also offer platforms a lot of personal data to collect, especially if their tech stack involves biometric data, like tracking users’ facial features and expressions for the avatar’s own emotes. The risk of someone hacking biometric data is far scarier than hacking shopping preferences. Biometrics are often used as an extra security precaution, such as when you authorize payment on your phone using your fingerprint. Imagine someone stealing your fingerprints and draining your card with a bunch of transfers. Such breaches are not unheard of: In 2019, hackers got their hands on the biometric data of 28 million people. It’s scary to think about how traditional digital marketing might look in the Metaverse. Have you ever shopped for shoes online and then suddenly noticed your Facebook is filled with ads for similar footwear? That’s a result of advertisers using both cookies and your IP address to personalize your ads. 

The Most In-Demand Cyber Skill for 2022

Just when everybody hoped that the security environment could not be more challenging, recent world events have created a further substantial uptick in cyber-attacks. This has also increased the sense that maybe we should all care more about the security of everything we ever purchased and placed in the cloud. Not so much buyer’s remorse as a penitent desire to security upcycle anything in the cloud that might be more critical to the organization once the current threat landscape is taken into consideration. Zero trust, extended detection and threat response (XDR), SASE (secure access service edge) – almost all the hottest topics are about how to take the security standards that were (once-upon-a-time) applied as standard to traditional networks and *seamlessly* implement them across cloud environments. The number one position for cloud computing makes sense. It reflects the growing concern about cloud security and the gradual evolution of the requirement to ensure that each organization has a consistent security architecture that extends over and includes any important cloud solutions and services in use.

How SaaS Models Changed Content Creation

Content creation used to be a difficult, arduous and manual process. Creative visions were consistently hampered by workflow and technological limitations. The dilemmas of our past were based on technical feasibility. Now, those restrictions have become completely unshackled. It’s no longer a question of what’s possible to do, but rather what you want to do, and which path do you take to get there? SaaS evened the playing field. To understand what is possible now and what is yet to come, it’s important to distinguish between two areas within the umbrella-use of SaaS. First, we have true software as a service, which is software that runs on the internet and is accessed in the cloud. Google Workspace is an example of this, allowing users to create spreadsheets, documents and presentations that are stored on Google’s servers. (Disclosure: My company has a partnership with Google.) The software runs as a service for you to connect to from any device and edit your documents anywhere. It's persistent regardless of the computer you’re on, and documents can be edited by multiple users, even simultaneously.

Data parallelism vs. model parallelism – How do they differ in distributed training?

In data parallelism, the dataset is split into ‘N’ parts, where ‘N’ is the number of GPUs. These parts are then assigned to parallel computational machines. Gradients are calculated for each copy of the model, after which all the models exchange the gradients, and the values of these gradients are averaged. For every GPU or node, the same parameters are used for the forward propagation. A small batch of data is sent to every node, and the gradient is computed normally and sent back to the main node. Distributed training is practised using one of two strategies: synchronous or asynchronous. ... In model parallelism, the model is partitioned into ‘N’ parts, where ‘N’ is the number of GPUs, and each part is placed on an individual GPU. The GPUs then compute sequentially, starting with GPU#0, then GPU#1, and continuing until GPU#N; this is forward propagation. Backward propagation runs in reverse, beginning with GPU#N and ending at GPU#0. Model parallelism has some obvious benefits: it can be used to train a model that does not fit on a single GPU.
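The synchronous data-parallel step described above can be sketched without any GPU framework. The toy linear model, the two-way shard split, and the learning rate are illustrative assumptions:

```python
# Toy synchronous data parallelism: each "GPU" gets a shard of the
# batch, computes a gradient for the same replicated weight, and the
# gradients are averaged before the shared weight is updated.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # points on y = 2x
w = 0.0  # model parameter, replicated on every worker
N = 2    # number of workers ("GPUs")

def grad_on_shard(w, shard):
    """Gradient of mean squared error for y_hat = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

shards = [data[i::N] for i in range(N)]        # split the batch N ways
grads = [grad_on_shard(w, s) for s in shards]  # forward/backward per worker
avg_grad = sum(grads) / N                      # the "exchange and average" step
w -= 0.01 * avg_grad                           # identical update on every replica
```

Because every replica applies the same averaged gradient, the copies of `w` stay in sync after each step, which is what makes the synchronous strategy equivalent to training on the full batch with one device.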

Quote for the day:

"At the heart of great leadership is a curious mind, heart, and spirit." -- Chip Conley

Daily Tech Digest - April 25, 2022

How to avoid compliance leader burnout

Just as a CISO will be held responsible for a security breach, even if the incident was unforeseeable, a compliance leader is considered responsible for all aspects of compliance: getting the appropriate certifications and reports, making sure the company passes its audits, etc. But if traditional methods of compliance are used, the compliance leader has no actual oversight on whether those controls are running. For example, the compliance team may set up controls over user access, but if one control owner forgets to run their control, the resulting failure will likely be blamed on the compliance leader. ... Data-oriented compliance that automatically pulls data from primary sources can sift through a vast volume of data and give an early signal if it senses a problem that needs to be looked at by a security person or engineer. This makes it less likely that a compliance leader will be blindsided by a long-running failure to implement a control. When a control is built into processes that a department is already running, it’s less likely to be overlooked by that department—since the control is part of a process that’s operationally important to the company.

Simplify Cloud Deployment Through Teamwork, Strategy

Liu suggests that when striving for simplification, IT organizations should recognize that simplification of architectures is complex and can be disruptive. That means it’s important to identify the most opportune time that works for the whole organization. “When simplifying, don’t just think about components like network switches or storage,” she says. “If you focus on moving or simplifying one component, your simplification can invite a lot more complexity. Think about simplifying whole infrastructure solutions. Align at the solution- or service-level first.” Stuhlmuller advises that enterprise cloud teams should educate themselves on how networking is done, not only in their primary cloud, but in all public clouds. This allows them to develop a multi-cloud network architecture that will keep them from having to re-architect when – inevitably -- the day comes when the business requires support for a second or third public cloud provider. “Cloud teams supporting enterprise scale businesses discover that building with basic constructs quickly increases the complexity and requires resource intensive manual configuration,” he says.

Most Email Security Approaches Fail to Block Common Threats

Digging into where email defense breaks down, the firms found that, surprisingly, use of email client plug-ins for users to report suspicious messages continues to increase. Half of organizations are now using an automated email client plug-in for users to report suspicious email messages for analysis by trained security professionals, up from 37 percent in a 2019 survey. Security operations center analysts, email administrators, and an email security vendor or service provider are the groups most commonly handling these reports, although 78 percent of organizations notify two or more groups. Also, user training on email threats is now offered in most companies, the survey found: More than 99 percent of organizations offer training at least annually, and one in seven organizations offer email security training monthly or more frequently. “Training more frequently reduces a range of threat markers. Among organizations offering training every 90 days or more frequently, the likelihood of employees falling for a phishing, BEC or ransomware threat is less than organizations only training once or twice a year,” according to the report.

Why private edge networks are gaining popularity

For edge computing to gain large-scale adoption across enterprises, APIs need to provide an abstraction layer that alleviates the intensive work of having developers write code to communicate with each system in a tech stack. Abstraction layers save developers’ time and streamline new app development. Alef’s approach looks at how they can capitalize on stable APIs to protect developers from dealing with complex tech stacks in getting work done. Edge device processors are getting more intelligent. The rapid gains in chip processor architectures make it possible to complete data capture, analytics and aggregation at the endpoint first before sending the result to cloud databases. In addition, endpoint devices’ growing intelligence makes it possible to offload more tasks, freeing up network latency in the process. ... All businesses need real-time data to grow. Small gains in visibility and control across an enterprise can deliver large cost savings and revenue gains. It’s because real-time data is very good at helping to identify gaps in cost, customer, revenue and service processes.

Deep Science: AI simulates economies and predicts which startups receive funding

Applying AI to due diligence is nothing new. Correlation Ventures, EQT Ventures and SignalFire are among the firms currently using algorithms to inform their investments. Gartner predicts that 75% of VCs will use AI to make investment decisions by 2025, up from less than 5% today. But while some see the value in the technology, dangers lurk beneath the surface. In 2020, Harvard Business Review (HBR) found that an investment algorithm outperformed novice investors but exhibited biases: for example, it frequently selected white and male entrepreneurs. HBR noted that this reflects the real world, highlighting AI’s tendency to amplify existing prejudices. In more encouraging news, scientists at MIT, alongside researchers at Cornell and Microsoft, claim to have developed a computer vision algorithm, STEGO, that can label objects in images down to the individual pixel. While this might not sound significant, it’s a vast improvement over the conventional method of “teaching” an algorithm to spot and classify objects in pictures and videos.

Stack Overflow Exec Shares Lessons from a Self-Taught Coder

As a self-taught developer, Chan describes life as an entry-level software engineer as “a really big surprise and shock,” especially given his past experiences in the world of programmer job interviews. He was baffled by his earlier interviews with large companies, finding himself “failing miserably,” he told the podcast audience. Tech interviews, he said, were “where it’s like, ‘I don’t even know what a red-black tree is, so please don’t ask me more interview questions about that kind of thing!'” By contrast, he’d known of Stack Overflow for years and considered it the home of “some of the best engineers that I could possibly think of.” ... Chan recalled learning what all new managers learn: while you may have been good at your old position, “once you become a manager, the skillset is completely different.” Or, in his case, “You’re no longer working with computers and with code anymore. You’re working with people, right?” There were more conversations and more listening to people, but also a shift in thought. “This is not about code so much anymore,” he said.

Founders’ Guide To Embedding Corporate Governance In Your Startup

Founders would do well to have some role models when it comes to governance and to read about the practices and philosophies those role models deploy. However, they may have to look beyond the startup universe for that, because good governance is usually a sustained phenomenon; typically only companies that have been in business for decades qualify. In my view, the Tata Group in general, and specifically under the stewardship of JRD Tata, has been the epitome of good governance. Leading IT services companies like Infosys could also be studied. One does not have to look far, or toward the West, for such role models. Founders will do well to remember that getting an up round (after passing through diligence) is not validation that they are doing everything right. Many times, investments happen due to prevailing market sentiment and liquidity. This happens in spaces that are hot, where market tailwinds compel investors to close transactions faster. However, such times don’t last forever. Often, it is when a fastidious investor comes in to write a big cheque that such transgressions come to light.

Improving Your Estimation Skills by Playing a Planning Game

When we look at a large, complex task and estimate how long it will take us to complete, we mentally break the large task down into smaller tasks. We then construct a mental story of how we will complete each smaller task. We identify the sequential relationships between tasks, their interconnectedness, and their prerequisites, and integrate them into a connected narrative of how we will complete the large task. All of these activities are good, and indeed essential, for completing any large task. However, by constructing this mental story, we slip out of estimation mode and into planning mode. This means that we focus upon the how-tos rather than thinking back to past experiences: to potential impediments and how they may extend the task duration. Planning is a bit like software development, whilst estimation is a bit like software testing. In development, we are trying to get something to work, so if our initial approach is unsuccessful, we modify it or try something else. Once we have got it to work, we are generally satisfied and move on to solving the next problem.
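One concrete way to stay in estimation mode, leaning on past experience rather than on planning the how-tos, is a reference-class adjustment: scale a gut estimate by how much similar past tasks actually overran. The article does not prescribe this technique, and the history data below is hypothetical; it is just a minimal sketch of the idea.

```python
# Minimal reference-class sketch (hypothetical numbers): instead of
# re-planning the "how", look back at past tasks of a similar kind and
# scale a new gut estimate by their typical overrun.
from statistics import median

# (initial_estimate_days, actual_days) for past, similar tasks
history = [(3, 5), (2, 2), (4, 7), (1, 2), (5, 6)]

def reference_class_estimate(raw_estimate, history):
    """Adjust a gut estimate by the median overrun ratio of past tasks."""
    ratios = [actual / estimate for estimate, actual in history]
    return raw_estimate * median(ratios)

# A 4-day gut estimate, corrected by a median ~1.67x historical overrun.
adjusted = reference_class_estimate(4, history)
```

Using the median rather than the mean keeps one catastrophic outlier from dominating the adjustment, which mirrors the article's point: impediments from past work, not the plan in your head, should stretch the estimate.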

How to be a smart contrarian in IT

Start with the end user or the most important stakeholders: Do they find the end results intriguing? Have you built a proof-of-concept solution that tests your hypotheses? Can they get some value and provide you with quality feedback from a minimum viable product (MVP)? Don’t over-engineer a solution to a problem that nobody cares about. Let your customers lead you to what matters and do just enough engineering from there. You’ll still need to add standard enterprise features such as security, user experience, and scale, but the goal is to add them to a product your client wants and values. ... Before you try to solve a problem, find out if anyone on your team or at your company has already solved that problem or has experience with it. Explore wikis and forums to see if solutions have been documented privately or publicly. Too often, we fail to ask questions because we don’t want to appear uninformed or unintelligent. Keep in mind that most people enjoy being asked for advice and would welcome the opportunity to answer a question, especially early in the process when they can help you save time and effort.

Get ready for your evil twin

Accurately replicating the look and sound of a person in the metaverse is often referred to as creating a “digital twin.” Earlier this year, Jensen Huang, the CEO of NVIDIA, gave a keynote address using a cartoonish digital twin. He stated that fidelity will rapidly advance in the coming years, as will the ability of AI engines to autonomously control your avatar so you can be in multiple places at once. Yes, digital twins are coming, which is why we need to prepare for what I call “evil twins”: accurate virtual replicas of the look, sound, and mannerisms of you (or people you know and trust) that are used against you for fraudulent purposes. This form of identity theft will happen in the metaverse, as it is a straightforward amalgamation of current technologies developed for deepfakes, voice emulation, digital twinning, and AI-driven avatars. And the swindlers may get quite elaborate. According to Bell, bad actors could lure you into a fake virtual bank, complete with a fraudulent teller who asks you for your information. Or fraudsters bent on corporate espionage could invite you into a fake meeting in a conference room that looks just like the virtual conference room you always use.

Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree